Sep 4 23:44:27.451598 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 4 23:44:27.451621 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Sep 4 22:21:25 -00 2025
Sep 4 23:44:27.451629 kernel: KASLR enabled
Sep 4 23:44:27.451635 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 4 23:44:27.451643 kernel: printk: bootconsole [pl11] enabled
Sep 4 23:44:27.451648 kernel: efi: EFI v2.7 by EDK II
Sep 4 23:44:27.451655 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Sep 4 23:44:27.451661 kernel: random: crng init done
Sep 4 23:44:27.451667 kernel: secureboot: Secure boot disabled
Sep 4 23:44:27.451673 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:44:27.451679 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 4 23:44:27.451685 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:27.451692 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:27.451699 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 4 23:44:27.451707 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:27.451713 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:27.451720 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:27.451728 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:27.451734 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:27.451741 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:27.451747 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 4 23:44:27.451753 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 23:44:27.451760 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 4 23:44:27.451766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 4 23:44:27.451773 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Sep 4 23:44:27.451779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Sep 4 23:44:27.451785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Sep 4 23:44:27.451791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Sep 4 23:44:27.451799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Sep 4 23:44:27.451806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Sep 4 23:44:27.451812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Sep 4 23:44:27.451818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Sep 4 23:44:27.451824 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Sep 4 23:44:27.451830 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Sep 4 23:44:27.451836 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Sep 4 23:44:27.451843 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Sep 4 23:44:27.451849 kernel: Zone ranges:
Sep 4 23:44:27.451855 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 4 23:44:27.451861 kernel: DMA32 empty
Sep 4 23:44:27.451867 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 4 23:44:27.451878 kernel: Movable zone start for each node
Sep 4 23:44:27.451884 kernel: Early memory node ranges
Sep 4 23:44:27.451891 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 4 23:44:27.451898 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Sep 4 23:44:27.451904 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Sep 4 23:44:27.451913 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Sep 4 23:44:27.451919 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 4 23:44:27.451926 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 4 23:44:27.451932 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 4 23:44:27.451939 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 4 23:44:27.451946 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 4 23:44:27.451953 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 4 23:44:27.451959 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 4 23:44:27.451966 kernel: psci: probing for conduit method from ACPI.
Sep 4 23:44:27.451972 kernel: psci: PSCIv1.1 detected in firmware.
Sep 4 23:44:27.451979 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 23:44:27.451985 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 4 23:44:27.451993 kernel: psci: SMC Calling Convention v1.4
Sep 4 23:44:27.452000 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 4 23:44:27.452006 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 4 23:44:27.452013 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 23:44:27.452019 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 23:44:27.452026 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 23:44:27.452033 kernel: Detected PIPT I-cache on CPU0
Sep 4 23:44:27.452039 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 23:44:27.452046 kernel: CPU features: detected: Hardware dirty bit management
Sep 4 23:44:27.452052 kernel: CPU features: detected: Spectre-BHB
Sep 4 23:44:27.452059 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 4 23:44:27.452067 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 4 23:44:27.452074 kernel: CPU features: detected: ARM erratum 1418040
Sep 4 23:44:27.452081 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 4 23:44:27.452087 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 4 23:44:27.452094 kernel: alternatives: applying boot alternatives
Sep 4 23:44:27.452102 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e
Sep 4 23:44:27.452109 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:44:27.452116 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 23:44:27.452123 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:44:27.452129 kernel: Fallback order for Node 0: 0
Sep 4 23:44:27.452137 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 4 23:44:27.452145 kernel: Policy zone: Normal
Sep 4 23:44:27.452152 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:44:27.452158 kernel: software IO TLB: area num 2.
Sep 4 23:44:27.452165 kernel: software IO TLB: mapped [mem 0x0000000036530000-0x000000003a530000] (64MB)
Sep 4 23:44:27.452172 kernel: Memory: 3983524K/4194160K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 210636K reserved, 0K cma-reserved)
Sep 4 23:44:27.452179 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 23:44:27.452185 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:44:27.452192 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:44:27.452199 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 23:44:27.452206 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:44:27.452213 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:44:27.452221 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:44:27.452228 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 23:44:27.452234 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 23:44:27.452241 kernel: GICv3: 960 SPIs implemented
Sep 4 23:44:27.452248 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 23:44:27.452254 kernel: Root IRQ handler: gic_handle_irq
Sep 4 23:44:27.452261 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 4 23:44:27.452267 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 4 23:44:27.452274 kernel: ITS: No ITS available, not enabling LPIs
Sep 4 23:44:27.452280 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:44:27.452287 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 23:44:27.452294 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 4 23:44:27.452302 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 4 23:44:27.452309 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 4 23:44:27.452316 kernel: Console: colour dummy device 80x25
Sep 4 23:44:27.452323 kernel: printk: console [tty1] enabled
Sep 4 23:44:27.452329 kernel: ACPI: Core revision 20230628
Sep 4 23:44:27.452336 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 4 23:44:27.452343 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:44:27.452350 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:44:27.452357 kernel: landlock: Up and running.
Sep 4 23:44:27.452365 kernel: SELinux: Initializing.
Sep 4 23:44:27.452372 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:44:27.452379 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:44:27.452386 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:44:27.452393 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:44:27.452400 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 4 23:44:27.452407 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 4 23:44:27.452421 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 4 23:44:27.452428 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:44:27.452435 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:44:27.452442 kernel: Remapping and enabling EFI services.
Sep 4 23:44:27.452449 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:44:27.452458 kernel: Detected PIPT I-cache on CPU1
Sep 4 23:44:27.452466 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 4 23:44:27.452473 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 23:44:27.452480 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 4 23:44:27.452487 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 23:44:27.452507 kernel: SMP: Total of 2 processors activated.
Sep 4 23:44:27.452515 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 23:44:27.452522 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 4 23:44:27.452530 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 4 23:44:27.452537 kernel: CPU features: detected: CRC32 instructions
Sep 4 23:44:27.452544 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 4 23:44:27.452552 kernel: CPU features: detected: LSE atomic instructions
Sep 4 23:44:27.452559 kernel: CPU features: detected: Privileged Access Never
Sep 4 23:44:27.452566 kernel: CPU: All CPU(s) started at EL1
Sep 4 23:44:27.452575 kernel: alternatives: applying system-wide alternatives
Sep 4 23:44:27.452582 kernel: devtmpfs: initialized
Sep 4 23:44:27.452590 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:44:27.452597 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 23:44:27.452605 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:44:27.452612 kernel: SMBIOS 3.1.0 present.
Sep 4 23:44:27.452619 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 4 23:44:27.452627 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:44:27.452635 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 23:44:27.452644 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 23:44:27.452651 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 23:44:27.452658 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:44:27.452666 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Sep 4 23:44:27.452673 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:44:27.452680 kernel: cpuidle: using governor menu
Sep 4 23:44:27.452687 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 23:44:27.452695 kernel: ASID allocator initialised with 32768 entries
Sep 4 23:44:27.452702 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:44:27.452711 kernel: Serial: AMBA PL011 UART driver
Sep 4 23:44:27.452718 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 4 23:44:27.452726 kernel: Modules: 0 pages in range for non-PLT usage
Sep 4 23:44:27.452733 kernel: Modules: 509248 pages in range for PLT usage
Sep 4 23:44:27.452740 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:44:27.452748 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:44:27.452755 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 23:44:27.452762 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 23:44:27.452770 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:44:27.452779 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:44:27.452786 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 23:44:27.452793 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 23:44:27.452801 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:44:27.452808 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:44:27.452815 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:44:27.452823 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:44:27.452830 kernel: ACPI: Interpreter enabled
Sep 4 23:44:27.452837 kernel: ACPI: Using GIC for interrupt routing
Sep 4 23:44:27.452846 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 4 23:44:27.452854 kernel: printk: console [ttyAMA0] enabled
Sep 4 23:44:27.452862 kernel: printk: bootconsole [pl11] disabled
Sep 4 23:44:27.452869 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 4 23:44:27.452876 kernel: iommu: Default domain type: Translated
Sep 4 23:44:27.452883 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 23:44:27.452891 kernel: efivars: Registered efivars operations
Sep 4 23:44:27.452898 kernel: vgaarb: loaded
Sep 4 23:44:27.452905 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 23:44:27.452914 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:44:27.452922 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:44:27.452929 kernel: pnp: PnP ACPI init
Sep 4 23:44:27.452936 kernel: pnp: PnP ACPI: found 0 devices
Sep 4 23:44:27.452944 kernel: NET: Registered PF_INET protocol family
Sep 4 23:44:27.452951 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 23:44:27.452959 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 23:44:27.452966 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:44:27.452974 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:44:27.452983 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 23:44:27.452990 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 23:44:27.452998 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:44:27.453005 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:44:27.453012 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:44:27.453020 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:44:27.453027 kernel: kvm [1]: HYP mode not available
Sep 4 23:44:27.453034 kernel: Initialise system trusted keyrings
Sep 4 23:44:27.453041 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 23:44:27.453050 kernel: Key type asymmetric registered
Sep 4 23:44:27.453058 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:44:27.453065 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 23:44:27.453072 kernel: io scheduler mq-deadline registered
Sep 4 23:44:27.453079 kernel: io scheduler kyber registered
Sep 4 23:44:27.453087 kernel: io scheduler bfq registered
Sep 4 23:44:27.453094 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:44:27.453101 kernel: thunder_xcv, ver 1.0
Sep 4 23:44:27.453108 kernel: thunder_bgx, ver 1.0
Sep 4 23:44:27.453117 kernel: nicpf, ver 1.0
Sep 4 23:44:27.453124 kernel: nicvf, ver 1.0
Sep 4 23:44:27.453273 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 23:44:27.453347 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-04T23:44:26 UTC (1757029466)
Sep 4 23:44:27.453357 kernel: efifb: probing for efifb
Sep 4 23:44:27.453365 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 4 23:44:27.453373 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 4 23:44:27.453380 kernel: efifb: scrolling: redraw
Sep 4 23:44:27.453390 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 23:44:27.453398 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 23:44:27.453405 kernel: fb0: EFI VGA frame buffer device
Sep 4 23:44:27.453412 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 4 23:44:27.453420 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 23:44:27.453427 kernel: No ACPI PMU IRQ for CPU0
Sep 4 23:44:27.453434 kernel: No ACPI PMU IRQ for CPU1
Sep 4 23:44:27.453441 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 4 23:44:27.453449 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 23:44:27.453458 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 23:44:27.453465 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:44:27.453472 kernel: Segment Routing with IPv6
Sep 4 23:44:27.453479 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:44:27.453487 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:44:27.453505 kernel: Key type dns_resolver registered
Sep 4 23:44:27.453512 kernel: registered taskstats version 1
Sep 4 23:44:27.453520 kernel: Loading compiled-in X.509 certificates
Sep 4 23:44:27.453527 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 83306acb9da7bc81cc6aa49a1c622f78672939c0'
Sep 4 23:44:27.453536 kernel: Key type .fscrypt registered
Sep 4 23:44:27.453543 kernel: Key type fscrypt-provisioning registered
Sep 4 23:44:27.453551 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 23:44:27.453558 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:44:27.453565 kernel: ima: No architecture policies found
Sep 4 23:44:27.453573 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 23:44:27.453580 kernel: clk: Disabling unused clocks
Sep 4 23:44:27.453587 kernel: Freeing unused kernel memory: 38400K
Sep 4 23:44:27.453594 kernel: Run /init as init process
Sep 4 23:44:27.453603 kernel: with arguments:
Sep 4 23:44:27.453610 kernel: /init
Sep 4 23:44:27.453617 kernel: with environment:
Sep 4 23:44:27.453624 kernel: HOME=/
Sep 4 23:44:27.453631 kernel: TERM=linux
Sep 4 23:44:27.453638 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:44:27.453647 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:44:27.453657 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:44:27.453667 systemd[1]: Detected virtualization microsoft.
Sep 4 23:44:27.453675 systemd[1]: Detected architecture arm64.
Sep 4 23:44:27.453683 systemd[1]: Running in initrd.
Sep 4 23:44:27.453690 systemd[1]: No hostname configured, using default hostname.
Sep 4 23:44:27.453698 systemd[1]: Hostname set to .
Sep 4 23:44:27.453706 systemd[1]: Initializing machine ID from random generator.
Sep 4 23:44:27.453714 systemd[1]: Queued start job for default target initrd.target.
Sep 4 23:44:27.453722 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:44:27.453732 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:44:27.453741 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 23:44:27.453749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:44:27.453757 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 23:44:27.453765 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 23:44:27.453775 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 23:44:27.453784 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 23:44:27.453793 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:44:27.453800 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:44:27.453808 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:44:27.453817 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:44:27.453824 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:44:27.453832 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:44:27.453840 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:44:27.453848 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:44:27.453857 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 23:44:27.453865 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 23:44:27.453873 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:27.453881 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:27.453889 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:27.453897 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:44:27.453905 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 23:44:27.453913 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:44:27.453922 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 23:44:27.453930 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 23:44:27.453938 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:44:27.453963 systemd-journald[218]: Collecting audit messages is disabled.
Sep 4 23:44:27.453985 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:44:27.453995 systemd-journald[218]: Journal started
Sep 4 23:44:27.454014 systemd-journald[218]: Runtime Journal (/run/log/journal/c24b89d2d5b64f9dba8b4ae45e3272c9) is 8M, max 78.5M, 70.5M free.
Sep 4 23:44:27.476124 systemd-modules-load[220]: Inserted module 'overlay'
Sep 4 23:44:27.483923 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:27.513239 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:44:27.514330 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 23:44:27.553637 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 23:44:27.553665 kernel: Bridge firewalling registered
Sep 4 23:44:27.543094 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:44:27.547515 systemd-modules-load[220]: Inserted module 'br_netfilter'
Sep 4 23:44:27.558527 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 23:44:27.571745 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:27.593155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:27.624679 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:27.636719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:44:27.664709 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:44:27.692806 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:44:27.708431 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:27.722780 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:27.738235 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:44:27.754923 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:27.788656 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 23:44:27.802699 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:27.814649 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:44:27.841302 dracut-cmdline[253]: dracut-dracut-053
Sep 4 23:44:27.841302 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e
Sep 4 23:44:27.847209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:27.930410 systemd-resolved[256]: Positive Trust Anchors:
Sep 4 23:44:27.930436 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:44:27.930468 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:44:27.932942 systemd-resolved[256]: Defaulting to hostname 'linux'.
Sep 4 23:44:27.936286 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:44:27.957031 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:44:28.058516 kernel: SCSI subsystem initialized
Sep 4 23:44:28.066519 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 23:44:28.078528 kernel: iscsi: registered transport (tcp)
Sep 4 23:44:28.098886 kernel: iscsi: registered transport (qla4xxx)
Sep 4 23:44:28.098952 kernel: QLogic iSCSI HBA Driver
Sep 4 23:44:28.142048 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:44:28.159778 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 23:44:28.199655 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 23:44:28.199715 kernel: device-mapper: uevent: version 1.0.3
Sep 4 23:44:28.208200 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 23:44:28.258525 kernel: raid6: neonx8 gen() 15742 MB/s
Sep 4 23:44:28.279509 kernel: raid6: neonx4 gen() 15827 MB/s
Sep 4 23:44:28.300505 kernel: raid6: neonx2 gen() 13287 MB/s
Sep 4 23:44:28.322506 kernel: raid6: neonx1 gen() 10555 MB/s
Sep 4 23:44:28.343504 kernel: raid6: int64x8 gen() 6795 MB/s
Sep 4 23:44:28.365505 kernel: raid6: int64x4 gen() 7353 MB/s
Sep 4 23:44:28.386506 kernel: raid6: int64x2 gen() 6114 MB/s
Sep 4 23:44:28.412565 kernel: raid6: int64x1 gen() 5058 MB/s
Sep 4 23:44:28.412577 kernel: raid6: using algorithm neonx4 gen() 15827 MB/s
Sep 4 23:44:28.439381 kernel: raid6: .... xor() 12442 MB/s, rmw enabled
Sep 4 23:44:28.439426 kernel: raid6: using neon recovery algorithm
Sep 4 23:44:28.452227 kernel: xor: measuring software checksum speed
Sep 4 23:44:28.452244 kernel: 8regs : 21607 MB/sec
Sep 4 23:44:28.460412 kernel: 32regs : 20587 MB/sec
Sep 4 23:44:28.460427 kernel: arm64_neon : 27870 MB/sec
Sep 4 23:44:28.466692 kernel: xor: using function: arm64_neon (27870 MB/sec)
Sep 4 23:44:28.518511 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 23:44:28.529184 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:44:28.553652 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:28.580235 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Sep 4 23:44:28.587338 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:28.607741 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 23:44:28.626820 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation
Sep 4 23:44:28.655581 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:44:28.679767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:44:28.725534 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:44:28.749746 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 23:44:28.790697 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:44:28.800715 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:44:28.821049 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:44:28.842854 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:44:28.878302 kernel: hv_vmbus: Vmbus version:5.3
Sep 4 23:44:28.881819 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 23:44:28.927620 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 4 23:44:28.927645 kernel: hv_vmbus: registering driver hv_netvsc
Sep 4 23:44:28.916947 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:44:28.992848 kernel: hv_vmbus: registering driver hv_storvsc
Sep 4 23:44:28.992872 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 4 23:44:28.992892 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 4 23:44:28.992902 kernel: scsi host0: storvsc_host_t
Sep 4 23:44:28.993073 kernel: hv_vmbus: registering driver hid_hyperv
Sep 4 23:44:28.993084 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 4 23:44:28.993094 kernel: scsi host1: storvsc_host_t
Sep 4 23:44:28.945932 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:44:29.073602 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 4 23:44:29.073629 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 4 23:44:29.073788 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 4 23:44:29.073878 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 4 23:44:29.073972 kernel: hv_netvsc 002248ba-ce4b-0022-48ba-ce4b002248ba eth0: VF slot 1 added
Sep 4 23:44:28.946099 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:29.035759 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:29.062237 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:29.133415 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 4 23:44:29.133651 kernel: hv_vmbus: registering driver hv_pci
Sep 4 23:44:29.133663 kernel: PTP clock support registered
Sep 4 23:44:29.133673 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 23:44:29.133682 kernel: hv_pci 670202d1-a64c-4e60-a827-6a082f6745ae: PCI VMBus probing: Using version 0x10004
Sep 4 23:44:29.062471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:29.098641 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:29.141574 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:29.189059 kernel: hv_utils: Registering HyperV Utility Driver
Sep 4 23:44:29.189093 kernel: hv_vmbus: registering driver hv_utils
Sep 4 23:44:29.177624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:29.214233 kernel: hv_pci 670202d1-a64c-4e60-a827-6a082f6745ae: PCI host bridge to bus a64c:00
Sep 4 23:44:29.229093 kernel: hv_utils: Heartbeat IC version 3.0
Sep 4 23:44:29.229112 kernel: pci_bus a64c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 4 23:44:29.229250 kernel: hv_utils: Shutdown IC version 3.2
Sep 4 23:44:29.205186 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:29.265023 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 4 23:44:29.265162 kernel: pci_bus a64c:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 4 23:44:29.265256 kernel: hv_utils: TimeSync IC version 4.0
Sep 4 23:44:29.265266 kernel: pci a64c:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 4 23:44:29.205479 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:29.242953 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:29.264555 systemd-resolved[256]: Clock change detected. Flushing caches.
Sep 4 23:44:29.335874 kernel: pci a64c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 4 23:44:29.335926 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 4 23:44:29.336092 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 4 23:44:29.336182 kernel: pci a64c:00:02.0: enabling Extended Tags
Sep 4 23:44:29.336198 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 4 23:44:29.302798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:29.350067 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 4 23:44:29.350266 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 4 23:44:29.365559 kernel: pci a64c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a64c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 4 23:44:29.376070 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:44:29.366330 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:29.423066 kernel: pci_bus a64c:00: busn_res: [bus 00-ff] end is updated to 00
Sep 4 23:44:29.423262 kernel: pci a64c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 4 23:44:29.423394 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 4 23:44:29.409408 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:29.443482 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:29.473738 kernel: mlx5_core a64c:00:02.0: enabling device (0000 -> 0002)
Sep 4 23:44:29.481550 kernel: mlx5_core a64c:00:02.0: firmware version: 16.31.2424
Sep 4 23:44:29.773756 kernel: hv_netvsc 002248ba-ce4b-0022-48ba-ce4b002248ba eth0: VF registering: eth1
Sep 4 23:44:29.773967 kernel: mlx5_core a64c:00:02.0 eth1: joined to eth0
Sep 4 23:44:29.784573 kernel: mlx5_core a64c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 4 23:44:29.797568 kernel: mlx5_core a64c:00:02.0 enP42572s1: renamed from eth1
Sep 4 23:44:30.223326 kernel: BTRFS: device fsid 74a5374f-334b-4c07-8952-82f9f0ad22ae devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (490)
Sep 4 23:44:30.237561 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (500)
Sep 4 23:44:30.255247 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 4 23:44:30.280489 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 4 23:44:30.307783 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 4 23:44:30.326678 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 4 23:44:30.345638 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 4 23:44:30.372782 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 23:44:30.409859 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:44:31.429565 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:44:31.429767 disk-uuid[608]: The operation has completed successfully.
Sep 4 23:44:31.510488 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 23:44:31.512561 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 23:44:31.567691 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 23:44:31.587217 sh[694]: Success
Sep 4 23:44:31.622886 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 23:44:32.077365 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 23:44:32.100762 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 23:44:32.108584 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 23:44:32.155224 kernel: BTRFS info (device dm-0): first mount of filesystem 74a5374f-334b-4c07-8952-82f9f0ad22ae
Sep 4 23:44:32.155274 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:32.162971 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 23:44:32.168367 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 23:44:32.172896 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 23:44:32.793882 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 23:44:32.800580 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 23:44:32.819813 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 23:44:32.829776 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 23:44:32.889993 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:32.890059 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:32.896062 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:44:32.949497 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:44:32.974713 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:44:32.999443 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:44:33.010621 kernel: BTRFS info (device sda6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:33.016822 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 23:44:33.037726 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 23:44:33.057872 systemd-networkd[869]: lo: Link UP
Sep 4 23:44:33.057885 systemd-networkd[869]: lo: Gained carrier
Sep 4 23:44:33.059591 systemd-networkd[869]: Enumeration completed
Sep 4 23:44:33.062619 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:44:33.070853 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:33.070857 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:44:33.081124 systemd[1]: Reached target network.target - Network.
Sep 4 23:44:33.179556 kernel: mlx5_core a64c:00:02.0 enP42572s1: Link up
Sep 4 23:44:33.264561 kernel: hv_netvsc 002248ba-ce4b-0022-48ba-ce4b002248ba eth0: Data path switched to VF: enP42572s1
Sep 4 23:44:33.264814 systemd-networkd[869]: enP42572s1: Link UP
Sep 4 23:44:33.264905 systemd-networkd[869]: eth0: Link UP
Sep 4 23:44:33.265026 systemd-networkd[869]: eth0: Gained carrier
Sep 4 23:44:33.265035 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:33.296031 systemd-networkd[869]: enP42572s1: Gained carrier
Sep 4 23:44:33.307578 systemd-networkd[869]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 4 23:44:34.225095 ignition[876]: Ignition 2.20.0
Sep 4 23:44:34.225107 ignition[876]: Stage: fetch-offline
Sep 4 23:44:34.231361 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:44:34.225144 ignition[876]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:34.225152 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:34.225256 ignition[876]: parsed url from cmdline: ""
Sep 4 23:44:34.225260 ignition[876]: no config URL provided
Sep 4 23:44:34.225264 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:44:34.268860 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 23:44:34.225271 ignition[876]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:44:34.225277 ignition[876]: failed to fetch config: resource requires networking
Sep 4 23:44:34.225454 ignition[876]: Ignition finished successfully
Sep 4 23:44:34.293910 ignition[885]: Ignition 2.20.0
Sep 4 23:44:34.293918 ignition[885]: Stage: fetch
Sep 4 23:44:34.294152 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:34.294165 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:34.294277 ignition[885]: parsed url from cmdline: ""
Sep 4 23:44:34.294281 ignition[885]: no config URL provided
Sep 4 23:44:34.294285 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:44:34.294293 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:44:34.294325 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 4 23:44:34.416079 ignition[885]: GET result: OK
Sep 4 23:44:34.416143 ignition[885]: config has been read from IMDS userdata
Sep 4 23:44:34.420340 unknown[885]: fetched base config from "system"
Sep 4 23:44:34.416187 ignition[885]: parsing config with SHA512: 2e33c3d6c41b841d07de36eaeecc7e09b86a4c5c33a4fdb2d7d1b5779e867c0ae552902a09c86a70f41257d905c20793430cbc9571a5a8b0766a2cc058035909
Sep 4 23:44:34.420347 unknown[885]: fetched base config from "system"
Sep 4 23:44:34.420734 ignition[885]: fetch: fetch complete
Sep 4 23:44:34.420352 unknown[885]: fetched user config from "azure"
Sep 4 23:44:34.420739 ignition[885]: fetch: fetch passed
Sep 4 23:44:34.433848 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 23:44:34.420784 ignition[885]: Ignition finished successfully
Sep 4 23:44:34.462835 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 23:44:34.500384 ignition[891]: Ignition 2.20.0
Sep 4 23:44:34.500397 ignition[891]: Stage: kargs
Sep 4 23:44:34.506449 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 23:44:34.500620 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:34.500630 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:34.501654 ignition[891]: kargs: kargs passed
Sep 4 23:44:34.501714 ignition[891]: Ignition finished successfully
Sep 4 23:44:34.541772 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 23:44:34.566611 ignition[898]: Ignition 2.20.0
Sep 4 23:44:34.566622 ignition[898]: Stage: disks
Sep 4 23:44:34.571522 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 23:44:34.566850 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:34.579328 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 23:44:34.566863 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:34.590608 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 23:44:34.567899 ignition[898]: disks: disks passed
Sep 4 23:44:34.605114 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:44:34.567946 ignition[898]: Ignition finished successfully
Sep 4 23:44:34.617275 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:44:34.633275 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:44:34.660770 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 23:44:34.760724 systemd-fsck[906]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 4 23:44:34.772943 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 23:44:34.793812 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 23:44:34.861556 kernel: EXT4-fs (sda9): mounted filesystem 22b06923-f972-4753-b92e-d6b25ef15ca3 r/w with ordered data mode. Quota mode: none.
Sep 4 23:44:34.862670 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 23:44:34.868516 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:44:34.925631 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:44:34.951751 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 23:44:34.968384 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (917)
Sep 4 23:44:34.966777 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 4 23:44:35.003028 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:35.003077 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:35.002131 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 23:44:35.027886 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:44:35.002187 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:44:35.015906 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 23:44:35.047841 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 23:44:35.071609 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:44:35.072474 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:44:35.149687 systemd-networkd[869]: eth0: Gained IPv6LL
Sep 4 23:44:35.723606 coreos-metadata[919]: Sep 04 23:44:35.723 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 4 23:44:35.734880 coreos-metadata[919]: Sep 04 23:44:35.731 INFO Fetch successful
Sep 4 23:44:35.734880 coreos-metadata[919]: Sep 04 23:44:35.731 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 4 23:44:35.755441 coreos-metadata[919]: Sep 04 23:44:35.744 INFO Fetch successful
Sep 4 23:44:35.762135 coreos-metadata[919]: Sep 04 23:44:35.761 INFO wrote hostname ci-4230.2.2-n-c33c3b40b5 to /sysroot/etc/hostname
Sep 4 23:44:35.762732 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 23:44:36.232495 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 23:44:36.346849 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Sep 4 23:44:36.371498 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 23:44:36.385276 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 23:44:37.663195 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 23:44:37.679705 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 23:44:37.693849 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 23:44:37.712561 kernel: BTRFS info (device sda6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:37.706774 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 23:44:37.739581 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 23:44:37.753102 ignition[1037]: INFO : Ignition 2.20.0
Sep 4 23:44:37.757544 ignition[1037]: INFO : Stage: mount
Sep 4 23:44:37.757544 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:37.757544 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:37.774239 ignition[1037]: INFO : mount: mount passed
Sep 4 23:44:37.774239 ignition[1037]: INFO : Ignition finished successfully
Sep 4 23:44:37.768590 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 23:44:37.791769 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 23:44:37.808484 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:44:37.849560 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1047)
Sep 4 23:44:37.865555 kernel: BTRFS info (device sda6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:37.865621 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:37.870168 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:44:37.880548 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:44:37.882808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:44:37.912382 ignition[1064]: INFO : Ignition 2.20.0
Sep 4 23:44:37.912382 ignition[1064]: INFO : Stage: files
Sep 4 23:44:37.922104 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:37.922104 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:37.922104 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 23:44:37.949436 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 23:44:37.949436 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 23:44:38.017072 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 23:44:38.027647 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 23:44:38.027647 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 23:44:38.017572 unknown[1064]: wrote ssh authorized keys file for user: core
Sep 4 23:44:38.052274 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 4 23:44:38.065865 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 4 23:44:38.098754 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 23:44:38.420516 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 4 23:44:38.420516 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:44:38.445572 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 4 23:44:38.617914 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 23:44:38.699257 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:44:38.699257 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:44:38.724387 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 4 23:44:39.167613 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 23:44:39.376104 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:44:39.376104 ignition[1064]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 23:44:39.443463 ignition[1064]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:44:39.456896 ignition[1064]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:44:39.456896 ignition[1064]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 23:44:39.456896 ignition[1064]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 23:44:39.456896 ignition[1064]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 23:44:39.456896 ignition[1064]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:44:39.456896 ignition[1064]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:44:39.456896 ignition[1064]: INFO : files: files passed
Sep 4 23:44:39.456896 ignition[1064]: INFO : Ignition finished successfully
Sep 4 23:44:39.457848 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 23:44:39.504825 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 23:44:39.534833 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 23:44:39.595398 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:44:39.595398 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:44:39.559585 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 23:44:39.628857 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:44:39.559674 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 23:44:39.665891 kernel: mlx5_core a64c:00:02.0: poll_health:835:(pid 218): device's health compromised - reached miss count
Sep 4 23:44:39.571003 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:44:39.588030 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 23:44:39.629793 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 23:44:39.684414 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 23:44:39.684616 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 23:44:39.706301 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 23:44:39.721236 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:44:39.735143 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:44:39.754780 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:44:39.777478 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:44:39.796791 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:44:39.825523 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 23:44:39.830019 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 23:44:39.844722 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:44:39.858596 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:44:39.866406 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:44:39.881138 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:44:39.881218 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:44:39.901224 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 23:44:39.912968 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 23:44:39.926836 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 23:44:39.940947 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:44:39.953023 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 23:44:39.965339 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 23:44:39.977445 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:44:39.992803 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 23:44:40.005991 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 23:44:40.018760 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 23:44:40.030433 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 23:44:40.030524 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:44:40.047214 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:44:40.059838 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:44:40.074449 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 23:44:40.074493 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:44:40.091601 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 23:44:40.091680 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:44:40.106436 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 23:44:40.106501 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:44:40.119924 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 23:44:40.119984 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 23:44:40.135838 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 4 23:44:40.237577 ignition[1118]: INFO : Ignition 2.20.0
Sep 4 23:44:40.237577 ignition[1118]: INFO : Stage: umount
Sep 4 23:44:40.237577 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:40.237577 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 23:44:40.237577 ignition[1118]: INFO : umount: umount passed
Sep 4 23:44:40.237577 ignition[1118]: INFO : Ignition finished successfully
Sep 4 23:44:40.135894 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 23:44:40.174728 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 23:44:40.188680 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 23:44:40.188770 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:44:40.204731 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 23:44:40.214133 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 23:44:40.214273 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:44:40.225820 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 23:44:40.225903 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:44:40.249146 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 23:44:40.249256 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 23:44:40.263460 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 23:44:40.263524 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 23:44:40.276666 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 23:44:40.276720 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 23:44:40.291971 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 23:44:40.292023 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 23:44:40.304036 systemd[1]: Stopped target network.target - Network.
Sep 4 23:44:40.310114 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 23:44:40.310189 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:44:40.323896 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 23:44:40.338298 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 23:44:40.350564 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:44:40.360513 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 23:44:40.373630 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 23:44:40.385785 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 23:44:40.385839 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:44:40.399500 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 23:44:40.399574 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:44:40.414263 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 23:44:40.414342 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 23:44:40.426125 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 23:44:40.426184 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 23:44:40.438696 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 23:44:40.450588 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:40.465816 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 23:44:40.466387 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 23:44:40.466481 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 23:44:40.787952 kernel: hv_netvsc 002248ba-ce4b-0022-48ba-ce4b002248ba eth0: Data path switched from VF: enP42572s1
Sep 4 23:44:40.485718 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 23:44:40.485994 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 23:44:40.486207 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 23:44:40.503076 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 23:44:40.503310 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 23:44:40.503519 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 23:44:40.517164 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 23:44:40.517246 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:40.529470 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 23:44:40.529559 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 23:44:40.558733 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 23:44:40.565854 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 23:44:40.565927 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:44:40.574333 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:44:40.574389 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:40.594235 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 23:44:40.594290 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:40.602127 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 23:44:40.602178 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:40.626508 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:40.636192 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:44:40.636270 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:44:40.663012 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 23:44:40.663177 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:40.679039 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 23:44:40.679121 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:40.692876 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 23:44:40.692918 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:40.706269 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 23:44:40.706341 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:44:40.726263 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 23:44:40.726328 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:44:40.747577 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:44:40.747642 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:40.804791 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 23:44:40.820908 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 23:44:40.820988 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:40.842845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:40.842917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:40.856448 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 23:44:40.856513 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:44:40.856915 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 23:44:40.857033 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 23:44:41.180587 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Sep 4 23:44:40.933018 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 23:44:40.933159 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 23:44:40.946391 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 23:44:40.986812 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 23:44:41.035071 systemd[1]: Switching root.
Sep 4 23:44:41.213249 systemd-journald[218]: Journal stopped
Sep 4 23:44:51.161183 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 23:44:51.161221 kernel: SELinux: policy capability open_perms=1
Sep 4 23:44:51.161232 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 23:44:51.161240 kernel: SELinux: policy capability always_check_network=0
Sep 4 23:44:51.161252 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 23:44:51.161262 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 23:44:51.161271 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 23:44:51.161279 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 23:44:51.161287 kernel: audit: type=1403 audit(1757029482.684:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 23:44:51.161297 systemd[1]: Successfully loaded SELinux policy in 228.081ms.
Sep 4 23:44:51.161309 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.548ms.
Sep 4 23:44:51.161319 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:44:51.161328 systemd[1]: Detected virtualization microsoft.
Sep 4 23:44:51.161337 systemd[1]: Detected architecture arm64.
Sep 4 23:44:51.161346 systemd[1]: Detected first boot.
Sep 4 23:44:51.161357 systemd[1]: Hostname set to .
Sep 4 23:44:51.161366 systemd[1]: Initializing machine ID from random generator.
Sep 4 23:44:51.161375 zram_generator::config[1162]: No configuration found.
Sep 4 23:44:51.161385 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 23:44:51.161394 systemd[1]: Populated /etc with preset unit settings.
Sep 4 23:44:51.161403 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 23:44:51.161413 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 23:44:51.161423 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 23:44:51.161433 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:44:51.161442 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 23:44:51.161452 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 23:44:51.161462 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 23:44:51.161472 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 23:44:51.161481 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 23:44:51.161492 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 23:44:51.161501 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 23:44:51.161510 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 23:44:51.161519 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:44:51.161529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:44:51.161575 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 23:44:51.161586 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 23:44:51.161596 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 23:44:51.161615 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:44:51.161626 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 4 23:44:51.161636 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:44:51.161648 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 23:44:51.161658 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 23:44:51.161668 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:44:51.161677 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 23:44:51.161687 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:44:51.161698 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:44:51.161707 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:44:51.161718 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:44:51.161727 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 23:44:51.161737 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 23:44:51.161746 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 23:44:51.161758 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:51.161767 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:51.161777 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:51.161786 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 23:44:51.161795 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 23:44:51.161805 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 23:44:51.161814 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 23:44:51.161825 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 23:44:51.161834 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 23:44:51.161844 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 23:44:51.161853 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 23:44:51.161863 systemd[1]: Reached target machines.target - Containers.
Sep 4 23:44:51.161876 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 23:44:51.161886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:44:51.161896 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:44:51.161907 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 23:44:51.161917 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:44:51.161928 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:44:51.161937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:44:51.161947 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 23:44:51.161956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:44:51.161966 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 23:44:51.161975 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 23:44:51.161987 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 23:44:51.161996 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 23:44:51.162005 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 23:44:51.162015 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:44:51.162025 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:44:51.162034 kernel: fuse: init (API version 7.39)
Sep 4 23:44:51.162043 kernel: loop: module loaded
Sep 4 23:44:51.162051 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:44:51.162061 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:44:51.162072 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 23:44:51.162081 kernel: ACPI: bus type drm_connector registered
Sep 4 23:44:51.162090 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 23:44:51.162100 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:44:51.162109 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 23:44:51.162119 systemd[1]: Stopped verity-setup.service.
Sep 4 23:44:51.162164 systemd-journald[1259]: Collecting audit messages is disabled.
Sep 4 23:44:51.162189 systemd-journald[1259]: Journal started
Sep 4 23:44:51.162210 systemd-journald[1259]: Runtime Journal (/run/log/journal/03aa17d77417439693f99a2d30038280) is 8M, max 78.5M, 70.5M free.
Sep 4 23:44:49.904391 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 23:44:49.909476 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 4 23:44:49.909993 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 23:44:49.910433 systemd[1]: systemd-journald.service: Consumed 4.025s CPU time.
Sep 4 23:44:51.178151 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:44:51.179376 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 23:44:51.187241 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 23:44:51.196806 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 23:44:51.204430 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 23:44:51.213789 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 23:44:51.224123 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 23:44:51.231739 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 23:44:51.240903 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:44:51.250890 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 23:44:51.251072 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 23:44:51.261585 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:44:51.261762 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:44:51.271340 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:44:51.271521 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:44:51.280137 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:44:51.280307 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:44:51.290417 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 23:44:51.290594 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 23:44:51.299244 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:44:51.299432 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:44:51.308594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:51.318603 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:44:51.328797 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 23:44:51.339221 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 23:44:51.349618 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:44:51.368822 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:44:51.381722 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 23:44:51.394842 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 23:44:51.402969 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 23:44:51.403014 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:44:51.411567 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 23:44:51.421689 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 23:44:51.430772 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 23:44:51.439902 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:44:51.441182 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 23:44:51.450155 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 23:44:51.458905 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:44:51.461284 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 23:44:51.468712 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:44:51.470150 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:44:51.479756 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 23:44:51.491611 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 23:44:51.504644 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 23:44:51.515988 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 23:44:51.528922 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 23:44:51.539724 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 23:44:51.558837 udevadm[1306]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 23:44:51.565849 systemd-journald[1259]: Time spent on flushing to /var/log/journal/03aa17d77417439693f99a2d30038280 is 13.693ms for 918 entries.
Sep 4 23:44:51.565849 systemd-journald[1259]: System Journal (/var/log/journal/03aa17d77417439693f99a2d30038280) is 8M, max 2.6G, 2.6G free.
Sep 4 23:44:51.667111 kernel: loop0: detected capacity change from 0 to 207008
Sep 4 23:44:51.667175 systemd-journald[1259]: Received client request to flush runtime journal.
Sep 4 23:44:51.667226 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 23:44:51.561968 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 23:44:51.577517 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 23:44:51.591705 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 23:44:51.668730 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 23:44:51.712926 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:51.734606 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 23:44:51.735947 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 23:44:51.764570 kernel: loop1: detected capacity change from 0 to 113512
Sep 4 23:44:52.382559 kernel: loop2: detected capacity change from 0 to 28720
Sep 4 23:44:52.478316 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 23:44:52.490742 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:44:52.729416 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Sep 4 23:44:52.729436 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Sep 4 23:44:52.734015 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:53.185566 kernel: loop3: detected capacity change from 0 to 123192
Sep 4 23:44:53.729462 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 23:44:53.745795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:53.771476 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Sep 4 23:44:53.883569 kernel: loop4: detected capacity change from 0 to 207008
Sep 4 23:44:53.911559 kernel: loop5: detected capacity change from 0 to 113512
Sep 4 23:44:53.928553 kernel: loop6: detected capacity change from 0 to 28720
Sep 4 23:44:53.945553 kernel: loop7: detected capacity change from 0 to 123192
Sep 4 23:44:53.956046 (sd-merge)[1330]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 4 23:44:53.956502 (sd-merge)[1330]: Merged extensions into '/usr'.
Sep 4 23:44:53.960397 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 23:44:53.960586 systemd[1]: Reloading...
Sep 4 23:44:54.029943 zram_generator::config[1360]: No configuration found.
Sep 4 23:44:54.181902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:44:54.252123 systemd[1]: Reloading finished in 291 ms.
Sep 4 23:44:54.271582 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 23:44:54.288977 systemd[1]: Starting ensure-sysext.service...
Sep 4 23:44:54.299051 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:44:54.351616 systemd[1]: Reload requested from client PID 1413 ('systemctl') (unit ensure-sysext.service)...
Sep 4 23:44:54.351782 systemd[1]: Reloading...
Sep 4 23:44:54.378749 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 23:44:54.378967 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 23:44:54.380712 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 23:44:54.380989 systemd-tmpfiles[1414]: ACLs are not supported, ignoring.
Sep 4 23:44:54.381043 systemd-tmpfiles[1414]: ACLs are not supported, ignoring.
Sep 4 23:44:54.420830 zram_generator::config[1444]: No configuration found.
Sep 4 23:44:54.460852 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:44:54.460864 systemd-tmpfiles[1414]: Skipping /boot
Sep 4 23:44:54.470428 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:44:54.470645 systemd-tmpfiles[1414]: Skipping /boot
Sep 4 23:44:54.546266 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:44:54.620655 systemd[1]: Reloading finished in 268 ms.
Sep 4 23:44:54.647966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:54.668808 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:44:54.700315 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 23:44:54.719179 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 23:44:54.737023 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:54.753509 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 23:44:54.765419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:54.785551 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 4 23:44:54.787985 systemd[1]: Finished ensure-sysext.service.
Sep 4 23:44:54.797974 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 4 23:44:54.806090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:44:54.812947 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:44:54.830772 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:44:54.845744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:44:54.862273 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:44:54.872290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:44:54.872348 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:44:54.875131 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:44:54.889266 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 23:44:54.900296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:44:54.900505 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:44:54.910298 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:44:54.910477 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:44:54.921966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:44:54.922147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:44:54.932171 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:44:54.932605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:44:54.951164 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 23:44:54.965333 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:44:54.965416 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:44:54.976192 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 23:44:55.003821 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Sep 4 23:44:55.049250 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 23:44:55.053948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:55.063988 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 23:44:55.092709 kernel: hv_vmbus: registering driver hyperv_fb
Sep 4 23:44:55.092815 kernel: hv_vmbus: registering driver hv_balloon
Sep 4 23:44:55.092844 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 4 23:44:55.096575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:55.096982 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:55.108198 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 4 23:44:55.108280 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 4 23:44:55.108295 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 4 23:44:55.125762 augenrules[1581]: No rules
Sep 4 23:44:55.140962 kernel: Console: switching to colour dummy device 80x25
Sep 4 23:44:55.140988 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 23:44:55.140554 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:44:55.140777 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:44:55.163939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:55.175873 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:55.178790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:55.192223 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 23:44:55.218735 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:55.242639 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1519)
Sep 4 23:44:55.335433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 4 23:44:55.347669 systemd-networkd[1548]: lo: Link UP
Sep 4 23:44:55.347678 systemd-networkd[1548]: lo: Gained carrier
Sep 4 23:44:55.349878 systemd-networkd[1548]: Enumeration completed
Sep 4 23:44:55.357817 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 23:44:55.358402 systemd-networkd[1548]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:55.358411 systemd-networkd[1548]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:44:55.365275 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:44:55.374178 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 23:44:55.384801 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 23:44:55.439562 kernel: mlx5_core a64c:00:02.0 enP42572s1: Link up
Sep 4 23:44:55.486634 kernel: hv_netvsc 002248ba-ce4b-0022-48ba-ce4b002248ba eth0: Data path switched to VF: enP42572s1
Sep 4 23:44:55.488365 systemd-networkd[1548]: enP42572s1: Link UP
Sep 4 23:44:55.488514 systemd-networkd[1548]: eth0: Link UP
Sep 4 23:44:55.488517 systemd-networkd[1548]: eth0: Gained carrier
Sep 4 23:44:55.488552 systemd-networkd[1548]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:55.490708 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 23:44:55.498729 systemd-resolved[1531]: Positive Trust Anchors: Sep 4 23:44:55.499179 systemd-resolved[1531]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:44:55.499269 systemd-resolved[1531]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:44:55.499594 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 23:44:55.507762 systemd-networkd[1548]: enP42572s1: Gained carrier Sep 4 23:44:55.512676 systemd-networkd[1548]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 4 23:44:55.522636 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 23:44:55.537743 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 23:44:55.584842 systemd-resolved[1531]: Using system hostname 'ci-4230.2.2-n-c33c3b40b5'. Sep 4 23:44:55.587894 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 23:44:55.595119 systemd[1]: Reached target network.target - Network. Sep 4 23:44:55.600885 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:44:55.625564 lvm[1667]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:44:55.665184 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Sep 4 23:44:55.677048 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:44:55.688731 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 23:44:55.701216 lvm[1669]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:44:55.729347 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 23:44:56.781659 systemd-networkd[1548]: eth0: Gained IPv6LL Sep 4 23:44:56.783913 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 23:44:56.793122 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 23:44:57.291225 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:44:57.757405 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 23:44:57.766396 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 23:45:02.637568 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 23:45:02.652248 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 23:45:02.667818 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 23:45:02.694085 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 23:45:02.702052 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:45:02.708634 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 23:45:02.716316 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Sep 4 23:45:02.724845 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 23:45:02.733844 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 23:45:02.742617 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 23:45:02.751562 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 23:45:02.751598 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:45:02.757473 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:45:02.789236 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 23:45:02.798037 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 23:45:02.805843 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 23:45:02.813403 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 23:45:02.820918 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 23:45:02.836273 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 23:45:02.843473 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 23:45:02.851246 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 23:45:02.857990 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:45:02.863677 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:45:02.869844 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:45:02.869871 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:45:02.898655 systemd[1]: Starting chronyd.service - NTP client/server... 
Sep 4 23:45:02.906280 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 23:45:02.916740 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 23:45:02.932712 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 23:45:02.942197 (chronyd)[1681]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 4 23:45:02.942760 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 23:45:02.950785 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 23:45:02.957139 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 23:45:02.957262 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Sep 4 23:45:02.959769 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 4 23:45:02.967068 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 4 23:45:02.970772 jq[1688]: false Sep 4 23:45:02.972717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:02.980855 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 23:45:02.988693 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 23:45:02.997724 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 23:45:02.999908 KVP[1690]: KVP starting; pid is:1690 Sep 4 23:45:03.007998 kernel: hv_utils: KVP IC version 4.0 Sep 4 23:45:03.007599 KVP[1690]: KVP LIC Version: 3.1 Sep 4 23:45:03.009745 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Sep 4 23:45:03.011657 chronyd[1698]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 4 23:45:03.020557 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 23:45:03.031750 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 23:45:03.040965 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 23:45:03.041515 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 23:45:03.047092 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 23:45:03.054567 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 23:45:03.058087 extend-filesystems[1689]: Found loop4 Sep 4 23:45:03.080127 extend-filesystems[1689]: Found loop5 Sep 4 23:45:03.080127 extend-filesystems[1689]: Found loop6 Sep 4 23:45:03.080127 extend-filesystems[1689]: Found loop7 Sep 4 23:45:03.080127 extend-filesystems[1689]: Found sda Sep 4 23:45:03.080127 extend-filesystems[1689]: Found sda1 Sep 4 23:45:03.080127 extend-filesystems[1689]: Found sda2 Sep 4 23:45:03.080127 extend-filesystems[1689]: Found sda3 Sep 4 23:45:03.080127 extend-filesystems[1689]: Found usr Sep 4 23:45:03.080127 extend-filesystems[1689]: Found sda4 Sep 4 23:45:03.080127 extend-filesystems[1689]: Found sda6 Sep 4 23:45:03.080127 extend-filesystems[1689]: Found sda7 Sep 4 23:45:03.080127 extend-filesystems[1689]: Found sda9 Sep 4 23:45:03.080127 extend-filesystems[1689]: Checking size of /dev/sda9 Sep 4 23:45:03.079723 systemd[1]: Started chronyd.service - NTP client/server. 
Sep 4 23:45:03.078643 chronyd[1698]: Timezone right/UTC failed leap second check, ignoring Sep 4 23:45:03.242610 extend-filesystems[1689]: Old size kept for /dev/sda9 Sep 4 23:45:03.242610 extend-filesystems[1689]: Found sr0 Sep 4 23:45:03.275752 update_engine[1705]: I20250904 23:45:03.168607 1705 main.cc:92] Flatcar Update Engine starting Sep 4 23:45:03.110042 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 23:45:03.078846 chronyd[1698]: Loaded seccomp filter (level 2) Sep 4 23:45:03.276624 jq[1706]: true Sep 4 23:45:03.110577 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 23:45:03.111899 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 23:45:03.112077 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 23:45:03.130787 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 23:45:03.277202 jq[1722]: true Sep 4 23:45:03.130995 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 23:45:03.161653 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 23:45:03.185831 (ntainerd)[1724]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 23:45:03.204829 systemd-logind[1702]: New seat seat0. Sep 4 23:45:03.206975 systemd-logind[1702]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 23:45:03.209800 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 23:45:03.227364 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 23:45:03.232587 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 4 23:45:03.331224 tar[1718]: linux-arm64/LICENSE Sep 4 23:45:03.331224 tar[1718]: linux-arm64/helm Sep 4 23:45:03.343883 bash[1749]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:45:03.346560 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 23:45:03.361095 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 23:45:03.374621 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1755) Sep 4 23:45:03.468648 dbus-daemon[1684]: [system] SELinux support is enabled Sep 4 23:45:03.471316 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 23:45:03.484507 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 23:45:03.486697 update_engine[1705]: I20250904 23:45:03.485842 1705 update_check_scheduler.cc:74] Next update check in 2m58s Sep 4 23:45:03.485562 dbus-daemon[1684]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 23:45:03.484558 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 23:45:03.497962 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 23:45:03.497994 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 23:45:03.511004 systemd[1]: Started update-engine.service - Update Engine. Sep 4 23:45:03.548067 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 4 23:45:03.648231 coreos-metadata[1683]: Sep 04 23:45:03.646 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 23:45:03.651380 coreos-metadata[1683]: Sep 04 23:45:03.651 INFO Fetch successful Sep 4 23:45:03.651638 coreos-metadata[1683]: Sep 04 23:45:03.651 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 4 23:45:03.659640 coreos-metadata[1683]: Sep 04 23:45:03.659 INFO Fetch successful Sep 4 23:45:03.660719 coreos-metadata[1683]: Sep 04 23:45:03.660 INFO Fetching http://168.63.129.16/machine/1f1ddb02-d4a6-4cb5-a3ec-1337c23d1dad/c6e08efa%2D4010%2D49b1%2Dbc53%2D6e531c99d4da.%5Fci%2D4230.2.2%2Dn%2Dc33c3b40b5?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 4 23:45:03.665510 coreos-metadata[1683]: Sep 04 23:45:03.665 INFO Fetch successful Sep 4 23:45:03.665825 coreos-metadata[1683]: Sep 04 23:45:03.665 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 4 23:45:03.678562 coreos-metadata[1683]: Sep 04 23:45:03.678 INFO Fetch successful Sep 4 23:45:03.727116 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 23:45:03.737035 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 23:45:03.912498 containerd[1724]: time="2025-09-04T23:45:03.910238060Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 4 23:45:03.999570 containerd[1724]: time="2025-09-04T23:45:03.998797300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:04.003602 containerd[1724]: time="2025-09-04T23:45:04.003512180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:04.003602 containerd[1724]: time="2025-09-04T23:45:04.003591060Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 23:45:04.003749 containerd[1724]: time="2025-09-04T23:45:04.003619820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 23:45:04.003822 containerd[1724]: time="2025-09-04T23:45:04.003795700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 23:45:04.003822 containerd[1724]: time="2025-09-04T23:45:04.003819420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:04.003903 containerd[1724]: time="2025-09-04T23:45:04.003882300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:04.003903 containerd[1724]: time="2025-09-04T23:45:04.003898340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:04.004127 containerd[1724]: time="2025-09-04T23:45:04.004101780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:04.004127 containerd[1724]: time="2025-09-04T23:45:04.004122140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 4 23:45:04.004172 containerd[1724]: time="2025-09-04T23:45:04.004134540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:04.004172 containerd[1724]: time="2025-09-04T23:45:04.004145100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:04.004237 containerd[1724]: time="2025-09-04T23:45:04.004217820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:04.004436 containerd[1724]: time="2025-09-04T23:45:04.004412260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:04.004596 containerd[1724]: time="2025-09-04T23:45:04.004575340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:04.004596 containerd[1724]: time="2025-09-04T23:45:04.004593380Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 23:45:04.004696 containerd[1724]: time="2025-09-04T23:45:04.004675540Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 23:45:04.004745 containerd[1724]: time="2025-09-04T23:45:04.004727980Z" level=info msg="metadata content store policy set" policy=shared Sep 4 23:45:04.018492 locksmithd[1817]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 23:45:04.023360 containerd[1724]: time="2025-09-04T23:45:04.023305700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Sep 4 23:45:04.023470 containerd[1724]: time="2025-09-04T23:45:04.023381260Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 23:45:04.023470 containerd[1724]: time="2025-09-04T23:45:04.023399020Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 23:45:04.023470 containerd[1724]: time="2025-09-04T23:45:04.023416700Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 23:45:04.023470 containerd[1724]: time="2025-09-04T23:45:04.023432380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 23:45:04.023927 containerd[1724]: time="2025-09-04T23:45:04.023645300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 23:45:04.023927 containerd[1724]: time="2025-09-04T23:45:04.023905900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 23:45:04.024030 containerd[1724]: time="2025-09-04T23:45:04.024005340Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 23:45:04.024030 containerd[1724]: time="2025-09-04T23:45:04.024020980Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 23:45:04.024065 containerd[1724]: time="2025-09-04T23:45:04.024037380Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 23:45:04.024065 containerd[1724]: time="2025-09-04T23:45:04.024051900Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 4 23:45:04.024097 containerd[1724]: time="2025-09-04T23:45:04.024067980Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 23:45:04.024097 containerd[1724]: time="2025-09-04T23:45:04.024081260Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 23:45:04.024133 containerd[1724]: time="2025-09-04T23:45:04.024095660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 23:45:04.024133 containerd[1724]: time="2025-09-04T23:45:04.024112220Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 23:45:04.024133 containerd[1724]: time="2025-09-04T23:45:04.024124820Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 23:45:04.024189 containerd[1724]: time="2025-09-04T23:45:04.024137140Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 23:45:04.024189 containerd[1724]: time="2025-09-04T23:45:04.024149260Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 23:45:04.024189 containerd[1724]: time="2025-09-04T23:45:04.024171340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024189 containerd[1724]: time="2025-09-04T23:45:04.024184780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024258 containerd[1724]: time="2025-09-04T23:45:04.024197980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Sep 4 23:45:04.024258 containerd[1724]: time="2025-09-04T23:45:04.024211500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024258 containerd[1724]: time="2025-09-04T23:45:04.024224140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024258 containerd[1724]: time="2025-09-04T23:45:04.024236940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024258 containerd[1724]: time="2025-09-04T23:45:04.024248300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024343 containerd[1724]: time="2025-09-04T23:45:04.024261700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024343 containerd[1724]: time="2025-09-04T23:45:04.024279220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024343 containerd[1724]: time="2025-09-04T23:45:04.024294140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024343 containerd[1724]: time="2025-09-04T23:45:04.024304740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024343 containerd[1724]: time="2025-09-04T23:45:04.024316260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024343 containerd[1724]: time="2025-09-04T23:45:04.024328740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024451 containerd[1724]: time="2025-09-04T23:45:04.024343580Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Sep 4 23:45:04.024451 containerd[1724]: time="2025-09-04T23:45:04.024364380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024451 containerd[1724]: time="2025-09-04T23:45:04.024376620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024451 containerd[1724]: time="2025-09-04T23:45:04.024389180Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 23:45:04.024451 containerd[1724]: time="2025-09-04T23:45:04.024438620Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 23:45:04.024559 containerd[1724]: time="2025-09-04T23:45:04.024457980Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 23:45:04.024559 containerd[1724]: time="2025-09-04T23:45:04.024469060Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 23:45:04.024559 containerd[1724]: time="2025-09-04T23:45:04.024482700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 23:45:04.024559 containerd[1724]: time="2025-09-04T23:45:04.024492820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.024559 containerd[1724]: time="2025-09-04T23:45:04.024505900Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 23:45:04.024559 containerd[1724]: time="2025-09-04T23:45:04.024515620Z" level=info msg="NRI interface is disabled by configuration." 
Sep 4 23:45:04.024559 containerd[1724]: time="2025-09-04T23:45:04.024527140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 23:45:04.025690 containerd[1724]: time="2025-09-04T23:45:04.024886700Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 23:45:04.025690 containerd[1724]: time="2025-09-04T23:45:04.024943980Z" level=info msg="Connect containerd service" Sep 4 23:45:04.025690 containerd[1724]: time="2025-09-04T23:45:04.024984940Z" level=info msg="using legacy CRI server" Sep 4 23:45:04.025690 containerd[1724]: time="2025-09-04T23:45:04.024991940Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 23:45:04.025690 containerd[1724]: time="2025-09-04T23:45:04.025122820Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 23:45:04.033564 containerd[1724]: time="2025-09-04T23:45:04.033222340Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:45:04.033564 containerd[1724]: time="2025-09-04T23:45:04.033371900Z" level=info msg="Start subscribing containerd event" Sep 4 23:45:04.033564 containerd[1724]: time="2025-09-04T23:45:04.033415300Z" level=info msg="Start recovering state" Sep 4 23:45:04.033564 containerd[1724]: time="2025-09-04T23:45:04.033487940Z" 
level=info msg="Start event monitor" Sep 4 23:45:04.033564 containerd[1724]: time="2025-09-04T23:45:04.033498780Z" level=info msg="Start snapshots syncer" Sep 4 23:45:04.033564 containerd[1724]: time="2025-09-04T23:45:04.033508060Z" level=info msg="Start cni network conf syncer for default" Sep 4 23:45:04.033564 containerd[1724]: time="2025-09-04T23:45:04.033517460Z" level=info msg="Start streaming server" Sep 4 23:45:04.034944 containerd[1724]: time="2025-09-04T23:45:04.034749060Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 23:45:04.041947 containerd[1724]: time="2025-09-04T23:45:04.038453020Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 23:45:04.038698 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 23:45:04.049092 containerd[1724]: time="2025-09-04T23:45:04.049055580Z" level=info msg="containerd successfully booted in 0.141778s" Sep 4 23:45:04.049246 tar[1718]: linux-arm64/README.md Sep 4 23:45:04.063726 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 23:45:04.316782 sshd_keygen[1717]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:45:04.339344 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 23:45:04.357870 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 23:45:04.366039 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 4 23:45:04.374925 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 23:45:04.375134 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:45:04.400857 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 23:45:04.411934 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 4 23:45:04.435907 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 23:45:04.445938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:45:04.456019 (kubelet)[1868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:04.459921 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 23:45:04.478887 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 4 23:45:04.487980 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 23:45:04.494642 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 23:45:04.505626 systemd[1]: Startup finished in 757ms (kernel) + 15.669s (initrd) + 22.048s (userspace) = 38.475s. Sep 4 23:45:04.967789 kubelet[1868]: E0904 23:45:04.967687 1868 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:04.970017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:04.970168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:04.972625 systemd[1]: kubelet.service: Consumed 729ms CPU time, 256.8M memory peak. Sep 4 23:45:05.200178 login[1870]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 4 23:45:05.219561 login[1871]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:05.232474 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 23:45:05.236833 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 23:45:05.239575 systemd-logind[1702]: New session 2 of user core. Sep 4 23:45:05.270476 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 23:45:05.279856 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 4 23:45:05.309326 (systemd)[1884]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 23:45:05.311632 systemd-logind[1702]: New session c1 of user core. Sep 4 23:45:05.760450 systemd[1884]: Queued start job for default target default.target. Sep 4 23:45:05.767835 systemd[1884]: Created slice app.slice - User Application Slice. Sep 4 23:45:05.768095 systemd[1884]: Reached target paths.target - Paths. Sep 4 23:45:05.768273 systemd[1884]: Reached target timers.target - Timers. Sep 4 23:45:05.769643 systemd[1884]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 23:45:05.779525 systemd[1884]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 23:45:05.779618 systemd[1884]: Reached target sockets.target - Sockets. Sep 4 23:45:05.779666 systemd[1884]: Reached target basic.target - Basic System. Sep 4 23:45:05.779694 systemd[1884]: Reached target default.target - Main User Target. Sep 4 23:45:05.779719 systemd[1884]: Startup finished in 461ms. Sep 4 23:45:05.779875 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 23:45:05.781370 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 23:45:06.200576 login[1870]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:06.205768 systemd-logind[1702]: New session 1 of user core. Sep 4 23:45:06.212738 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 4 23:45:07.215553 waagent[1863]: 2025-09-04T23:45:07.215222Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 4 23:45:07.221852 waagent[1863]: 2025-09-04T23:45:07.221774Z INFO Daemon Daemon OS: flatcar 4230.2.2 Sep 4 23:45:07.227065 waagent[1863]: 2025-09-04T23:45:07.226998Z INFO Daemon Daemon Python: 3.11.11 Sep 4 23:45:07.233562 waagent[1863]: 2025-09-04T23:45:07.233478Z INFO Daemon Daemon Run daemon Sep 4 23:45:07.238487 waagent[1863]: 2025-09-04T23:45:07.238434Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.2' Sep 4 23:45:07.249939 waagent[1863]: 2025-09-04T23:45:07.249855Z INFO Daemon Daemon Using waagent for provisioning Sep 4 23:45:07.256043 waagent[1863]: 2025-09-04T23:45:07.255992Z INFO Daemon Daemon Activate resource disk Sep 4 23:45:07.261510 waagent[1863]: 2025-09-04T23:45:07.261451Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 4 23:45:07.275921 waagent[1863]: 2025-09-04T23:45:07.275839Z INFO Daemon Daemon Found device: None Sep 4 23:45:07.282216 waagent[1863]: 2025-09-04T23:45:07.282155Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 4 23:45:07.292464 waagent[1863]: 2025-09-04T23:45:07.292399Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 4 23:45:07.308574 waagent[1863]: 2025-09-04T23:45:07.307792Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 23:45:07.314767 waagent[1863]: 2025-09-04T23:45:07.314703Z INFO Daemon Daemon Running default provisioning handler Sep 4 23:45:07.327122 waagent[1863]: 2025-09-04T23:45:07.327015Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Sep 4 23:45:07.344411 waagent[1863]: 2025-09-04T23:45:07.344333Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 4 23:45:07.355664 waagent[1863]: 2025-09-04T23:45:07.355595Z INFO Daemon Daemon cloud-init is enabled: False Sep 4 23:45:07.361431 waagent[1863]: 2025-09-04T23:45:07.361366Z INFO Daemon Daemon Copying ovf-env.xml Sep 4 23:45:07.475293 waagent[1863]: 2025-09-04T23:45:07.475134Z INFO Daemon Daemon Successfully mounted dvd Sep 4 23:45:07.508201 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 4 23:45:07.510541 waagent[1863]: 2025-09-04T23:45:07.510451Z INFO Daemon Daemon Detect protocol endpoint Sep 4 23:45:07.516184 waagent[1863]: 2025-09-04T23:45:07.516115Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 23:45:07.522281 waagent[1863]: 2025-09-04T23:45:07.522223Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 4 23:45:07.531156 waagent[1863]: 2025-09-04T23:45:07.531094Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 4 23:45:07.537793 waagent[1863]: 2025-09-04T23:45:07.537738Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 4 23:45:07.543962 waagent[1863]: 2025-09-04T23:45:07.543904Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 4 23:45:07.600708 waagent[1863]: 2025-09-04T23:45:07.600655Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 4 23:45:07.607918 waagent[1863]: 2025-09-04T23:45:07.607886Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 4 23:45:07.613571 waagent[1863]: 2025-09-04T23:45:07.613503Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 4 23:45:08.121579 waagent[1863]: 2025-09-04T23:45:08.121445Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 4 23:45:08.128995 waagent[1863]: 2025-09-04T23:45:08.128918Z INFO Daemon Daemon Forcing an update of the goal state. 
Sep 4 23:45:08.141169 waagent[1863]: 2025-09-04T23:45:08.141114Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 23:45:08.210135 waagent[1863]: 2025-09-04T23:45:08.210073Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 4 23:45:08.216291 waagent[1863]: 2025-09-04T23:45:08.216241Z INFO Daemon Sep 4 23:45:08.219287 waagent[1863]: 2025-09-04T23:45:08.219234Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 39b00b58-5f53-4034-aa19-804295abcff4 eTag: 15087624109517987932 source: Fabric] Sep 4 23:45:08.232362 waagent[1863]: 2025-09-04T23:45:08.231989Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 4 23:45:08.240014 waagent[1863]: 2025-09-04T23:45:08.239966Z INFO Daemon Sep 4 23:45:08.243025 waagent[1863]: 2025-09-04T23:45:08.242976Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 4 23:45:08.254450 waagent[1863]: 2025-09-04T23:45:08.254412Z INFO Daemon Daemon Downloading artifacts profile blob Sep 4 23:45:08.334981 waagent[1863]: 2025-09-04T23:45:08.334877Z INFO Daemon Downloaded certificate {'thumbprint': '6D0D8150B6B9758E0FF32B6C032C2BB2E3272782', 'hasPrivateKey': True} Sep 4 23:45:08.347039 waagent[1863]: 2025-09-04T23:45:08.346977Z INFO Daemon Fetch goal state completed Sep 4 23:45:08.359330 waagent[1863]: 2025-09-04T23:45:08.359278Z INFO Daemon Daemon Starting provisioning Sep 4 23:45:08.364878 waagent[1863]: 2025-09-04T23:45:08.364804Z INFO Daemon Daemon Handle ovf-env.xml. 
Sep 4 23:45:08.370657 waagent[1863]: 2025-09-04T23:45:08.370594Z INFO Daemon Daemon Set hostname [ci-4230.2.2-n-c33c3b40b5] Sep 4 23:45:08.545557 waagent[1863]: 2025-09-04T23:45:08.545432Z INFO Daemon Daemon Publish hostname [ci-4230.2.2-n-c33c3b40b5] Sep 4 23:45:08.552126 waagent[1863]: 2025-09-04T23:45:08.552054Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 4 23:45:08.559156 waagent[1863]: 2025-09-04T23:45:08.559095Z INFO Daemon Daemon Primary interface is [eth0] Sep 4 23:45:08.571524 systemd-networkd[1548]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:45:08.571556 systemd-networkd[1548]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:45:08.571587 systemd-networkd[1548]: eth0: DHCP lease lost Sep 4 23:45:08.572962 waagent[1863]: 2025-09-04T23:45:08.572874Z INFO Daemon Daemon Create user account if not exists Sep 4 23:45:08.579093 waagent[1863]: 2025-09-04T23:45:08.579026Z INFO Daemon Daemon User core already exists, skip useradd Sep 4 23:45:08.585124 waagent[1863]: 2025-09-04T23:45:08.585063Z INFO Daemon Daemon Configure sudoer Sep 4 23:45:08.590140 waagent[1863]: 2025-09-04T23:45:08.590072Z INFO Daemon Daemon Configure sshd Sep 4 23:45:08.595121 waagent[1863]: 2025-09-04T23:45:08.595060Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 4 23:45:08.610967 waagent[1863]: 2025-09-04T23:45:08.610891Z INFO Daemon Daemon Deploy ssh public key. 
Sep 4 23:45:08.629601 systemd-networkd[1548]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 4 23:45:09.970734 waagent[1863]: 2025-09-04T23:45:09.970668Z INFO Daemon Daemon Provisioning complete Sep 4 23:45:09.990741 waagent[1863]: 2025-09-04T23:45:09.990692Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 4 23:45:09.997583 waagent[1863]: 2025-09-04T23:45:09.997500Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 4 23:45:10.008086 waagent[1863]: 2025-09-04T23:45:10.008025Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 4 23:45:10.150143 waagent[1934]: 2025-09-04T23:45:10.149579Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 4 23:45:10.150143 waagent[1934]: 2025-09-04T23:45:10.149749Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.2 Sep 4 23:45:10.150143 waagent[1934]: 2025-09-04T23:45:10.149803Z INFO ExtHandler ExtHandler Python: 3.11.11 Sep 4 23:45:10.258656 waagent[1934]: 2025-09-04T23:45:10.258483Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 4 23:45:10.259007 waagent[1934]: 2025-09-04T23:45:10.258958Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 23:45:10.259152 waagent[1934]: 2025-09-04T23:45:10.259118Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 23:45:10.269522 waagent[1934]: 2025-09-04T23:45:10.269436Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 23:45:10.276786 waagent[1934]: 2025-09-04T23:45:10.276727Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 4 23:45:10.277566 waagent[1934]: 2025-09-04T23:45:10.277471Z INFO ExtHandler Sep 4 23:45:10.277672 waagent[1934]: 2025-09-04T23:45:10.277600Z 
INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a0715c11-2d28-4529-8f0a-affa5dc056db eTag: 15087624109517987932 source: Fabric] Sep 4 23:45:10.277977 waagent[1934]: 2025-09-04T23:45:10.277928Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 4 23:45:10.278616 waagent[1934]: 2025-09-04T23:45:10.278566Z INFO ExtHandler Sep 4 23:45:10.278690 waagent[1934]: 2025-09-04T23:45:10.278659Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 4 23:45:10.283584 waagent[1934]: 2025-09-04T23:45:10.283524Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 4 23:45:10.359234 waagent[1934]: 2025-09-04T23:45:10.359129Z INFO ExtHandler Downloaded certificate {'thumbprint': '6D0D8150B6B9758E0FF32B6C032C2BB2E3272782', 'hasPrivateKey': True} Sep 4 23:45:10.359817 waagent[1934]: 2025-09-04T23:45:10.359764Z INFO ExtHandler Fetch goal state completed Sep 4 23:45:10.376438 waagent[1934]: 2025-09-04T23:45:10.376365Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1934 Sep 4 23:45:10.376638 waagent[1934]: 2025-09-04T23:45:10.376598Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 4 23:45:10.378365 waagent[1934]: 2025-09-04T23:45:10.378306Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.2', '', 'Flatcar Container Linux by Kinvolk'] Sep 4 23:45:10.378773 waagent[1934]: 2025-09-04T23:45:10.378730Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 4 23:45:10.467931 waagent[1934]: 2025-09-04T23:45:10.467883Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 4 23:45:10.468140 waagent[1934]: 2025-09-04T23:45:10.468098Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 4 23:45:10.474319 waagent[1934]: 
2025-09-04T23:45:10.474256Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 4 23:45:10.480940 systemd[1]: Reload requested from client PID 1947 ('systemctl') (unit waagent.service)... Sep 4 23:45:10.480958 systemd[1]: Reloading... Sep 4 23:45:10.587418 zram_generator::config[1995]: No configuration found. Sep 4 23:45:10.683065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:45:10.790407 systemd[1]: Reloading finished in 309 ms. Sep 4 23:45:10.808736 waagent[1934]: 2025-09-04T23:45:10.808314Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 4 23:45:10.815303 systemd[1]: Reload requested from client PID 2042 ('systemctl') (unit waagent.service)... Sep 4 23:45:10.815319 systemd[1]: Reloading... Sep 4 23:45:10.909568 zram_generator::config[2084]: No configuration found. Sep 4 23:45:11.012581 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:45:11.111797 systemd[1]: Reloading finished in 296 ms. Sep 4 23:45:11.125573 waagent[1934]: 2025-09-04T23:45:11.124907Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 4 23:45:11.125573 waagent[1934]: 2025-09-04T23:45:11.125080Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 4 23:45:11.711578 waagent[1934]: 2025-09-04T23:45:11.711182Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 4 23:45:11.711924 waagent[1934]: 2025-09-04T23:45:11.711840Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 4 23:45:11.712782 waagent[1934]: 2025-09-04T23:45:11.712686Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 4 23:45:11.713256 waagent[1934]: 2025-09-04T23:45:11.713118Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 4 23:45:11.714296 waagent[1934]: 2025-09-04T23:45:11.713476Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 23:45:11.714296 waagent[1934]: 2025-09-04T23:45:11.713598Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 23:45:11.714296 waagent[1934]: 2025-09-04T23:45:11.713814Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 4 23:45:11.714296 waagent[1934]: 2025-09-04T23:45:11.714001Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 4 23:45:11.714296 waagent[1934]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 4 23:45:11.714296 waagent[1934]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 4 23:45:11.714296 waagent[1934]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 4 23:45:11.714296 waagent[1934]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 4 23:45:11.714296 waagent[1934]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 23:45:11.714296 waagent[1934]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 23:45:11.714697 waagent[1934]: 2025-09-04T23:45:11.714644Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 23:45:11.714867 waagent[1934]: 2025-09-04T23:45:11.714821Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 4 23:45:11.714932 waagent[1934]: 2025-09-04T23:45:11.714879Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Sep 4 23:45:11.715377 waagent[1934]: 2025-09-04T23:45:11.715315Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 4 23:45:11.715546 waagent[1934]: 2025-09-04T23:45:11.715487Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 4 23:45:11.716377 waagent[1934]: 2025-09-04T23:45:11.716346Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 23:45:11.716709 waagent[1934]: 2025-09-04T23:45:11.716663Z INFO EnvHandler ExtHandler Configure routes Sep 4 23:45:11.716762 waagent[1934]: 2025-09-04T23:45:11.716253Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 4 23:45:11.717780 waagent[1934]: 2025-09-04T23:45:11.717735Z INFO EnvHandler ExtHandler Gateway:None Sep 4 23:45:11.717986 waagent[1934]: 2025-09-04T23:45:11.717960Z INFO EnvHandler ExtHandler Routes:None Sep 4 23:45:11.722225 waagent[1934]: 2025-09-04T23:45:11.722163Z INFO ExtHandler ExtHandler Sep 4 23:45:11.722779 waagent[1934]: 2025-09-04T23:45:11.722650Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 9a1996d4-6c97-467a-a90d-1d7d37fad4b9 correlation 6279fab5-230a-4912-a3a8-c9da1f1578b7 created: 2025-09-04T23:43:26.764259Z] Sep 4 23:45:11.723773 waagent[1934]: 2025-09-04T23:45:11.723692Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Sep 4 23:45:11.726658 waagent[1934]: 2025-09-04T23:45:11.726325Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Sep 4 23:45:11.764319 waagent[1934]: 2025-09-04T23:45:11.764254Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 10CD37E5-E185-42C2-9F28-47E070F2F441;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 4 23:45:11.815070 waagent[1934]: 2025-09-04T23:45:11.814993Z INFO MonitorHandler ExtHandler Network interfaces: Sep 4 23:45:11.815070 waagent[1934]: Executing ['ip', '-a', '-o', 'link']: Sep 4 23:45:11.815070 waagent[1934]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 4 23:45:11.815070 waagent[1934]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:ba:ce:4b brd ff:ff:ff:ff:ff:ff Sep 4 23:45:11.815070 waagent[1934]: 3: enP42572s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:ba:ce:4b brd ff:ff:ff:ff:ff:ff\ altname enP42572p0s2 Sep 4 23:45:11.815070 waagent[1934]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 4 23:45:11.815070 waagent[1934]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 4 23:45:11.815070 waagent[1934]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 4 23:45:11.815070 waagent[1934]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 4 23:45:11.815070 waagent[1934]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 4 23:45:11.815070 waagent[1934]: 2: eth0 inet6 fe80::222:48ff:feba:ce4b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 23:45:11.911666 waagent[1934]: 2025-09-04T23:45:11.910755Z INFO EnvHandler ExtHandler Successfully added Azure 
fabric firewall rules. Current Firewall rules: Sep 4 23:45:11.911666 waagent[1934]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 23:45:11.911666 waagent[1934]: pkts bytes target prot opt in out source destination Sep 4 23:45:11.911666 waagent[1934]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 23:45:11.911666 waagent[1934]: pkts bytes target prot opt in out source destination Sep 4 23:45:11.911666 waagent[1934]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 23:45:11.911666 waagent[1934]: pkts bytes target prot opt in out source destination Sep 4 23:45:11.911666 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 23:45:11.911666 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 23:45:11.911666 waagent[1934]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 23:45:11.914025 waagent[1934]: 2025-09-04T23:45:11.913944Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 4 23:45:11.914025 waagent[1934]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 23:45:11.914025 waagent[1934]: pkts bytes target prot opt in out source destination Sep 4 23:45:11.914025 waagent[1934]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 23:45:11.914025 waagent[1934]: pkts bytes target prot opt in out source destination Sep 4 23:45:11.914025 waagent[1934]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 23:45:11.914025 waagent[1934]: pkts bytes target prot opt in out source destination Sep 4 23:45:11.914025 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 23:45:11.914025 waagent[1934]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 23:45:11.914025 waagent[1934]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 23:45:11.914288 waagent[1934]: 2025-09-04T23:45:11.914251Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 4 23:45:13.823525 systemd[1]: Created slice 
system-sshd.slice - Slice /system/sshd. Sep 4 23:45:13.828809 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.16.10:43026.service - OpenSSH per-connection server daemon (10.200.16.10:43026). Sep 4 23:45:14.436175 sshd[2167]: Accepted publickey for core from 10.200.16.10 port 43026 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:14.437509 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:14.442991 systemd-logind[1702]: New session 3 of user core. Sep 4 23:45:14.449782 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 23:45:14.876971 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.16.10:43028.service - OpenSSH per-connection server daemon (10.200.16.10:43028). Sep 4 23:45:15.045797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 23:45:15.054379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:15.163164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:15.167384 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:15.310617 kubelet[2182]: E0904 23:45:15.310526 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:15.313927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:15.314234 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:15.314812 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.2M memory peak. 
Sep 4 23:45:15.377565 sshd[2172]: Accepted publickey for core from 10.200.16.10 port 43028 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:15.378234 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:15.382486 systemd-logind[1702]: New session 4 of user core. Sep 4 23:45:15.389696 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 23:45:15.749485 sshd[2189]: Connection closed by 10.200.16.10 port 43028 Sep 4 23:45:15.748521 sshd-session[2172]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:15.751516 systemd[1]: sshd@1-10.200.20.37:22-10.200.16.10:43028.service: Deactivated successfully. Sep 4 23:45:15.753247 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 23:45:15.755790 systemd-logind[1702]: Session 4 logged out. Waiting for processes to exit. Sep 4 23:45:15.757009 systemd-logind[1702]: Removed session 4. Sep 4 23:45:15.845903 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.16.10:43034.service - OpenSSH per-connection server daemon (10.200.16.10:43034). Sep 4 23:45:16.337912 sshd[2195]: Accepted publickey for core from 10.200.16.10 port 43034 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:16.339231 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:16.344599 systemd-logind[1702]: New session 5 of user core. Sep 4 23:45:16.349731 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 23:45:16.682209 sshd[2197]: Connection closed by 10.200.16.10 port 43034 Sep 4 23:45:16.682807 sshd-session[2195]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:16.686742 systemd[1]: sshd@2-10.200.20.37:22-10.200.16.10:43034.service: Deactivated successfully. Sep 4 23:45:16.688691 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 23:45:16.689661 systemd-logind[1702]: Session 5 logged out. Waiting for processes to exit. 
Sep 4 23:45:16.690419 systemd-logind[1702]: Removed session 5. Sep 4 23:45:16.771528 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.16.10:43042.service - OpenSSH per-connection server daemon (10.200.16.10:43042). Sep 4 23:45:17.266377 sshd[2203]: Accepted publickey for core from 10.200.16.10 port 43042 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:17.267637 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:17.272707 systemd-logind[1702]: New session 6 of user core. Sep 4 23:45:17.278785 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 23:45:17.620563 sshd[2205]: Connection closed by 10.200.16.10 port 43042 Sep 4 23:45:17.621090 sshd-session[2203]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:17.625977 systemd[1]: sshd@3-10.200.20.37:22-10.200.16.10:43042.service: Deactivated successfully. Sep 4 23:45:17.627761 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 23:45:17.628428 systemd-logind[1702]: Session 6 logged out. Waiting for processes to exit. Sep 4 23:45:17.629375 systemd-logind[1702]: Removed session 6. Sep 4 23:45:17.708805 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.16.10:43058.service - OpenSSH per-connection server daemon (10.200.16.10:43058). Sep 4 23:45:18.160998 sshd[2211]: Accepted publickey for core from 10.200.16.10 port 43058 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:18.162345 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:18.167701 systemd-logind[1702]: New session 7 of user core. Sep 4 23:45:18.174746 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 4 23:45:18.580850 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 23:45:18.581126 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:18.615485 sudo[2214]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:18.707739 sshd[2213]: Connection closed by 10.200.16.10 port 43058 Sep 4 23:45:18.708425 sshd-session[2211]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:18.712220 systemd[1]: sshd@4-10.200.20.37:22-10.200.16.10:43058.service: Deactivated successfully. Sep 4 23:45:18.714928 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 23:45:18.716100 systemd-logind[1702]: Session 7 logged out. Waiting for processes to exit. Sep 4 23:45:18.717346 systemd-logind[1702]: Removed session 7. Sep 4 23:45:18.791567 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.16.10:43072.service - OpenSSH per-connection server daemon (10.200.16.10:43072). Sep 4 23:45:19.249845 sshd[2220]: Accepted publickey for core from 10.200.16.10 port 43072 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:19.251236 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:19.257125 systemd-logind[1702]: New session 8 of user core. Sep 4 23:45:19.262749 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 4 23:45:19.506668 sudo[2224]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 23:45:19.507474 sudo[2224]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:19.510816 sudo[2224]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:19.515557 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 23:45:19.515834 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:19.527835 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:45:19.549802 augenrules[2246]: No rules Sep 4 23:45:19.551205 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:45:19.551399 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:45:19.554799 sudo[2223]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:19.642144 sshd[2222]: Connection closed by 10.200.16.10 port 43072 Sep 4 23:45:19.642737 sshd-session[2220]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:19.646429 systemd[1]: sshd@5-10.200.20.37:22-10.200.16.10:43072.service: Deactivated successfully. Sep 4 23:45:19.648215 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 23:45:19.649085 systemd-logind[1702]: Session 8 logged out. Waiting for processes to exit. Sep 4 23:45:19.649931 systemd-logind[1702]: Removed session 8. Sep 4 23:45:19.731977 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.16.10:43088.service - OpenSSH per-connection server daemon (10.200.16.10:43088). Sep 4 23:45:20.183708 sshd[2255]: Accepted publickey for core from 10.200.16.10 port 43088 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:45:20.184980 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:20.190334 systemd-logind[1702]: New session 9 of user core. 
Sep 4 23:45:20.195744 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 23:45:20.440861 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 23:45:20.441143 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:22.393847 (dockerd)[2276]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 23:45:22.394481 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 23:45:23.529566 dockerd[2276]: time="2025-09-04T23:45:23.529242860Z" level=info msg="Starting up" Sep 4 23:45:23.974205 dockerd[2276]: time="2025-09-04T23:45:23.974153020Z" level=info msg="Loading containers: start." Sep 4 23:45:24.235563 kernel: Initializing XFRM netlink socket Sep 4 23:45:24.488508 systemd-networkd[1548]: docker0: Link UP Sep 4 23:45:24.521828 dockerd[2276]: time="2025-09-04T23:45:24.521775700Z" level=info msg="Loading containers: done." Sep 4 23:45:24.544338 dockerd[2276]: time="2025-09-04T23:45:24.544285380Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 23:45:24.544733 dockerd[2276]: time="2025-09-04T23:45:24.544404580Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 4 23:45:24.544733 dockerd[2276]: time="2025-09-04T23:45:24.544546820Z" level=info msg="Daemon has completed initialization" Sep 4 23:45:24.610825 dockerd[2276]: time="2025-09-04T23:45:24.610346020Z" level=info msg="API listen on /run/docker.sock" Sep 4 23:45:24.610576 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 23:45:25.545789 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 4 23:45:25.552813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:25.626041 containerd[1724]: time="2025-09-04T23:45:25.625937260Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 23:45:25.919464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:25.939033 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:25.976310 kubelet[2469]: E0904 23:45:25.976234 2469 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:25.978817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:25.978964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:25.979497 systemd[1]: kubelet.service: Consumed 135ms CPU time, 106.9M memory peak. Sep 4 23:45:26.878390 chronyd[1698]: Selected source PHC0 Sep 4 23:45:27.160235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2811024935.mount: Deactivated successfully. 
Sep 4 23:45:28.750373 containerd[1724]: time="2025-09-04T23:45:28.750322532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:28.753486 containerd[1724]: time="2025-09-04T23:45:28.753416732Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328357" Sep 4 23:45:28.768282 containerd[1724]: time="2025-09-04T23:45:28.768229052Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:28.774820 containerd[1724]: time="2025-09-04T23:45:28.774774852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:28.775907 containerd[1724]: time="2025-09-04T23:45:28.775857292Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 3.149872992s" Sep 4 23:45:28.775980 containerd[1724]: time="2025-09-04T23:45:28.775907932Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 4 23:45:28.777331 containerd[1724]: time="2025-09-04T23:45:28.777124972Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 4 23:45:30.209065 containerd[1724]: time="2025-09-04T23:45:30.208016454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:30.211499 containerd[1724]: time="2025-09-04T23:45:30.211441174Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528552" Sep 4 23:45:30.215385 containerd[1724]: time="2025-09-04T23:45:30.215334374Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:30.221847 containerd[1724]: time="2025-09-04T23:45:30.221778974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:30.223332 containerd[1724]: time="2025-09-04T23:45:30.222851294Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.445688802s" Sep 4 23:45:30.223332 containerd[1724]: time="2025-09-04T23:45:30.222893494Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 4 23:45:30.224201 containerd[1724]: time="2025-09-04T23:45:30.223964654Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 4 23:45:31.480251 containerd[1724]: time="2025-09-04T23:45:31.480184894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:31.483394 containerd[1724]: time="2025-09-04T23:45:31.483207134Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483527" Sep 4 23:45:31.487548 containerd[1724]: time="2025-09-04T23:45:31.487473854Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:31.493304 containerd[1724]: time="2025-09-04T23:45:31.492762974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:31.494072 containerd[1724]: time="2025-09-04T23:45:31.494030814Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.27002308s" Sep 4 23:45:31.494072 containerd[1724]: time="2025-09-04T23:45:31.494068014Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 4 23:45:31.494554 containerd[1724]: time="2025-09-04T23:45:31.494515294Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 4 23:45:32.820716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397005367.mount: Deactivated successfully. 
Sep 4 23:45:33.197664 containerd[1724]: time="2025-09-04T23:45:33.196964774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:33.200685 containerd[1724]: time="2025-09-04T23:45:33.200366654Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376724" Sep 4 23:45:33.206828 containerd[1724]: time="2025-09-04T23:45:33.206772614Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:33.213374 containerd[1724]: time="2025-09-04T23:45:33.213313814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:33.214160 containerd[1724]: time="2025-09-04T23:45:33.214035294Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.71946792s" Sep 4 23:45:33.214160 containerd[1724]: time="2025-09-04T23:45:33.214066294Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 4 23:45:33.215104 containerd[1724]: time="2025-09-04T23:45:33.215076534Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 23:45:33.962514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3392517146.mount: Deactivated successfully. 
Sep 4 23:45:35.947560 containerd[1724]: time="2025-09-04T23:45:35.946405824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:35.949599 containerd[1724]: time="2025-09-04T23:45:35.949550739Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 4 23:45:35.953873 containerd[1724]: time="2025-09-04T23:45:35.953841412Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:35.959820 containerd[1724]: time="2025-09-04T23:45:35.959763083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:35.961214 containerd[1724]: time="2025-09-04T23:45:35.961182401Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.746069827s" Sep 4 23:45:35.961349 containerd[1724]: time="2025-09-04T23:45:35.961332240Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 4 23:45:35.962158 containerd[1724]: time="2025-09-04T23:45:35.962127599Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 23:45:36.045806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 23:45:36.055072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 23:45:36.230178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:36.241254 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:36.296965 kubelet[2606]: E0904 23:45:36.296879 2606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:36.299618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:36.299906 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:36.300278 systemd[1]: kubelet.service: Consumed 137ms CPU time, 109.1M memory peak. Sep 4 23:45:37.070500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2369849071.mount: Deactivated successfully. 
Sep 4 23:45:37.093339 containerd[1724]: time="2025-09-04T23:45:37.093282172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:37.096483 containerd[1724]: time="2025-09-04T23:45:37.096419767Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 4 23:45:37.099850 containerd[1724]: time="2025-09-04T23:45:37.099794962Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:37.104614 containerd[1724]: time="2025-09-04T23:45:37.104556634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:37.105465 containerd[1724]: time="2025-09-04T23:45:37.105243273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.143077914s" Sep 4 23:45:37.105465 containerd[1724]: time="2025-09-04T23:45:37.105276833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 4 23:45:37.106198 containerd[1724]: time="2025-09-04T23:45:37.105843632Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 4 23:45:37.857341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount174101241.mount: Deactivated successfully. 
Sep 4 23:45:41.020812 containerd[1724]: time="2025-09-04T23:45:41.020746418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:41.024513 containerd[1724]: time="2025-09-04T23:45:41.024144536Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 4 23:45:41.027897 containerd[1724]: time="2025-09-04T23:45:41.027837374Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:41.033863 containerd[1724]: time="2025-09-04T23:45:41.033771970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:41.036575 containerd[1724]: time="2025-09-04T23:45:41.035222490Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.929340138s" Sep 4 23:45:41.036575 containerd[1724]: time="2025-09-04T23:45:41.035267570Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 4 23:45:43.232560 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 4 23:45:45.987032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:45.987181 systemd[1]: kubelet.service: Consumed 137ms CPU time, 109.1M memory peak. Sep 4 23:45:45.993782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 23:45:46.029883 systemd[1]: Reload requested from client PID 2698 ('systemctl') (unit session-9.scope)... Sep 4 23:45:46.029907 systemd[1]: Reloading... Sep 4 23:45:46.159661 zram_generator::config[2760]: No configuration found. Sep 4 23:45:46.259132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:45:46.362992 systemd[1]: Reloading finished in 332 ms. Sep 4 23:45:46.404724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:46.409678 (kubelet)[2802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:45:46.413117 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:46.413421 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:45:46.413949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:46.414004 systemd[1]: kubelet.service: Consumed 99ms CPU time, 96.2M memory peak. Sep 4 23:45:46.420194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:46.532395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:46.543903 (kubelet)[2816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:45:46.585877 kubelet[2816]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:45:46.585877 kubelet[2816]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 4 23:45:46.585877 kubelet[2816]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:45:46.585877 kubelet[2816]: I0904 23:45:46.585219 2816 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:45:47.583258 kubelet[2816]: I0904 23:45:47.583207 2816 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:45:47.583258 kubelet[2816]: I0904 23:45:47.583246 2816 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:45:47.585558 kubelet[2816]: I0904 23:45:47.583940 2816 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:45:47.611417 kubelet[2816]: E0904 23:45:47.611372 2816 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:45:47.614127 kubelet[2816]: I0904 23:45:47.614088 2816 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:45:47.622332 kubelet[2816]: E0904 23:45:47.620915 2816 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:45:47.622332 kubelet[2816]: I0904 23:45:47.620949 2816 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 4 23:45:47.624348 kubelet[2816]: I0904 23:45:47.624320 2816 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 23:45:47.626450 kubelet[2816]: I0904 23:45:47.626399 2816 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:45:47.626793 kubelet[2816]: I0904 23:45:47.626598 2816 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-c33c3b40b5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nul
l,"CgroupVersion":2} Sep 4 23:45:47.626964 kubelet[2816]: I0904 23:45:47.626948 2816 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:45:47.627022 kubelet[2816]: I0904 23:45:47.627013 2816 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:45:47.627219 kubelet[2816]: I0904 23:45:47.627204 2816 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:45:47.632179 kubelet[2816]: I0904 23:45:47.632147 2816 kubelet.go:446] "Attempting to sync node with API server" Sep 4 23:45:47.632338 kubelet[2816]: I0904 23:45:47.632326 2816 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:45:47.632400 kubelet[2816]: I0904 23:45:47.632392 2816 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:45:47.632465 kubelet[2816]: I0904 23:45:47.632455 2816 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:45:47.635456 kubelet[2816]: W0904 23:45:47.635378 2816 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-c33c3b40b5&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Sep 4 23:45:47.635456 kubelet[2816]: E0904 23:45:47.635448 2816 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-c33c3b40b5&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:45:47.635637 kubelet[2816]: I0904 23:45:47.635592 2816 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:45:47.636121 kubelet[2816]: I0904 23:45:47.636085 2816 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet 
mode" Sep 4 23:45:47.636182 kubelet[2816]: W0904 23:45:47.636145 2816 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 23:45:47.637844 kubelet[2816]: I0904 23:45:47.637812 2816 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:45:47.637923 kubelet[2816]: I0904 23:45:47.637856 2816 server.go:1287] "Started kubelet" Sep 4 23:45:47.644276 kubelet[2816]: I0904 23:45:47.643814 2816 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:45:47.644276 kubelet[2816]: I0904 23:45:47.644092 2816 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:45:47.645198 kubelet[2816]: I0904 23:45:47.645150 2816 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:45:47.647916 kubelet[2816]: I0904 23:45:47.647829 2816 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:45:47.648132 kubelet[2816]: I0904 23:45:47.648106 2816 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:45:47.648452 kubelet[2816]: E0904 23:45:47.648322 2816 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-n-c33c3b40b5.186239107ed17e37 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-n-c33c3b40b5,UID:ci-4230.2.2-n-c33c3b40b5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-n-c33c3b40b5,},FirstTimestamp:2025-09-04 23:45:47.637833271 +0000 UTC m=+1.090767630,LastTimestamp:2025-09-04 23:45:47.637833271 +0000 UTC m=+1.090767630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-n-c33c3b40b5,}" Sep 4 23:45:47.652263 kubelet[2816]: W0904 23:45:47.651917 2816 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Sep 4 23:45:47.652263 kubelet[2816]: E0904 23:45:47.651985 2816 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:45:47.654488 kubelet[2816]: I0904 23:45:47.653905 2816 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:45:47.654488 kubelet[2816]: E0904 23:45:47.653982 2816 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" Sep 4 23:45:47.655375 kubelet[2816]: I0904 23:45:47.655336 2816 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:45:47.655466 kubelet[2816]: I0904 23:45:47.655448 2816 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:45:47.658113 kubelet[2816]: W0904 23:45:47.657732 2816 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Sep 4 23:45:47.658302 kubelet[2816]: E0904 23:45:47.658274 2816 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:47.658521 kubelet[2816]: I0904 23:45:47.658480 2816 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 23:45:47.659463 kubelet[2816]: E0904 23:45:47.659230 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-c33c3b40b5?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms"
Sep 4 23:45:47.659808 kubelet[2816]: I0904 23:45:47.659784 2816 factory.go:221] Registration of the systemd container factory successfully
Sep 4 23:45:47.660001 kubelet[2816]: I0904 23:45:47.659983 2816 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 23:45:47.662164 kubelet[2816]: I0904 23:45:47.662138 2816 factory.go:221] Registration of the containerd container factory successfully
Sep 4 23:45:47.670022 kubelet[2816]: E0904 23:45:47.669993 2816 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 23:45:47.676919 kubelet[2816]: I0904 23:45:47.676890 2816 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 23:45:47.676919 kubelet[2816]: I0904 23:45:47.676910 2816 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 23:45:47.677076 kubelet[2816]: I0904 23:45:47.676934 2816 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:45:47.688130 kubelet[2816]: I0904 23:45:47.688092 2816 policy_none.go:49] "None policy: Start"
Sep 4 23:45:47.688130 kubelet[2816]: I0904 23:45:47.688134 2816 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 23:45:47.688258 kubelet[2816]: I0904 23:45:47.688162 2816 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 23:45:47.697694 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 23:45:47.707436 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 23:45:47.710769 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 23:45:47.721722 kubelet[2816]: I0904 23:45:47.721669 2816 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 23:45:47.722043 kubelet[2816]: I0904 23:45:47.721898 2816 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 23:45:47.722043 kubelet[2816]: I0904 23:45:47.721917 2816 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 23:45:47.723191 kubelet[2816]: I0904 23:45:47.722972 2816 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 23:45:47.723741 kubelet[2816]: E0904 23:45:47.723222 2816 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 23:45:47.723741 kubelet[2816]: E0904 23:45:47.723264 2816 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-n-c33c3b40b5\" not found"
Sep 4 23:45:47.733696 kubelet[2816]: I0904 23:45:47.733652 2816 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 23:45:47.735011 kubelet[2816]: I0904 23:45:47.734952 2816 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 23:45:47.735011 kubelet[2816]: I0904 23:45:47.735013 2816 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 23:45:47.735147 kubelet[2816]: I0904 23:45:47.735035 2816 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 23:45:47.735147 kubelet[2816]: I0904 23:45:47.735041 2816 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 4 23:45:47.735147 kubelet[2816]: E0904 23:45:47.735086 2816 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Sep 4 23:45:47.740250 kubelet[2816]: W0904 23:45:47.740084 2816 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Sep 4 23:45:47.740380 kubelet[2816]: E0904 23:45:47.740350 2816 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:47.824751 kubelet[2816]: I0904 23:45:47.824716 2816 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.825126 kubelet[2816]: E0904 23:45:47.825094 2816 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.846788 systemd[1]: Created slice kubepods-burstable-podd6f9440f0e2ee4e8090fc10e10f1bf3b.slice - libcontainer container kubepods-burstable-podd6f9440f0e2ee4e8090fc10e10f1bf3b.slice.
Sep 4 23:45:47.856916 kubelet[2816]: E0904 23:45:47.856713 2816 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.861100 kubelet[2816]: I0904 23:45:47.860015 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f4f117128f8dc07e3398f2f1f5bf018-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-c33c3b40b5\" (UID: \"9f4f117128f8dc07e3398f2f1f5bf018\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.861100 kubelet[2816]: I0904 23:45:47.860079 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f9ce86160845ff852eaff2fdd5adb70-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-c33c3b40b5\" (UID: \"0f9ce86160845ff852eaff2fdd5adb70\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.861100 kubelet[2816]: I0904 23:45:47.860144 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f9ce86160845ff852eaff2fdd5adb70-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-c33c3b40b5\" (UID: \"0f9ce86160845ff852eaff2fdd5adb70\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.861100 kubelet[2816]: I0904 23:45:47.860162 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f9ce86160845ff852eaff2fdd5adb70-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-c33c3b40b5\" (UID: \"0f9ce86160845ff852eaff2fdd5adb70\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.861100 kubelet[2816]: I0904 23:45:47.860181 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f9ce86160845ff852eaff2fdd5adb70-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-c33c3b40b5\" (UID: \"0f9ce86160845ff852eaff2fdd5adb70\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.861444 kubelet[2816]: I0904 23:45:47.860200 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f9ce86160845ff852eaff2fdd5adb70-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-c33c3b40b5\" (UID: \"0f9ce86160845ff852eaff2fdd5adb70\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.861444 kubelet[2816]: I0904 23:45:47.860219 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6f9440f0e2ee4e8090fc10e10f1bf3b-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-c33c3b40b5\" (UID: \"d6f9440f0e2ee4e8090fc10e10f1bf3b\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.861444 kubelet[2816]: I0904 23:45:47.860237 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f4f117128f8dc07e3398f2f1f5bf018-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-c33c3b40b5\" (UID: \"9f4f117128f8dc07e3398f2f1f5bf018\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.861444 kubelet[2816]: I0904 23:45:47.860257 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f4f117128f8dc07e3398f2f1f5bf018-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-c33c3b40b5\" (UID: \"9f4f117128f8dc07e3398f2f1f5bf018\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.861444 kubelet[2816]: E0904 23:45:47.860269 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-c33c3b40b5?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="400ms"
Sep 4 23:45:47.862741 systemd[1]: Created slice kubepods-burstable-pod9f4f117128f8dc07e3398f2f1f5bf018.slice - libcontainer container kubepods-burstable-pod9f4f117128f8dc07e3398f2f1f5bf018.slice.
Sep 4 23:45:47.875199 kubelet[2816]: E0904 23:45:47.875147 2816 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:47.878596 systemd[1]: Created slice kubepods-burstable-pod0f9ce86160845ff852eaff2fdd5adb70.slice - libcontainer container kubepods-burstable-pod0f9ce86160845ff852eaff2fdd5adb70.slice.
Sep 4 23:45:47.881430 kubelet[2816]: E0904 23:45:47.881387 2816 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:48.027120 kubelet[2816]: I0904 23:45:48.027086 2816 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:48.027560 kubelet[2816]: E0904 23:45:48.027519 2816 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:48.158417 containerd[1724]: time="2025-09-04T23:45:48.158303152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-c33c3b40b5,Uid:d6f9440f0e2ee4e8090fc10e10f1bf3b,Namespace:kube-system,Attempt:0,}"
Sep 4 23:45:48.176371 containerd[1724]: time="2025-09-04T23:45:48.176088307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-c33c3b40b5,Uid:9f4f117128f8dc07e3398f2f1f5bf018,Namespace:kube-system,Attempt:0,}"
Sep 4 23:45:48.183557 containerd[1724]: time="2025-09-04T23:45:48.182764905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-c33c3b40b5,Uid:0f9ce86160845ff852eaff2fdd5adb70,Namespace:kube-system,Attempt:0,}"
Sep 4 23:45:48.260845 kubelet[2816]: E0904 23:45:48.260797 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-c33c3b40b5?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms"
Sep 4 23:45:48.423506 kubelet[2816]: E0904 23:45:48.423131 2816 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-n-c33c3b40b5.186239107ed17e37 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-n-c33c3b40b5,UID:ci-4230.2.2-n-c33c3b40b5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-n-c33c3b40b5,},FirstTimestamp:2025-09-04 23:45:47.637833271 +0000 UTC m=+1.090767630,LastTimestamp:2025-09-04 23:45:47.637833271 +0000 UTC m=+1.090767630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-n-c33c3b40b5,}"
Sep 4 23:45:48.429699 kubelet[2816]: I0904 23:45:48.429668 2816 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:48.430112 kubelet[2816]: E0904 23:45:48.430083 2816 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:48.651176 kubelet[2816]: W0904 23:45:48.651075 2816 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-c33c3b40b5&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Sep 4 23:45:48.651176 kubelet[2816]: E0904 23:45:48.651139 2816 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-c33c3b40b5&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:48.701313 kubelet[2816]: W0904 23:45:48.701199 2816 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Sep 4 23:45:48.701313 kubelet[2816]: E0904 23:45:48.701251 2816 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:48.833820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1180806907.mount: Deactivated successfully.
Sep 4 23:45:48.864298 containerd[1724]: time="2025-09-04T23:45:48.864239338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:45:48.881438 containerd[1724]: time="2025-09-04T23:45:48.881374132Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Sep 4 23:45:48.888149 containerd[1724]: time="2025-09-04T23:45:48.888106930Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:45:48.893562 containerd[1724]: time="2025-09-04T23:45:48.892675889Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:45:48.899043 containerd[1724]: time="2025-09-04T23:45:48.898877767Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 23:45:48.902940 containerd[1724]: time="2025-09-04T23:45:48.902158926Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:45:48.905598 containerd[1724]: time="2025-09-04T23:45:48.905560925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 23:45:48.906480 containerd[1724]: time="2025-09-04T23:45:48.906433565Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 748.046773ms"
Sep 4 23:45:48.908550 containerd[1724]: time="2025-09-04T23:45:48.908338844Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 23:45:48.910499 containerd[1724]: time="2025-09-04T23:45:48.910465723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 727.626458ms"
Sep 4 23:45:48.912311 containerd[1724]: time="2025-09-04T23:45:48.912283643Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 736.117656ms"
Sep 4 23:45:49.032471 update_engine[1705]: I20250904 23:45:49.032300 1705 update_attempter.cc:509] Updating boot flags...
Sep 4 23:45:49.062156 kubelet[2816]: E0904 23:45:49.062115 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-c33c3b40b5?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="1.6s"
Sep 4 23:45:49.115687 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2869)
Sep 4 23:45:49.118315 kubelet[2816]: W0904 23:45:49.118232 2816 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Sep 4 23:45:49.118315 kubelet[2816]: E0904 23:45:49.118279 2816 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:49.232717 kubelet[2816]: I0904 23:45:49.232677 2816 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:49.233112 kubelet[2816]: E0904 23:45:49.233082 2816 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:49.274790 kubelet[2816]: W0904 23:45:49.274678 2816 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Sep 4 23:45:49.274790 kubelet[2816]: E0904 23:45:49.274749 2816 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:49.650185 kubelet[2816]: E0904 23:45:49.650136 2816 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:49.935148 containerd[1724]: time="2025-09-04T23:45:49.934850567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:45:49.935967 containerd[1724]: time="2025-09-04T23:45:49.935052767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:45:49.936076 containerd[1724]: time="2025-09-04T23:45:49.935806087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:49.936313 containerd[1724]: time="2025-09-04T23:45:49.936221887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:49.941594 containerd[1724]: time="2025-09-04T23:45:49.941302724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:45:49.941594 containerd[1724]: time="2025-09-04T23:45:49.941365244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:45:49.941594 containerd[1724]: time="2025-09-04T23:45:49.941385684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:49.941594 containerd[1724]: time="2025-09-04T23:45:49.941464684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:49.955724 containerd[1724]: time="2025-09-04T23:45:49.955608437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:45:49.955724 containerd[1724]: time="2025-09-04T23:45:49.955679117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:45:49.955724 containerd[1724]: time="2025-09-04T23:45:49.955698317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:49.955913 containerd[1724]: time="2025-09-04T23:45:49.955782197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:49.997797 systemd[1]: Started cri-containerd-6f7b6972f5a201757cdcda54fa8e76008db71921f99799ca04b9a5f3697ccfc8.scope - libcontainer container 6f7b6972f5a201757cdcda54fa8e76008db71921f99799ca04b9a5f3697ccfc8.
Sep 4 23:45:50.000264 systemd[1]: Started cri-containerd-73df69ac4c3645689756b00c34f9bd8d6e24b6935d007707e04d0d0625e52a0d.scope - libcontainer container 73df69ac4c3645689756b00c34f9bd8d6e24b6935d007707e04d0d0625e52a0d.
Sep 4 23:45:50.003446 systemd[1]: Started cri-containerd-e498470da17d5729636bb9378dafef8bc9def95bafaaf1c827d62fe6d39305cd.scope - libcontainer container e498470da17d5729636bb9378dafef8bc9def95bafaaf1c827d62fe6d39305cd.
Sep 4 23:45:50.043787 containerd[1724]: time="2025-09-04T23:45:50.043650711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-c33c3b40b5,Uid:9f4f117128f8dc07e3398f2f1f5bf018,Namespace:kube-system,Attempt:0,} returns sandbox id \"73df69ac4c3645689756b00c34f9bd8d6e24b6935d007707e04d0d0625e52a0d\""
Sep 4 23:45:50.047614 containerd[1724]: time="2025-09-04T23:45:50.047335989Z" level=info msg="CreateContainer within sandbox \"73df69ac4c3645689756b00c34f9bd8d6e24b6935d007707e04d0d0625e52a0d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 4 23:45:50.067114 containerd[1724]: time="2025-09-04T23:45:50.066420899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-c33c3b40b5,Uid:0f9ce86160845ff852eaff2fdd5adb70,Namespace:kube-system,Attempt:0,} returns sandbox id \"e498470da17d5729636bb9378dafef8bc9def95bafaaf1c827d62fe6d39305cd\""
Sep 4 23:45:50.070841 containerd[1724]: time="2025-09-04T23:45:50.070784497Z" level=info msg="CreateContainer within sandbox \"e498470da17d5729636bb9378dafef8bc9def95bafaaf1c827d62fe6d39305cd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 4 23:45:50.073489 containerd[1724]: time="2025-09-04T23:45:50.073415696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-c33c3b40b5,Uid:d6f9440f0e2ee4e8090fc10e10f1bf3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f7b6972f5a201757cdcda54fa8e76008db71921f99799ca04b9a5f3697ccfc8\""
Sep 4 23:45:50.076856 containerd[1724]: time="2025-09-04T23:45:50.076817694Z" level=info msg="CreateContainer within sandbox \"6f7b6972f5a201757cdcda54fa8e76008db71921f99799ca04b9a5f3697ccfc8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 4 23:45:50.662663 kubelet[2816]: E0904 23:45:50.662615 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-c33c3b40b5?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="3.2s"
Sep 4 23:45:50.783143 containerd[1724]: time="2025-09-04T23:45:50.782900448Z" level=info msg="CreateContainer within sandbox \"73df69ac4c3645689756b00c34f9bd8d6e24b6935d007707e04d0d0625e52a0d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f08c00e519226f126c9ad0e5aa211acd571e6508185c447b7a3b2e658982897f\""
Sep 4 23:45:50.789760 containerd[1724]: time="2025-09-04T23:45:50.789708445Z" level=info msg="CreateContainer within sandbox \"e498470da17d5729636bb9378dafef8bc9def95bafaaf1c827d62fe6d39305cd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f6e5ec8e67658dd06be940319ba05b92fab8be5d6ac79abe78bd01a284242b22\""
Sep 4 23:45:50.790184 containerd[1724]: time="2025-09-04T23:45:50.790059645Z" level=info msg="StartContainer for \"f08c00e519226f126c9ad0e5aa211acd571e6508185c447b7a3b2e658982897f\""
Sep 4 23:45:50.797273 containerd[1724]: time="2025-09-04T23:45:50.796946441Z" level=info msg="CreateContainer within sandbox \"6f7b6972f5a201757cdcda54fa8e76008db71921f99799ca04b9a5f3697ccfc8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d3ff00c41ed241babe4bd9440ff84cb2dd004141ad80727701a30370f79f5ca\""
Sep 4 23:45:50.797273 containerd[1724]: time="2025-09-04T23:45:50.797153121Z" level=info msg="StartContainer for \"f6e5ec8e67658dd06be940319ba05b92fab8be5d6ac79abe78bd01a284242b22\""
Sep 4 23:45:50.798566 containerd[1724]: time="2025-09-04T23:45:50.798157200Z" level=info msg="StartContainer for \"3d3ff00c41ed241babe4bd9440ff84cb2dd004141ad80727701a30370f79f5ca\""
Sep 4 23:45:50.819769 systemd[1]: Started cri-containerd-f08c00e519226f126c9ad0e5aa211acd571e6508185c447b7a3b2e658982897f.scope - libcontainer container f08c00e519226f126c9ad0e5aa211acd571e6508185c447b7a3b2e658982897f.
Sep 4 23:45:50.838149 kubelet[2816]: I0904 23:45:50.838120 2816 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:50.838252 systemd[1]: Started cri-containerd-3d3ff00c41ed241babe4bd9440ff84cb2dd004141ad80727701a30370f79f5ca.scope - libcontainer container 3d3ff00c41ed241babe4bd9440ff84cb2dd004141ad80727701a30370f79f5ca.
Sep 4 23:45:50.839690 kubelet[2816]: E0904 23:45:50.839637 2816 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:50.846752 systemd[1]: Started cri-containerd-f6e5ec8e67658dd06be940319ba05b92fab8be5d6ac79abe78bd01a284242b22.scope - libcontainer container f6e5ec8e67658dd06be940319ba05b92fab8be5d6ac79abe78bd01a284242b22.
Sep 4 23:45:50.865683 kubelet[2816]: W0904 23:45:50.865568 2816 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-c33c3b40b5&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused
Sep 4 23:45:50.865683 kubelet[2816]: E0904 23:45:50.865638 2816 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-c33c3b40b5&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:45:50.896597 containerd[1724]: time="2025-09-04T23:45:50.895648590Z" level=info msg="StartContainer for \"f08c00e519226f126c9ad0e5aa211acd571e6508185c447b7a3b2e658982897f\" returns successfully"
Sep 4 23:45:50.914035 containerd[1724]: time="2025-09-04T23:45:50.913745341Z" level=info msg="StartContainer for \"3d3ff00c41ed241babe4bd9440ff84cb2dd004141ad80727701a30370f79f5ca\" returns successfully"
Sep 4 23:45:50.936741 containerd[1724]: time="2025-09-04T23:45:50.936672048Z" level=info msg="StartContainer for \"f6e5ec8e67658dd06be940319ba05b92fab8be5d6ac79abe78bd01a284242b22\" returns successfully"
Sep 4 23:45:51.754099 kubelet[2816]: E0904 23:45:51.753872 2816 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:51.757983 kubelet[2816]: E0904 23:45:51.757953 2816 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:51.761342 kubelet[2816]: E0904 23:45:51.761089 2816 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:52.764834 kubelet[2816]: E0904 23:45:52.764804 2816 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:52.768391 kubelet[2816]: E0904 23:45:52.768031 2816 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:52.768391 kubelet[2816]: E0904 23:45:52.768216 2816 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:53.751585 kubelet[2816]: E0904 23:45:53.750462 2816 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.2.2-n-c33c3b40b5" not found
Sep 4 23:45:53.764980 kubelet[2816]: E0904 23:45:53.764363 2816 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:53.764980 kubelet[2816]: E0904 23:45:53.764431 2816 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:53.764980 kubelet[2816]: E0904 23:45:53.764843 2816 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:53.891199 kubelet[2816]: E0904 23:45:53.891142 2816 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.2-n-c33c3b40b5\" not found" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:54.042467 kubelet[2816]: I0904 23:45:54.041772 2816 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:54.056585 kubelet[2816]: I0904 23:45:54.056511 2816 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:54.056585 kubelet[2816]: E0904 23:45:54.056584 2816 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.2-n-c33c3b40b5\": node \"ci-4230.2.2-n-c33c3b40b5\" not found"
Sep 4 23:45:54.146945 kubelet[2816]: E0904 23:45:54.146895 2816 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found"
Sep 4 23:45:54.247412 kubelet[2816]: E0904 23:45:54.247359 2816 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found"
Sep 4 23:45:54.354529 kubelet[2816]: I0904 23:45:54.354484 2816 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:54.393376 kubelet[2816]: W0904 23:45:54.393335 2816 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 4 23:45:54.393639 kubelet[2816]: I0904 23:45:54.393506 2816 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:54.438686 kubelet[2816]: W0904 23:45:54.438614 2816 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 4 23:45:54.438915 kubelet[2816]: I0904 23:45:54.438752 2816 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:54.486766 kubelet[2816]: W0904 23:45:54.486717 2816 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 4 23:45:54.646288 kubelet[2816]: I0904 23:45:54.646161 2816 apiserver.go:52] "Watching apiserver"
Sep 4 23:45:54.656111 kubelet[2816]: I0904 23:45:54.656068 2816 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 23:45:54.766086 kubelet[2816]: I0904 23:45:54.765251 2816 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:54.799289 kubelet[2816]: W0904 23:45:54.799208 2816 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 4 23:45:54.799289 kubelet[2816]: E0904 23:45:54.799272 2816 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-c33c3b40b5\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5"
Sep 4 23:45:56.084651 systemd[1]: Reload requested from client PID 3154 ('systemctl') (unit session-9.scope)...
Sep 4 23:45:56.084668 systemd[1]: Reloading...
Sep 4 23:45:56.185572 zram_generator::config[3205]: No configuration found.
Sep 4 23:45:56.337057 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:56.459041 systemd[1]: Reloading finished in 373 ms.
Sep 4 23:45:56.481871 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:56.482592 kubelet[2816]: I0904 23:45:56.482120 2816 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 23:45:56.502100 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 23:45:56.502389 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:56.502466 systemd[1]: kubelet.service: Consumed 1.481s CPU time, 129.6M memory peak. Sep 4 23:45:56.509847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:56.635631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:56.645869 (kubelet)[3265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:45:56.775355 kubelet[3265]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:45:56.775355 kubelet[3265]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:45:56.775355 kubelet[3265]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 23:45:56.777078 kubelet[3265]: I0904 23:45:56.775451 3265 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:45:56.786628 kubelet[3265]: I0904 23:45:56.784667 3265 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:45:56.786628 kubelet[3265]: I0904 23:45:56.784701 3265 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:45:56.786628 kubelet[3265]: I0904 23:45:56.785173 3265 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:45:56.787766 kubelet[3265]: I0904 23:45:56.787732 3265 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 23:45:56.790676 kubelet[3265]: I0904 23:45:56.790642 3265 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:45:56.796224 kubelet[3265]: E0904 23:45:56.796161 3265 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:45:56.796224 kubelet[3265]: I0904 23:45:56.796215 3265 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:45:56.800500 kubelet[3265]: I0904 23:45:56.800459 3265 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:45:56.800718 kubelet[3265]: I0904 23:45:56.800679 3265 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:45:56.800890 kubelet[3265]: I0904 23:45:56.800711 3265 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-c33c3b40b5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:45:56.800975 kubelet[3265]: I0904 23:45:56.800898 3265 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 4 23:45:56.800975 kubelet[3265]: I0904 23:45:56.800907 3265 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:45:56.800975 kubelet[3265]: I0904 23:45:56.800951 3265 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:45:56.801078 kubelet[3265]: I0904 23:45:56.801056 3265 kubelet.go:446] "Attempting to sync node with API server" Sep 4 23:45:56.801078 kubelet[3265]: I0904 23:45:56.801075 3265 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:45:56.801133 kubelet[3265]: I0904 23:45:56.801093 3265 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:45:56.801133 kubelet[3265]: I0904 23:45:56.801102 3265 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:45:56.804329 kubelet[3265]: I0904 23:45:56.804297 3265 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:45:56.804864 kubelet[3265]: I0904 23:45:56.804825 3265 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:45:56.805376 kubelet[3265]: I0904 23:45:56.805256 3265 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:45:56.805376 kubelet[3265]: I0904 23:45:56.805295 3265 server.go:1287] "Started kubelet" Sep 4 23:45:56.809653 kubelet[3265]: I0904 23:45:56.809421 3265 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:45:56.810750 kubelet[3265]: I0904 23:45:56.810723 3265 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:45:56.814568 kubelet[3265]: I0904 23:45:56.814350 3265 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:45:56.816774 kubelet[3265]: I0904 23:45:56.816701 3265 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:45:56.818333 kubelet[3265]: I0904 23:45:56.818305 3265 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:45:56.831573 kubelet[3265]: I0904 23:45:56.828914 3265 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:45:56.831573 kubelet[3265]: I0904 23:45:56.830022 3265 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:45:56.831573 kubelet[3265]: E0904 23:45:56.830252 3265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-c33c3b40b5\" not found" Sep 4 23:45:56.833853 kubelet[3265]: I0904 23:45:56.833812 3265 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:45:56.833987 kubelet[3265]: I0904 23:45:56.833965 3265 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:45:56.838610 kubelet[3265]: I0904 23:45:56.836821 3265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:45:56.838610 kubelet[3265]: I0904 23:45:56.837682 3265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:45:56.838610 kubelet[3265]: I0904 23:45:56.837704 3265 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:45:56.838610 kubelet[3265]: I0904 23:45:56.837723 3265 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 23:45:56.838610 kubelet[3265]: I0904 23:45:56.837729 3265 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:45:56.838610 kubelet[3265]: E0904 23:45:56.837935 3265 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:45:56.851571 kubelet[3265]: I0904 23:45:56.850730 3265 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:45:56.851571 kubelet[3265]: I0904 23:45:56.850948 3265 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:45:56.852218 kubelet[3265]: E0904 23:45:56.852185 3265 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:45:56.858669 kubelet[3265]: I0904 23:45:56.858625 3265 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:45:56.911494 kubelet[3265]: I0904 23:45:56.910671 3265 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:45:56.911739 kubelet[3265]: I0904 23:45:56.911725 3265 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:45:56.911813 kubelet[3265]: I0904 23:45:56.911805 3265 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:45:56.912055 kubelet[3265]: I0904 23:45:56.912037 3265 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:45:56.912321 kubelet[3265]: I0904 23:45:56.912289 3265 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:45:56.912436 kubelet[3265]: I0904 23:45:56.912426 3265 policy_none.go:49] "None policy: Start" Sep 4 23:45:56.912506 kubelet[3265]: I0904 23:45:56.912486 3265 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:45:56.912576 kubelet[3265]: I0904 23:45:56.912567 3265 
state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:45:56.912765 kubelet[3265]: I0904 23:45:56.912754 3265 state_mem.go:75] "Updated machine memory state" Sep 4 23:45:56.918171 kubelet[3265]: I0904 23:45:56.918145 3265 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:45:56.919354 kubelet[3265]: I0904 23:45:56.918706 3265 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:45:56.919354 kubelet[3265]: I0904 23:45:56.918729 3265 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:45:56.919354 kubelet[3265]: I0904 23:45:56.919066 3265 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:45:56.921448 kubelet[3265]: E0904 23:45:56.921363 3265 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:45:56.939620 kubelet[3265]: I0904 23:45:56.939007 3265 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:56.939620 kubelet[3265]: I0904 23:45:56.939429 3265 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:56.941933 kubelet[3265]: I0904 23:45:56.941909 3265 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:56.953595 kubelet[3265]: W0904 23:45:56.953563 3265 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:45:56.953848 kubelet[3265]: E0904 23:45:56.953814 3265 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-n-c33c3b40b5\" already exists" 
pod="kube-system/kube-scheduler-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:56.957570 kubelet[3265]: W0904 23:45:56.957507 3265 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:45:56.957675 kubelet[3265]: E0904 23:45:56.957587 3265 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.2-n-c33c3b40b5\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:56.957675 kubelet[3265]: W0904 23:45:56.957638 3265 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:45:56.957675 kubelet[3265]: E0904 23:45:56.957662 3265 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-c33c3b40b5\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.022298 kubelet[3265]: I0904 23:45:57.022271 3265 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.035441 kubelet[3265]: I0904 23:45:57.035171 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f4f117128f8dc07e3398f2f1f5bf018-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-c33c3b40b5\" (UID: \"9f4f117128f8dc07e3398f2f1f5bf018\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.035441 kubelet[3265]: I0904 23:45:57.035211 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f9ce86160845ff852eaff2fdd5adb70-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-c33c3b40b5\" (UID: \"0f9ce86160845ff852eaff2fdd5adb70\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5" Sep 
4 23:45:57.035441 kubelet[3265]: I0904 23:45:57.035246 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f9ce86160845ff852eaff2fdd5adb70-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-c33c3b40b5\" (UID: \"0f9ce86160845ff852eaff2fdd5adb70\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.035441 kubelet[3265]: I0904 23:45:57.035262 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6f9440f0e2ee4e8090fc10e10f1bf3b-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-c33c3b40b5\" (UID: \"d6f9440f0e2ee4e8090fc10e10f1bf3b\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.035441 kubelet[3265]: I0904 23:45:57.035277 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f4f117128f8dc07e3398f2f1f5bf018-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-c33c3b40b5\" (UID: \"9f4f117128f8dc07e3398f2f1f5bf018\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.035700 kubelet[3265]: I0904 23:45:57.035292 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f9ce86160845ff852eaff2fdd5adb70-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-c33c3b40b5\" (UID: \"0f9ce86160845ff852eaff2fdd5adb70\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.035700 kubelet[3265]: I0904 23:45:57.035308 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f9ce86160845ff852eaff2fdd5adb70-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-c33c3b40b5\" 
(UID: \"0f9ce86160845ff852eaff2fdd5adb70\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.035700 kubelet[3265]: I0904 23:45:57.035324 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f9ce86160845ff852eaff2fdd5adb70-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-c33c3b40b5\" (UID: \"0f9ce86160845ff852eaff2fdd5adb70\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.035700 kubelet[3265]: I0904 23:45:57.035366 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f4f117128f8dc07e3398f2f1f5bf018-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-c33c3b40b5\" (UID: \"9f4f117128f8dc07e3398f2f1f5bf018\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.038777 kubelet[3265]: I0904 23:45:57.038197 3265 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.038777 kubelet[3265]: I0904 23:45:57.038290 3265 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.136780 sudo[3300]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 23:45:57.137442 sudo[3300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 23:45:57.599079 sudo[3300]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:57.809612 kubelet[3265]: I0904 23:45:57.809460 3265 apiserver.go:52] "Watching apiserver" Sep 4 23:45:57.834702 kubelet[3265]: I0904 23:45:57.834655 3265 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:45:57.886967 kubelet[3265]: I0904 23:45:57.885848 3265 kubelet.go:3194] 
"Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.886967 kubelet[3265]: I0904 23:45:57.886147 3265 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.899198 kubelet[3265]: W0904 23:45:57.898677 3265 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:45:57.899198 kubelet[3265]: E0904 23:45:57.898765 3265 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-c33c3b40b5\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.899933 kubelet[3265]: W0904 23:45:57.899719 3265 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 23:45:57.899933 kubelet[3265]: E0904 23:45:57.899861 3265 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-n-c33c3b40b5\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.2-n-c33c3b40b5" Sep 4 23:45:57.925074 kubelet[3265]: I0904 23:45:57.924660 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.2-n-c33c3b40b5" podStartSLOduration=3.924529136 podStartE2EDuration="3.924529136s" podCreationTimestamp="2025-09-04 23:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:57.913686262 +0000 UTC m=+1.263327973" watchObservedRunningTime="2025-09-04 23:45:57.924529136 +0000 UTC m=+1.274170847" Sep 4 23:45:57.938236 kubelet[3265]: I0904 23:45:57.937731 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-c33c3b40b5" 
podStartSLOduration=3.937715329 podStartE2EDuration="3.937715329s" podCreationTimestamp="2025-09-04 23:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:57.937700969 +0000 UTC m=+1.287342680" watchObservedRunningTime="2025-09-04 23:45:57.937715329 +0000 UTC m=+1.287357040" Sep 4 23:45:57.938236 kubelet[3265]: I0904 23:45:57.937863 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.2-n-c33c3b40b5" podStartSLOduration=3.937858249 podStartE2EDuration="3.937858249s" podCreationTimestamp="2025-09-04 23:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:57.925796376 +0000 UTC m=+1.275438087" watchObservedRunningTime="2025-09-04 23:45:57.937858249 +0000 UTC m=+1.287499920" Sep 4 23:45:59.595740 sudo[2258]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:59.680586 sshd[2257]: Connection closed by 10.200.16.10 port 43088 Sep 4 23:45:59.681191 sshd-session[2255]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:59.685092 systemd-logind[1702]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:45:59.686105 systemd[1]: sshd@6-10.200.20.37:22-10.200.16.10:43088.service: Deactivated successfully. Sep 4 23:45:59.689032 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 23:45:59.689410 systemd[1]: session-9.scope: Consumed 6.782s CPU time, 263.9M memory peak. Sep 4 23:45:59.692181 systemd-logind[1702]: Removed session 9. 
Sep 4 23:46:00.771845 kubelet[3265]: I0904 23:46:00.771773 3265 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:46:00.772560 containerd[1724]: time="2025-09-04T23:46:00.772506578Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 23:46:00.773660 kubelet[3265]: I0904 23:46:00.772723 3265 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:46:01.882259 systemd[1]: Created slice kubepods-besteffort-podb7264a99_b8a4_4fd3_86a2_5befd85e1753.slice - libcontainer container kubepods-besteffort-podb7264a99_b8a4_4fd3_86a2_5befd85e1753.slice. Sep 4 23:46:01.894868 kubelet[3265]: W0904 23:46:01.894519 3265 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230.2.2-n-c33c3b40b5" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-n-c33c3b40b5' and this object Sep 4 23:46:01.894868 kubelet[3265]: E0904 23:46:01.894595 3265 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230.2.2-n-c33c3b40b5\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-n-c33c3b40b5' and this object" logger="UnhandledError" Sep 4 23:46:01.894868 kubelet[3265]: W0904 23:46:01.894648 3265 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230.2.2-n-c33c3b40b5" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-n-c33c3b40b5' and this object Sep 4 
23:46:01.894868 kubelet[3265]: E0904 23:46:01.894659 3265 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230.2.2-n-c33c3b40b5\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-n-c33c3b40b5' and this object" logger="UnhandledError" Sep 4 23:46:01.894868 kubelet[3265]: W0904 23:46:01.894687 3265 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230.2.2-n-c33c3b40b5" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.2-n-c33c3b40b5' and this object Sep 4 23:46:01.895308 kubelet[3265]: E0904 23:46:01.894704 3265 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230.2.2-n-c33c3b40b5\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-n-c33c3b40b5' and this object" logger="UnhandledError" Sep 4 23:46:01.902799 systemd[1]: Created slice kubepods-burstable-podc596f358_2e27_46a5_9f45_d97714fe7111.slice - libcontainer container kubepods-burstable-podc596f358_2e27_46a5_9f45_d97714fe7111.slice. 
Sep 4 23:46:01.967186 kubelet[3265]: I0904 23:46:01.966960 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-host-proc-sys-net\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch" Sep 4 23:46:01.967186 kubelet[3265]: I0904 23:46:01.967000 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-cgroup\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch" Sep 4 23:46:01.967186 kubelet[3265]: I0904 23:46:01.967016 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c596f358-2e27-46a5-9f45-d97714fe7111-clustermesh-secrets\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch" Sep 4 23:46:01.967186 kubelet[3265]: I0904 23:46:01.967036 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c596f358-2e27-46a5-9f45-d97714fe7111-hubble-tls\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch" Sep 4 23:46:01.967186 kubelet[3265]: I0904 23:46:01.967065 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-bpf-maps\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch" Sep 4 23:46:01.967186 kubelet[3265]: I0904 23:46:01.967085 3265 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-etc-cni-netd\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch"
Sep 4 23:46:01.967437 kubelet[3265]: I0904 23:46:01.967100 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx58p\" (UniqueName: \"kubernetes.io/projected/c596f358-2e27-46a5-9f45-d97714fe7111-kube-api-access-fx58p\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch"
Sep 4 23:46:01.967437 kubelet[3265]: I0904 23:46:01.967160 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7264a99-b8a4-4fd3-86a2-5befd85e1753-lib-modules\") pod \"kube-proxy-qfkxj\" (UID: \"b7264a99-b8a4-4fd3-86a2-5befd85e1753\") " pod="kube-system/kube-proxy-qfkxj"
Sep 4 23:46:01.967437 kubelet[3265]: I0904 23:46:01.967195 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-lib-modules\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch"
Sep 4 23:46:01.967437 kubelet[3265]: I0904 23:46:01.967212 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-xtables-lock\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch"
Sep 4 23:46:01.967437 kubelet[3265]: I0904 23:46:01.967247 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-config-path\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch"
Sep 4 23:46:01.967437 kubelet[3265]: I0904 23:46:01.967263 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-cni-path\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch"
Sep 4 23:46:01.967594 kubelet[3265]: I0904 23:46:01.967279 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-host-proc-sys-kernel\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch"
Sep 4 23:46:01.967594 kubelet[3265]: I0904 23:46:01.967295 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7264a99-b8a4-4fd3-86a2-5befd85e1753-xtables-lock\") pod \"kube-proxy-qfkxj\" (UID: \"b7264a99-b8a4-4fd3-86a2-5befd85e1753\") " pod="kube-system/kube-proxy-qfkxj"
Sep 4 23:46:01.967594 kubelet[3265]: I0904 23:46:01.967322 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-run\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch"
Sep 4 23:46:01.967594 kubelet[3265]: I0904 23:46:01.967336 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-hostproc\") pod \"cilium-794ch\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " pod="kube-system/cilium-794ch"
Sep 4 23:46:01.967594 kubelet[3265]: I0904 23:46:01.967352 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7264a99-b8a4-4fd3-86a2-5befd85e1753-kube-proxy\") pod \"kube-proxy-qfkxj\" (UID: \"b7264a99-b8a4-4fd3-86a2-5befd85e1753\") " pod="kube-system/kube-proxy-qfkxj"
Sep 4 23:46:01.967697 kubelet[3265]: I0904 23:46:01.967366 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxgc2\" (UniqueName: \"kubernetes.io/projected/b7264a99-b8a4-4fd3-86a2-5befd85e1753-kube-api-access-xxgc2\") pod \"kube-proxy-qfkxj\" (UID: \"b7264a99-b8a4-4fd3-86a2-5befd85e1753\") " pod="kube-system/kube-proxy-qfkxj"
Sep 4 23:46:01.992611 systemd[1]: Created slice kubepods-besteffort-pod2322a9d1_4c14_45a9_9d0d_d37bcf8be8e4.slice - libcontainer container kubepods-besteffort-pod2322a9d1_4c14_45a9_9d0d_d37bcf8be8e4.slice.
Sep 4 23:46:02.070562 kubelet[3265]: I0904 23:46:02.068009 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv8fg\" (UniqueName: \"kubernetes.io/projected/2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4-kube-api-access-kv8fg\") pod \"cilium-operator-6c4d7847fc-7hb7q\" (UID: \"2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4\") " pod="kube-system/cilium-operator-6c4d7847fc-7hb7q"
Sep 4 23:46:02.070562 kubelet[3265]: I0904 23:46:02.068081 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7hb7q\" (UID: \"2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4\") " pod="kube-system/cilium-operator-6c4d7847fc-7hb7q"
Sep 4 23:46:02.194734 containerd[1724]: time="2025-09-04T23:46:02.193295144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qfkxj,Uid:b7264a99-b8a4-4fd3-86a2-5befd85e1753,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:02.260574 containerd[1724]: time="2025-09-04T23:46:02.260173942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:02.260574 containerd[1724]: time="2025-09-04T23:46:02.260237742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:02.260574 containerd[1724]: time="2025-09-04T23:46:02.260252902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:02.260970 containerd[1724]: time="2025-09-04T23:46:02.260506902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:02.290059 systemd[1]: Started cri-containerd-b068f0be9230f6342486a895739fee9d68af48b87fed89d84095f3a29ce9c2f1.scope - libcontainer container b068f0be9230f6342486a895739fee9d68af48b87fed89d84095f3a29ce9c2f1.
Sep 4 23:46:02.316479 containerd[1724]: time="2025-09-04T23:46:02.316430593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qfkxj,Uid:b7264a99-b8a4-4fd3-86a2-5befd85e1753,Namespace:kube-system,Attempt:0,} returns sandbox id \"b068f0be9230f6342486a895739fee9d68af48b87fed89d84095f3a29ce9c2f1\""
Sep 4 23:46:02.322136 containerd[1724]: time="2025-09-04T23:46:02.321837106Z" level=info msg="CreateContainer within sandbox \"b068f0be9230f6342486a895739fee9d68af48b87fed89d84095f3a29ce9c2f1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 23:46:02.527203 containerd[1724]: time="2025-09-04T23:46:02.527067613Z" level=info msg="CreateContainer within sandbox \"b068f0be9230f6342486a895739fee9d68af48b87fed89d84095f3a29ce9c2f1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0322d4bb6960454547c146ace3e5897706419c7fd0ee023895ba7cee751c550e\""
Sep 4 23:46:02.528424 containerd[1724]: time="2025-09-04T23:46:02.528228451Z" level=info msg="StartContainer for \"0322d4bb6960454547c146ace3e5897706419c7fd0ee023895ba7cee751c550e\""
Sep 4 23:46:02.555775 systemd[1]: Started cri-containerd-0322d4bb6960454547c146ace3e5897706419c7fd0ee023895ba7cee751c550e.scope - libcontainer container 0322d4bb6960454547c146ace3e5897706419c7fd0ee023895ba7cee751c550e.
Sep 4 23:46:02.591660 containerd[1724]: time="2025-09-04T23:46:02.591604653Z" level=info msg="StartContainer for \"0322d4bb6960454547c146ace3e5897706419c7fd0ee023895ba7cee751c550e\" returns successfully"
Sep 4 23:46:02.983039 kubelet[3265]: I0904 23:46:02.982968 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qfkxj" podStartSLOduration=1.98294937 podStartE2EDuration="1.98294937s" podCreationTimestamp="2025-09-04 23:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:02.914696054 +0000 UTC m=+6.264337725" watchObservedRunningTime="2025-09-04 23:46:02.98294937 +0000 UTC m=+6.332591081"
Sep 4 23:46:03.068870 kubelet[3265]: E0904 23:46:03.068490 3265 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Sep 4 23:46:03.068870 kubelet[3265]: E0904 23:46:03.068587 3265 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-config-path podName:c596f358-2e27-46a5-9f45-d97714fe7111 nodeName:}" failed. No retries permitted until 2025-09-04 23:46:03.568566264 +0000 UTC m=+6.918207975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-config-path") pod "cilium-794ch" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111") : failed to sync configmap cache: timed out waiting for the condition
Sep 4 23:46:03.068870 kubelet[3265]: E0904 23:46:03.068627 3265 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Sep 4 23:46:03.068870 kubelet[3265]: E0904 23:46:03.068636 3265 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-794ch: failed to sync secret cache: timed out waiting for the condition
Sep 4 23:46:03.068870 kubelet[3265]: E0904 23:46:03.068661 3265 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c596f358-2e27-46a5-9f45-d97714fe7111-hubble-tls podName:c596f358-2e27-46a5-9f45-d97714fe7111 nodeName:}" failed. No retries permitted until 2025-09-04 23:46:03.568654024 +0000 UTC m=+6.918295735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/c596f358-2e27-46a5-9f45-d97714fe7111-hubble-tls") pod "cilium-794ch" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111") : failed to sync secret cache: timed out waiting for the condition
Sep 4 23:46:03.169338 kubelet[3265]: E0904 23:46:03.169303 3265 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Sep 4 23:46:03.169953 kubelet[3265]: E0904 23:46:03.169572 3265 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4-cilium-config-path podName:2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4 nodeName:}" failed. No retries permitted until 2025-09-04 23:46:03.66952038 +0000 UTC m=+7.019162051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4-cilium-config-path") pod "cilium-operator-6c4d7847fc-7hb7q" (UID: "2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4") : failed to sync configmap cache: timed out waiting for the condition
Sep 4 23:46:03.711725 containerd[1724]: time="2025-09-04T23:46:03.711622273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-794ch,Uid:c596f358-2e27-46a5-9f45-d97714fe7111,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:03.758909 containerd[1724]: time="2025-09-04T23:46:03.758513485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:03.758909 containerd[1724]: time="2025-09-04T23:46:03.758600885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:03.758909 containerd[1724]: time="2025-09-04T23:46:03.758611925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:03.758909 containerd[1724]: time="2025-09-04T23:46:03.758682165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:03.784752 systemd[1]: Started cri-containerd-22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4.scope - libcontainer container 22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4.
Sep 4 23:46:03.797050 containerd[1724]: time="2025-09-04T23:46:03.796997303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7hb7q,Uid:2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:03.807036 containerd[1724]: time="2025-09-04T23:46:03.806898857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-794ch,Uid:c596f358-2e27-46a5-9f45-d97714fe7111,Namespace:kube-system,Attempt:0,} returns sandbox id \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\""
Sep 4 23:46:03.808981 containerd[1724]: time="2025-09-04T23:46:03.808925335Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 4 23:46:03.839939 containerd[1724]: time="2025-09-04T23:46:03.839718317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:03.839939 containerd[1724]: time="2025-09-04T23:46:03.839771357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:03.839939 containerd[1724]: time="2025-09-04T23:46:03.839782157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:03.839939 containerd[1724]: time="2025-09-04T23:46:03.839853437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:03.857731 systemd[1]: Started cri-containerd-37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d.scope - libcontainer container 37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d.
Sep 4 23:46:03.892107 containerd[1724]: time="2025-09-04T23:46:03.891991926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7hb7q,Uid:2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d\""
Sep 4 23:46:09.315916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2407780500.mount: Deactivated successfully.
Sep 4 23:46:10.865820 containerd[1724]: time="2025-09-04T23:46:10.865741377Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:10.870169 containerd[1724]: time="2025-09-04T23:46:10.869509495Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 4 23:46:10.880841 containerd[1724]: time="2025-09-04T23:46:10.880776968Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:10.884325 containerd[1724]: time="2025-09-04T23:46:10.884108726Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.075134991s"
Sep 4 23:46:10.884325 containerd[1724]: time="2025-09-04T23:46:10.884166886Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 4 23:46:10.888304 containerd[1724]: time="2025-09-04T23:46:10.886579204Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 23:46:10.891335 containerd[1724]: time="2025-09-04T23:46:10.891281442Z" level=info msg="CreateContainer within sandbox \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:46:11.424946 containerd[1724]: time="2025-09-04T23:46:11.424894404Z" level=info msg="CreateContainer within sandbox \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1\""
Sep 4 23:46:11.426383 containerd[1724]: time="2025-09-04T23:46:11.425598763Z" level=info msg="StartContainer for \"6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1\""
Sep 4 23:46:11.462731 systemd[1]: Started cri-containerd-6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1.scope - libcontainer container 6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1.
Sep 4 23:46:11.491212 containerd[1724]: time="2025-09-04T23:46:11.491083684Z" level=info msg="StartContainer for \"6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1\" returns successfully"
Sep 4 23:46:11.496955 systemd[1]: cri-containerd-6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1.scope: Deactivated successfully.
Sep 4 23:46:12.409464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1-rootfs.mount: Deactivated successfully.
Sep 4 23:46:12.748326 containerd[1724]: time="2025-09-04T23:46:12.748036135Z" level=info msg="shim disconnected" id=6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1 namespace=k8s.io
Sep 4 23:46:12.748326 containerd[1724]: time="2025-09-04T23:46:12.748092135Z" level=warning msg="cleaning up after shim disconnected" id=6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1 namespace=k8s.io
Sep 4 23:46:12.748326 containerd[1724]: time="2025-09-04T23:46:12.748100295Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:12.939454 containerd[1724]: time="2025-09-04T23:46:12.939375621Z" level=info msg="CreateContainer within sandbox \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:46:13.005107 containerd[1724]: time="2025-09-04T23:46:13.004922702Z" level=info msg="CreateContainer within sandbox \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4\""
Sep 4 23:46:13.006410 containerd[1724]: time="2025-09-04T23:46:13.006375101Z" level=info msg="StartContainer for \"a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4\""
Sep 4 23:46:13.035769 systemd[1]: Started cri-containerd-a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4.scope - libcontainer container a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4.
Sep 4 23:46:13.064592 containerd[1724]: time="2025-09-04T23:46:13.064123026Z" level=info msg="StartContainer for \"a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4\" returns successfully"
Sep 4 23:46:13.075253 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:46:13.076087 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:46:13.076290 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:46:13.082496 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:46:13.082764 systemd[1]: cri-containerd-a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4.scope: Deactivated successfully.
Sep 4 23:46:13.112761 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:46:13.121254 containerd[1724]: time="2025-09-04T23:46:13.121177032Z" level=info msg="shim disconnected" id=a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4 namespace=k8s.io
Sep 4 23:46:13.121492 containerd[1724]: time="2025-09-04T23:46:13.121306512Z" level=warning msg="cleaning up after shim disconnected" id=a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4 namespace=k8s.io
Sep 4 23:46:13.121492 containerd[1724]: time="2025-09-04T23:46:13.121317192Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:13.409645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4-rootfs.mount: Deactivated successfully.
Sep 4 23:46:13.497689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4239961062.mount: Deactivated successfully.
Sep 4 23:46:13.939518 containerd[1724]: time="2025-09-04T23:46:13.939477585Z" level=info msg="CreateContainer within sandbox \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:46:13.992889 containerd[1724]: time="2025-09-04T23:46:13.992444753Z" level=info msg="CreateContainer within sandbox \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d6167233441fffe6d948b46d3f0e3033332953277a2c6dca516e95698b1b4504\""
Sep 4 23:46:13.995767 containerd[1724]: time="2025-09-04T23:46:13.995733311Z" level=info msg="StartContainer for \"d6167233441fffe6d948b46d3f0e3033332953277a2c6dca516e95698b1b4504\""
Sep 4 23:46:14.052750 systemd[1]: Started cri-containerd-d6167233441fffe6d948b46d3f0e3033332953277a2c6dca516e95698b1b4504.scope - libcontainer container d6167233441fffe6d948b46d3f0e3033332953277a2c6dca516e95698b1b4504.
Sep 4 23:46:14.102413 systemd[1]: cri-containerd-d6167233441fffe6d948b46d3f0e3033332953277a2c6dca516e95698b1b4504.scope: Deactivated successfully.
Sep 4 23:46:14.105824 containerd[1724]: time="2025-09-04T23:46:14.105781606Z" level=info msg="StartContainer for \"d6167233441fffe6d948b46d3f0e3033332953277a2c6dca516e95698b1b4504\" returns successfully"
Sep 4 23:46:14.291026 containerd[1724]: time="2025-09-04T23:46:14.290901295Z" level=info msg="shim disconnected" id=d6167233441fffe6d948b46d3f0e3033332953277a2c6dca516e95698b1b4504 namespace=k8s.io
Sep 4 23:46:14.292260 containerd[1724]: time="2025-09-04T23:46:14.292007375Z" level=warning msg="cleaning up after shim disconnected" id=d6167233441fffe6d948b46d3f0e3033332953277a2c6dca516e95698b1b4504 namespace=k8s.io
Sep 4 23:46:14.292260 containerd[1724]: time="2025-09-04T23:46:14.292114494Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:14.446885 containerd[1724]: time="2025-09-04T23:46:14.446830762Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:14.450197 containerd[1724]: time="2025-09-04T23:46:14.450132920Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 4 23:46:14.453570 containerd[1724]: time="2025-09-04T23:46:14.453501678Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:14.455247 containerd[1724]: time="2025-09-04T23:46:14.455104437Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.568478153s"
Sep 4 23:46:14.455247 containerd[1724]: time="2025-09-04T23:46:14.455144157Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 4 23:46:14.458066 containerd[1724]: time="2025-09-04T23:46:14.458026796Z" level=info msg="CreateContainer within sandbox \"37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 23:46:14.482335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698648872.mount: Deactivated successfully.
Sep 4 23:46:14.492757 containerd[1724]: time="2025-09-04T23:46:14.492700055Z" level=info msg="CreateContainer within sandbox \"37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8\""
Sep 4 23:46:14.493515 containerd[1724]: time="2025-09-04T23:46:14.493349015Z" level=info msg="StartContainer for \"5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8\""
Sep 4 23:46:14.526753 systemd[1]: Started cri-containerd-5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8.scope - libcontainer container 5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8.
Sep 4 23:46:14.558066 containerd[1724]: time="2025-09-04T23:46:14.558012216Z" level=info msg="StartContainer for \"5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8\" returns successfully"
Sep 4 23:46:14.945835 containerd[1724]: time="2025-09-04T23:46:14.945718905Z" level=info msg="CreateContainer within sandbox \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:46:14.982779 containerd[1724]: time="2025-09-04T23:46:14.982711801Z" level=info msg="CreateContainer within sandbox \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb\""
Sep 4 23:46:14.984422 containerd[1724]: time="2025-09-04T23:46:14.983460320Z" level=info msg="StartContainer for \"f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb\""
Sep 4 23:46:14.997351 kubelet[3265]: I0904 23:46:14.997282 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7hb7q" podStartSLOduration=3.434608359 podStartE2EDuration="13.997263511s" podCreationTimestamp="2025-09-04 23:46:01 +0000 UTC" firstStartedPulling="2025-09-04 23:46:03.893301685 +0000 UTC m=+7.242943356" lastFinishedPulling="2025-09-04 23:46:14.455956797 +0000 UTC m=+17.805598508" observedRunningTime="2025-09-04 23:46:14.960650655 +0000 UTC m=+18.310292326" watchObservedRunningTime="2025-09-04 23:46:14.997263511 +0000 UTC m=+18.346905182"
Sep 4 23:46:15.023773 systemd[1]: Started cri-containerd-f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb.scope - libcontainer container f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb.
Sep 4 23:46:15.057696 systemd[1]: cri-containerd-f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb.scope: Deactivated successfully.
Sep 4 23:46:15.062382 containerd[1724]: time="2025-09-04T23:46:15.062111309Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc596f358_2e27_46a5_9f45_d97714fe7111.slice/cri-containerd-f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb.scope/memory.events\": no such file or directory"
Sep 4 23:46:15.068805 containerd[1724]: time="2025-09-04T23:46:15.068752824Z" level=info msg="StartContainer for \"f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb\" returns successfully"
Sep 4 23:46:15.197323 containerd[1724]: time="2025-09-04T23:46:15.197160780Z" level=info msg="shim disconnected" id=f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb namespace=k8s.io
Sep 4 23:46:15.197323 containerd[1724]: time="2025-09-04T23:46:15.197218740Z" level=warning msg="cleaning up after shim disconnected" id=f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb namespace=k8s.io
Sep 4 23:46:15.197323 containerd[1724]: time="2025-09-04T23:46:15.197226860Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:15.952439 containerd[1724]: time="2025-09-04T23:46:15.952318724Z" level=info msg="CreateContainer within sandbox \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:46:15.984978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2552103722.mount: Deactivated successfully.
Sep 4 23:46:16.004892 containerd[1724]: time="2025-09-04T23:46:16.004765089Z" level=info msg="CreateContainer within sandbox \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25\""
Sep 4 23:46:16.005737 containerd[1724]: time="2025-09-04T23:46:16.005693689Z" level=info msg="StartContainer for \"bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25\""
Sep 4 23:46:16.035754 systemd[1]: Started cri-containerd-bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25.scope - libcontainer container bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25.
Sep 4 23:46:16.071373 containerd[1724]: time="2025-09-04T23:46:16.071319165Z" level=info msg="StartContainer for \"bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25\" returns successfully"
Sep 4 23:46:16.203665 kubelet[3265]: I0904 23:46:16.202820 3265 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 23:46:16.245048 systemd[1]: Created slice kubepods-burstable-pode05b5322_93b8_4593_8cc6_3c64df7865dc.slice - libcontainer container kubepods-burstable-pode05b5322_93b8_4593_8cc6_3c64df7865dc.slice.
Sep 4 23:46:16.255142 systemd[1]: Created slice kubepods-burstable-pod346a8623_aab7_4487_9e12_e94979bd8505.slice - libcontainer container kubepods-burstable-pod346a8623_aab7_4487_9e12_e94979bd8505.slice.
Sep 4 23:46:16.270017 kubelet[3265]: I0904 23:46:16.269979 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e05b5322-93b8-4593-8cc6-3c64df7865dc-config-volume\") pod \"coredns-668d6bf9bc-bjhkg\" (UID: \"e05b5322-93b8-4593-8cc6-3c64df7865dc\") " pod="kube-system/coredns-668d6bf9bc-bjhkg"
Sep 4 23:46:16.270701 kubelet[3265]: I0904 23:46:16.270649 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/346a8623-aab7-4487-9e12-e94979bd8505-config-volume\") pod \"coredns-668d6bf9bc-6p524\" (UID: \"346a8623-aab7-4487-9e12-e94979bd8505\") " pod="kube-system/coredns-668d6bf9bc-6p524"
Sep 4 23:46:16.270877 kubelet[3265]: I0904 23:46:16.270805 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsfv2\" (UniqueName: \"kubernetes.io/projected/e05b5322-93b8-4593-8cc6-3c64df7865dc-kube-api-access-lsfv2\") pod \"coredns-668d6bf9bc-bjhkg\" (UID: \"e05b5322-93b8-4593-8cc6-3c64df7865dc\") " pod="kube-system/coredns-668d6bf9bc-bjhkg"
Sep 4 23:46:16.270877 kubelet[3265]: I0904 23:46:16.270841 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5wj7\" (UniqueName: \"kubernetes.io/projected/346a8623-aab7-4487-9e12-e94979bd8505-kube-api-access-c5wj7\") pod \"coredns-668d6bf9bc-6p524\" (UID: \"346a8623-aab7-4487-9e12-e94979bd8505\") " pod="kube-system/coredns-668d6bf9bc-6p524"
Sep 4 23:46:16.549078 containerd[1724]: time="2025-09-04T23:46:16.548725732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bjhkg,Uid:e05b5322-93b8-4593-8cc6-3c64df7865dc,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:16.558319 containerd[1724]: time="2025-09-04T23:46:16.558262125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6p524,Uid:346a8623-aab7-4487-9e12-e94979bd8505,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:18.465826 systemd-networkd[1548]: cilium_host: Link UP
Sep 4 23:46:18.465932 systemd-networkd[1548]: cilium_net: Link UP
Sep 4 23:46:18.465936 systemd-networkd[1548]: cilium_net: Gained carrier
Sep 4 23:46:18.466063 systemd-networkd[1548]: cilium_host: Gained carrier
Sep 4 23:46:18.573711 systemd-networkd[1548]: cilium_host: Gained IPv6LL
Sep 4 23:46:18.663042 systemd-networkd[1548]: cilium_vxlan: Link UP
Sep 4 23:46:18.663051 systemd-networkd[1548]: cilium_vxlan: Gained carrier
Sep 4 23:46:18.941733 systemd-networkd[1548]: cilium_net: Gained IPv6LL
Sep 4 23:46:18.996645 kernel: NET: Registered PF_ALG protocol family
Sep 4 23:46:19.860882 systemd-networkd[1548]: lxc_health: Link UP
Sep 4 23:46:19.861170 systemd-networkd[1548]: lxc_health: Gained carrier
Sep 4 23:46:20.141567 systemd-networkd[1548]: lxcd089f1f7f44a: Link UP
Sep 4 23:46:20.153585 kernel: eth0: renamed from tmp57e18
Sep 4 23:46:20.158695 systemd-networkd[1548]: lxcd089f1f7f44a: Gained carrier
Sep 4 23:46:20.177937 systemd-networkd[1548]: lxcfffb68089c97: Link UP
Sep 4 23:46:20.186572 kernel: eth0: renamed from tmpf0bee
Sep 4 23:46:20.192221 systemd-networkd[1548]: lxcfffb68089c97: Gained carrier
Sep 4 23:46:20.429742 systemd-networkd[1548]: cilium_vxlan: Gained IPv6LL
Sep 4 23:46:21.325740 systemd-networkd[1548]: lxc_health: Gained IPv6LL
Sep 4 23:46:21.742765 kubelet[3265]: I0904 23:46:21.742595 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-794ch" podStartSLOduration=13.665462089 podStartE2EDuration="20.742575518s" podCreationTimestamp="2025-09-04 23:46:01 +0000 UTC" firstStartedPulling="2025-09-04 23:46:03.808338216 +0000 UTC m=+7.157979927" lastFinishedPulling="2025-09-04 23:46:10.885451565 +0000 UTC m=+14.235093356" observedRunningTime="2025-09-04 23:46:16.999515835 +0000 UTC m=+20.349157546" watchObservedRunningTime="2025-09-04 23:46:21.742575518 +0000 UTC m=+25.092217229"
Sep 4 23:46:21.774698 systemd-networkd[1548]: lxcfffb68089c97: Gained IPv6LL
Sep 4 23:46:22.158757 systemd-networkd[1548]: lxcd089f1f7f44a: Gained IPv6LL
Sep 4 23:46:24.022245 containerd[1724]: time="2025-09-04T23:46:24.022154353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:24.023028 containerd[1724]: time="2025-09-04T23:46:24.022678233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:24.023028 containerd[1724]: time="2025-09-04T23:46:24.022733433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:24.023028 containerd[1724]: time="2025-09-04T23:46:24.022841273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:24.050565 containerd[1724]: time="2025-09-04T23:46:24.050307134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:24.050565 containerd[1724]: time="2025-09-04T23:46:24.050378014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:24.053144 containerd[1724]: time="2025-09-04T23:46:24.050393534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:24.053144 containerd[1724]: time="2025-09-04T23:46:24.052825732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:24.083770 systemd[1]: Started cri-containerd-57e18e07075d9115534886a0522e0449962c5f24e1be9b50a1dbfe6ec2752904.scope - libcontainer container 57e18e07075d9115534886a0522e0449962c5f24e1be9b50a1dbfe6ec2752904.
Sep 4 23:46:24.089070 systemd[1]: Started cri-containerd-f0beea6c52f5eb65eb4bc0fa3960ffff5b296cfae44bd558f374f024d3a36dd2.scope - libcontainer container f0beea6c52f5eb65eb4bc0fa3960ffff5b296cfae44bd558f374f024d3a36dd2.
Sep 4 23:46:24.140964 containerd[1724]: time="2025-09-04T23:46:24.140898232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6p524,Uid:346a8623-aab7-4487-9e12-e94979bd8505,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0beea6c52f5eb65eb4bc0fa3960ffff5b296cfae44bd558f374f024d3a36dd2\""
Sep 4 23:46:24.146892 containerd[1724]: time="2025-09-04T23:46:24.146441468Z" level=info msg="CreateContainer within sandbox \"f0beea6c52f5eb65eb4bc0fa3960ffff5b296cfae44bd558f374f024d3a36dd2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:46:24.155977 containerd[1724]: time="2025-09-04T23:46:24.155886462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bjhkg,Uid:e05b5322-93b8-4593-8cc6-3c64df7865dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"57e18e07075d9115534886a0522e0449962c5f24e1be9b50a1dbfe6ec2752904\""
Sep 4 23:46:24.162917 containerd[1724]: time="2025-09-04T23:46:24.162844977Z" level=info msg="CreateContainer within sandbox \"57e18e07075d9115534886a0522e0449962c5f24e1be9b50a1dbfe6ec2752904\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:46:24.186137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2760073308.mount: Deactivated successfully.
Sep 4 23:46:24.202575 containerd[1724]: time="2025-09-04T23:46:24.202484390Z" level=info msg="CreateContainer within sandbox \"f0beea6c52f5eb65eb4bc0fa3960ffff5b296cfae44bd558f374f024d3a36dd2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37035d606784e223232ebf70f9b259bed2e725fd85756ddb551a618aac3f4ae6\"" Sep 4 23:46:24.203264 containerd[1724]: time="2025-09-04T23:46:24.203223949Z" level=info msg="StartContainer for \"37035d606784e223232ebf70f9b259bed2e725fd85756ddb551a618aac3f4ae6\"" Sep 4 23:46:24.236006 containerd[1724]: time="2025-09-04T23:46:24.235728047Z" level=info msg="CreateContainer within sandbox \"57e18e07075d9115534886a0522e0449962c5f24e1be9b50a1dbfe6ec2752904\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ca60b7ce46afb1824e797a1120442bafabe8e1c96df8aff40988d51a3be2ac2\"" Sep 4 23:46:24.238883 containerd[1724]: time="2025-09-04T23:46:24.238663445Z" level=info msg="StartContainer for \"7ca60b7ce46afb1824e797a1120442bafabe8e1c96df8aff40988d51a3be2ac2\"" Sep 4 23:46:24.252011 systemd[1]: Started cri-containerd-37035d606784e223232ebf70f9b259bed2e725fd85756ddb551a618aac3f4ae6.scope - libcontainer container 37035d606784e223232ebf70f9b259bed2e725fd85756ddb551a618aac3f4ae6. Sep 4 23:46:24.274999 systemd[1]: Started cri-containerd-7ca60b7ce46afb1824e797a1120442bafabe8e1c96df8aff40988d51a3be2ac2.scope - libcontainer container 7ca60b7ce46afb1824e797a1120442bafabe8e1c96df8aff40988d51a3be2ac2. 
Sep 4 23:46:24.298123 containerd[1724]: time="2025-09-04T23:46:24.297794365Z" level=info msg="StartContainer for \"37035d606784e223232ebf70f9b259bed2e725fd85756ddb551a618aac3f4ae6\" returns successfully" Sep 4 23:46:24.330241 containerd[1724]: time="2025-09-04T23:46:24.330064263Z" level=info msg="StartContainer for \"7ca60b7ce46afb1824e797a1120442bafabe8e1c96df8aff40988d51a3be2ac2\" returns successfully" Sep 4 23:46:25.015771 kubelet[3265]: I0904 23:46:25.015124 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bjhkg" podStartSLOduration=24.015105995 podStartE2EDuration="24.015105995s" podCreationTimestamp="2025-09-04 23:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:25.013776396 +0000 UTC m=+28.363418107" watchObservedRunningTime="2025-09-04 23:46:25.015105995 +0000 UTC m=+28.364747706" Sep 4 23:46:25.015771 kubelet[3265]: I0904 23:46:25.015215 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6p524" podStartSLOduration=24.015211875 podStartE2EDuration="24.015211875s" podCreationTimestamp="2025-09-04 23:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:24.995602088 +0000 UTC m=+28.345243799" watchObservedRunningTime="2025-09-04 23:46:25.015211875 +0000 UTC m=+28.364853586" Sep 4 23:48:02.129021 update_engine[1705]: I20250904 23:48:02.128907 1705 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 4 23:48:02.129021 update_engine[1705]: I20250904 23:48:02.128963 1705 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 4 23:48:02.129434 update_engine[1705]: I20250904 23:48:02.129156 1705 prefs.cc:52] aleph-version not present in 
/var/lib/update_engine/prefs Sep 4 23:48:02.129687 update_engine[1705]: I20250904 23:48:02.129513 1705 omaha_request_params.cc:62] Current group set to stable Sep 4 23:48:02.129687 update_engine[1705]: I20250904 23:48:02.129654 1705 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 4 23:48:02.129687 update_engine[1705]: I20250904 23:48:02.129664 1705 update_attempter.cc:643] Scheduling an action processor start. Sep 4 23:48:02.129687 update_engine[1705]: I20250904 23:48:02.129683 1705 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 23:48:02.129797 update_engine[1705]: I20250904 23:48:02.129714 1705 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 4 23:48:02.129797 update_engine[1705]: I20250904 23:48:02.129762 1705 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 23:48:02.129797 update_engine[1705]: I20250904 23:48:02.129770 1705 omaha_request_action.cc:272] Request: Sep 4 23:48:02.129797 update_engine[1705]: Sep 4 23:48:02.129797 update_engine[1705]: Sep 4 23:48:02.129797 update_engine[1705]: Sep 4 23:48:02.129797 update_engine[1705]: Sep 4 23:48:02.129797 update_engine[1705]: Sep 4 23:48:02.129797 update_engine[1705]: Sep 4 23:48:02.129797 update_engine[1705]: Sep 4 23:48:02.129797 update_engine[1705]: Sep 4 23:48:02.129797 update_engine[1705]: I20250904 23:48:02.129776 1705 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 23:48:02.130357 locksmithd[1817]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 4 23:48:02.130999 update_engine[1705]: I20250904 23:48:02.130965 1705 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 23:48:02.131371 update_engine[1705]: I20250904 23:48:02.131335 1705 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 4 23:48:02.237280 update_engine[1705]: E20250904 23:48:02.237218 1705 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 23:48:02.237406 update_engine[1705]: I20250904 23:48:02.237326 1705 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 4 23:48:03.460752 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.16.10:46104.service - OpenSSH per-connection server daemon (10.200.16.10:46104). Sep 4 23:48:03.963652 sshd[4658]: Accepted publickey for core from 10.200.16.10 port 46104 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:03.965137 sshd-session[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:03.970244 systemd-logind[1702]: New session 10 of user core. Sep 4 23:48:03.979699 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 23:48:04.403567 sshd[4661]: Connection closed by 10.200.16.10 port 46104 Sep 4 23:48:04.404150 sshd-session[4658]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:04.407570 systemd[1]: sshd@7-10.200.20.37:22-10.200.16.10:46104.service: Deactivated successfully. Sep 4 23:48:04.409739 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 23:48:04.412236 systemd-logind[1702]: Session 10 logged out. Waiting for processes to exit. Sep 4 23:48:04.413325 systemd-logind[1702]: Removed session 10. Sep 4 23:48:09.504841 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.16.10:46120.service - OpenSSH per-connection server daemon (10.200.16.10:46120). Sep 4 23:48:09.999454 sshd[4674]: Accepted publickey for core from 10.200.16.10 port 46120 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:10.000883 sshd-session[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:10.006429 systemd-logind[1702]: New session 11 of user core. Sep 4 23:48:10.016896 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 4 23:48:10.413708 sshd[4676]: Connection closed by 10.200.16.10 port 46120 Sep 4 23:48:10.413201 sshd-session[4674]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:10.416065 systemd[1]: sshd@8-10.200.20.37:22-10.200.16.10:46120.service: Deactivated successfully. Sep 4 23:48:10.418324 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 23:48:10.419994 systemd-logind[1702]: Session 11 logged out. Waiting for processes to exit. Sep 4 23:48:10.421327 systemd-logind[1702]: Removed session 11. Sep 4 23:48:12.033585 update_engine[1705]: I20250904 23:48:12.033170 1705 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 23:48:12.033585 update_engine[1705]: I20250904 23:48:12.033416 1705 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 23:48:12.034066 update_engine[1705]: I20250904 23:48:12.033696 1705 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 23:48:12.071112 update_engine[1705]: E20250904 23:48:12.071054 1705 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 23:48:12.071247 update_engine[1705]: I20250904 23:48:12.071142 1705 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 4 23:48:15.499698 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.16.10:42868.service - OpenSSH per-connection server daemon (10.200.16.10:42868). Sep 4 23:48:15.957810 sshd[4688]: Accepted publickey for core from 10.200.16.10 port 42868 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:15.959110 sshd-session[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:15.963550 systemd-logind[1702]: New session 12 of user core. Sep 4 23:48:15.966718 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 4 23:48:16.377610 sshd[4690]: Connection closed by 10.200.16.10 port 42868 Sep 4 23:48:16.378192 sshd-session[4688]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:16.381672 systemd[1]: sshd@9-10.200.20.37:22-10.200.16.10:42868.service: Deactivated successfully. Sep 4 23:48:16.384095 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 23:48:16.385175 systemd-logind[1702]: Session 12 logged out. Waiting for processes to exit. Sep 4 23:48:16.386318 systemd-logind[1702]: Removed session 12. Sep 4 23:48:21.470853 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.16.10:44820.service - OpenSSH per-connection server daemon (10.200.16.10:44820). Sep 4 23:48:21.977617 sshd[4703]: Accepted publickey for core from 10.200.16.10 port 44820 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:21.979106 sshd-session[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:21.984367 systemd-logind[1702]: New session 13 of user core. Sep 4 23:48:21.991747 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 23:48:22.031888 update_engine[1705]: I20250904 23:48:22.031340 1705 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 23:48:22.031888 update_engine[1705]: I20250904 23:48:22.031597 1705 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 23:48:22.031888 update_engine[1705]: I20250904 23:48:22.031842 1705 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 4 23:48:22.045385 update_engine[1705]: E20250904 23:48:22.045271 1705 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 23:48:22.045385 update_engine[1705]: I20250904 23:48:22.045359 1705 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 4 23:48:22.399393 sshd[4705]: Connection closed by 10.200.16.10 port 44820 Sep 4 23:48:22.399993 sshd-session[4703]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:22.403489 systemd[1]: sshd@10-10.200.20.37:22-10.200.16.10:44820.service: Deactivated successfully. Sep 4 23:48:22.405868 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 23:48:22.407465 systemd-logind[1702]: Session 13 logged out. Waiting for processes to exit. Sep 4 23:48:22.408521 systemd-logind[1702]: Removed session 13. Sep 4 23:48:22.485893 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.16.10:44826.service - OpenSSH per-connection server daemon (10.200.16.10:44826). Sep 4 23:48:22.943824 sshd[4718]: Accepted publickey for core from 10.200.16.10 port 44826 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:22.945272 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:22.950597 systemd-logind[1702]: New session 14 of user core. Sep 4 23:48:22.952882 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 23:48:23.389783 sshd[4720]: Connection closed by 10.200.16.10 port 44826 Sep 4 23:48:23.389271 sshd-session[4718]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:23.392094 systemd[1]: sshd@11-10.200.20.37:22-10.200.16.10:44826.service: Deactivated successfully. Sep 4 23:48:23.395242 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 23:48:23.397294 systemd-logind[1702]: Session 14 logged out. Waiting for processes to exit. Sep 4 23:48:23.398710 systemd-logind[1702]: Removed session 14. 
Sep 4 23:48:23.473649 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.16.10:44842.service - OpenSSH per-connection server daemon (10.200.16.10:44842). Sep 4 23:48:23.941580 sshd[4730]: Accepted publickey for core from 10.200.16.10 port 44842 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:23.943030 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:23.947723 systemd-logind[1702]: New session 15 of user core. Sep 4 23:48:23.955753 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 23:48:24.363566 sshd[4732]: Connection closed by 10.200.16.10 port 44842 Sep 4 23:48:24.364174 sshd-session[4730]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:24.368467 systemd[1]: sshd@12-10.200.20.37:22-10.200.16.10:44842.service: Deactivated successfully. Sep 4 23:48:24.371542 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 23:48:24.372488 systemd-logind[1702]: Session 15 logged out. Waiting for processes to exit. Sep 4 23:48:24.373658 systemd-logind[1702]: Removed session 15. Sep 4 23:48:29.455824 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.16.10:44846.service - OpenSSH per-connection server daemon (10.200.16.10:44846). Sep 4 23:48:29.911181 sshd[4744]: Accepted publickey for core from 10.200.16.10 port 44846 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:29.912582 sshd-session[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:29.916832 systemd-logind[1702]: New session 16 of user core. Sep 4 23:48:29.932719 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 23:48:30.316980 sshd[4746]: Connection closed by 10.200.16.10 port 44846 Sep 4 23:48:30.317783 sshd-session[4744]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:30.321329 systemd[1]: sshd@13-10.200.20.37:22-10.200.16.10:44846.service: Deactivated successfully. 
Sep 4 23:48:30.323306 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 23:48:30.324387 systemd-logind[1702]: Session 16 logged out. Waiting for processes to exit. Sep 4 23:48:30.325300 systemd-logind[1702]: Removed session 16. Sep 4 23:48:32.032542 update_engine[1705]: I20250904 23:48:32.032472 1705 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 23:48:32.032896 update_engine[1705]: I20250904 23:48:32.032723 1705 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 23:48:32.033017 update_engine[1705]: I20250904 23:48:32.032984 1705 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 23:48:32.040695 update_engine[1705]: E20250904 23:48:32.040655 1705 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 23:48:32.040802 update_engine[1705]: I20250904 23:48:32.040720 1705 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 23:48:32.040802 update_engine[1705]: I20250904 23:48:32.040730 1705 omaha_request_action.cc:617] Omaha request response: Sep 4 23:48:32.040848 update_engine[1705]: E20250904 23:48:32.040812 1705 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 4 23:48:32.040848 update_engine[1705]: I20250904 23:48:32.040834 1705 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 4 23:48:32.040848 update_engine[1705]: I20250904 23:48:32.040841 1705 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 23:48:32.040848 update_engine[1705]: I20250904 23:48:32.040845 1705 update_attempter.cc:306] Processing Done. Sep 4 23:48:32.040919 update_engine[1705]: E20250904 23:48:32.040861 1705 update_attempter.cc:619] Update failed. 
Sep 4 23:48:32.040919 update_engine[1705]: I20250904 23:48:32.040866 1705 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 4 23:48:32.040919 update_engine[1705]: I20250904 23:48:32.040871 1705 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 4 23:48:32.040919 update_engine[1705]: I20250904 23:48:32.040876 1705 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 4 23:48:32.041015 update_engine[1705]: I20250904 23:48:32.040942 1705 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 23:48:32.041015 update_engine[1705]: I20250904 23:48:32.040966 1705 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 23:48:32.041015 update_engine[1705]: I20250904 23:48:32.040971 1705 omaha_request_action.cc:272] Request: Sep 4 23:48:32.041015 update_engine[1705]: Sep 4 23:48:32.041015 update_engine[1705]: Sep 4 23:48:32.041015 update_engine[1705]: Sep 4 23:48:32.041015 update_engine[1705]: Sep 4 23:48:32.041015 update_engine[1705]: Sep 4 23:48:32.041015 update_engine[1705]: Sep 4 23:48:32.041015 update_engine[1705]: I20250904 23:48:32.040977 1705 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 23:48:32.041182 update_engine[1705]: I20250904 23:48:32.041111 1705 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 23:48:32.041466 locksmithd[1817]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 4 23:48:32.041773 update_engine[1705]: I20250904 23:48:32.041307 1705 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 4 23:48:32.086720 update_engine[1705]: E20250904 23:48:32.086663 1705 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 23:48:32.086861 update_engine[1705]: I20250904 23:48:32.086747 1705 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 23:48:32.086861 update_engine[1705]: I20250904 23:48:32.086756 1705 omaha_request_action.cc:617] Omaha request response: Sep 4 23:48:32.086861 update_engine[1705]: I20250904 23:48:32.086763 1705 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 23:48:32.086861 update_engine[1705]: I20250904 23:48:32.086767 1705 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 23:48:32.086861 update_engine[1705]: I20250904 23:48:32.086771 1705 update_attempter.cc:306] Processing Done. Sep 4 23:48:32.086861 update_engine[1705]: I20250904 23:48:32.086777 1705 update_attempter.cc:310] Error event sent. Sep 4 23:48:32.086861 update_engine[1705]: I20250904 23:48:32.086787 1705 update_check_scheduler.cc:74] Next update check in 43m41s Sep 4 23:48:32.087118 locksmithd[1817]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 4 23:48:35.418957 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.16.10:51204.service - OpenSSH per-connection server daemon (10.200.16.10:51204). Sep 4 23:48:35.917428 sshd[4759]: Accepted publickey for core from 10.200.16.10 port 51204 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:35.918843 sshd-session[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:35.923408 systemd-logind[1702]: New session 17 of user core. Sep 4 23:48:35.938771 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 4 23:48:36.337655 sshd[4761]: Connection closed by 10.200.16.10 port 51204 Sep 4 23:48:36.338225 sshd-session[4759]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:36.341701 systemd-logind[1702]: Session 17 logged out. Waiting for processes to exit. Sep 4 23:48:36.342525 systemd[1]: sshd@14-10.200.20.37:22-10.200.16.10:51204.service: Deactivated successfully. Sep 4 23:48:36.345030 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 23:48:36.349033 systemd-logind[1702]: Removed session 17. Sep 4 23:48:36.422198 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.16.10:51218.service - OpenSSH per-connection server daemon (10.200.16.10:51218). Sep 4 23:48:36.890414 sshd[4773]: Accepted publickey for core from 10.200.16.10 port 51218 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:36.891783 sshd-session[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:36.896864 systemd-logind[1702]: New session 18 of user core. Sep 4 23:48:36.903702 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 23:48:37.333622 sshd[4775]: Connection closed by 10.200.16.10 port 51218 Sep 4 23:48:37.334197 sshd-session[4773]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:37.338020 systemd[1]: sshd@15-10.200.20.37:22-10.200.16.10:51218.service: Deactivated successfully. Sep 4 23:48:37.340003 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 23:48:37.340928 systemd-logind[1702]: Session 18 logged out. Waiting for processes to exit. Sep 4 23:48:37.341867 systemd-logind[1702]: Removed session 18. Sep 4 23:48:37.420849 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.16.10:51222.service - OpenSSH per-connection server daemon (10.200.16.10:51222). 
Sep 4 23:48:37.875840 sshd[4785]: Accepted publickey for core from 10.200.16.10 port 51222 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:37.877159 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:37.882347 systemd-logind[1702]: New session 19 of user core. Sep 4 23:48:37.892772 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 23:48:38.677583 sshd[4787]: Connection closed by 10.200.16.10 port 51222 Sep 4 23:48:38.678142 sshd-session[4785]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:38.681441 systemd-logind[1702]: Session 19 logged out. Waiting for processes to exit. Sep 4 23:48:38.682075 systemd[1]: sshd@16-10.200.20.37:22-10.200.16.10:51222.service: Deactivated successfully. Sep 4 23:48:38.685295 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 23:48:38.686495 systemd-logind[1702]: Removed session 19. Sep 4 23:48:38.761282 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.16.10:51236.service - OpenSSH per-connection server daemon (10.200.16.10:51236). Sep 4 23:48:39.220516 sshd[4805]: Accepted publickey for core from 10.200.16.10 port 51236 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:39.221986 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:39.226483 systemd-logind[1702]: New session 20 of user core. Sep 4 23:48:39.232773 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 23:48:39.758547 sshd[4807]: Connection closed by 10.200.16.10 port 51236 Sep 4 23:48:39.758965 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:39.763381 systemd-logind[1702]: Session 20 logged out. Waiting for processes to exit. Sep 4 23:48:39.764315 systemd[1]: sshd@17-10.200.20.37:22-10.200.16.10:51236.service: Deactivated successfully. 
Sep 4 23:48:39.768640 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 23:48:39.770356 systemd-logind[1702]: Removed session 20. Sep 4 23:48:39.848206 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.16.10:41728.service - OpenSSH per-connection server daemon (10.200.16.10:41728). Sep 4 23:48:40.303061 sshd[4816]: Accepted publickey for core from 10.200.16.10 port 41728 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:40.304599 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:40.309906 systemd-logind[1702]: New session 21 of user core. Sep 4 23:48:40.317736 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 23:48:40.710781 sshd[4818]: Connection closed by 10.200.16.10 port 41728 Sep 4 23:48:40.711355 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:40.715244 systemd[1]: sshd@18-10.200.20.37:22-10.200.16.10:41728.service: Deactivated successfully. Sep 4 23:48:40.718959 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 23:48:40.720359 systemd-logind[1702]: Session 21 logged out. Waiting for processes to exit. Sep 4 23:48:40.721518 systemd-logind[1702]: Removed session 21. Sep 4 23:48:45.810833 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.16.10:41740.service - OpenSSH per-connection server daemon (10.200.16.10:41740). Sep 4 23:48:46.307121 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 41740 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:46.308496 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:46.312816 systemd-logind[1702]: New session 22 of user core. Sep 4 23:48:46.324740 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 4 23:48:46.733939 sshd[4833]: Connection closed by 10.200.16.10 port 41740 Sep 4 23:48:46.734839 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:46.738372 systemd[1]: sshd@19-10.200.20.37:22-10.200.16.10:41740.service: Deactivated successfully. Sep 4 23:48:46.741249 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 23:48:46.744402 systemd-logind[1702]: Session 22 logged out. Waiting for processes to exit. Sep 4 23:48:46.745447 systemd-logind[1702]: Removed session 22. Sep 4 23:48:51.817611 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.16.10:52520.service - OpenSSH per-connection server daemon (10.200.16.10:52520). Sep 4 23:48:52.277508 sshd[4845]: Accepted publickey for core from 10.200.16.10 port 52520 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:52.278852 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:52.283226 systemd-logind[1702]: New session 23 of user core. Sep 4 23:48:52.292758 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 23:48:52.687571 sshd[4847]: Connection closed by 10.200.16.10 port 52520 Sep 4 23:48:52.687958 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:52.692419 systemd[1]: sshd@20-10.200.20.37:22-10.200.16.10:52520.service: Deactivated successfully. Sep 4 23:48:52.695377 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 23:48:52.696675 systemd-logind[1702]: Session 23 logged out. Waiting for processes to exit. Sep 4 23:48:52.697675 systemd-logind[1702]: Removed session 23. Sep 4 23:48:57.785153 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.16.10:52532.service - OpenSSH per-connection server daemon (10.200.16.10:52532). 
Sep 4 23:48:58.241223 sshd[4860]: Accepted publickey for core from 10.200.16.10 port 52532 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:58.242615 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:58.247476 systemd-logind[1702]: New session 24 of user core. Sep 4 23:48:58.251694 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 23:48:58.647172 sshd[4862]: Connection closed by 10.200.16.10 port 52532 Sep 4 23:48:58.647784 sshd-session[4860]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:58.651216 systemd[1]: sshd@21-10.200.20.37:22-10.200.16.10:52532.service: Deactivated successfully. Sep 4 23:48:58.653941 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 23:48:58.655198 systemd-logind[1702]: Session 24 logged out. Waiting for processes to exit. Sep 4 23:48:58.658259 systemd-logind[1702]: Removed session 24. Sep 4 23:48:58.740008 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.16.10:52538.service - OpenSSH per-connection server daemon (10.200.16.10:52538). Sep 4 23:48:59.195590 sshd[4873]: Accepted publickey for core from 10.200.16.10 port 52538 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:48:59.197092 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:59.202287 systemd-logind[1702]: New session 25 of user core. Sep 4 23:48:59.208714 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 4 23:49:01.153863 containerd[1724]: time="2025-09-04T23:49:01.153801301Z" level=info msg="StopContainer for \"5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8\" with timeout 30 (s)" Sep 4 23:49:01.154737 containerd[1724]: time="2025-09-04T23:49:01.154701221Z" level=info msg="Stop container \"5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8\" with signal terminated" Sep 4 23:49:01.160030 containerd[1724]: time="2025-09-04T23:49:01.159244858Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:49:01.167937 containerd[1724]: time="2025-09-04T23:49:01.167506535Z" level=info msg="StopContainer for \"bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25\" with timeout 2 (s)" Sep 4 23:49:01.168316 containerd[1724]: time="2025-09-04T23:49:01.168276614Z" level=info msg="Stop container \"bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25\" with signal terminated" Sep 4 23:49:01.168892 systemd[1]: cri-containerd-5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8.scope: Deactivated successfully. Sep 4 23:49:01.184723 systemd-networkd[1548]: lxc_health: Link DOWN Sep 4 23:49:01.184732 systemd-networkd[1548]: lxc_health: Lost carrier Sep 4 23:49:01.207043 systemd[1]: cri-containerd-bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25.scope: Deactivated successfully. Sep 4 23:49:01.207367 systemd[1]: cri-containerd-bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25.scope: Consumed 6.715s CPU time, 124.7M memory peak, 128K read from disk, 12.9M written to disk. Sep 4 23:49:01.216018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8-rootfs.mount: Deactivated successfully. 
Sep 4 23:49:01.233755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25-rootfs.mount: Deactivated successfully. Sep 4 23:49:01.968184 kubelet[3265]: E0904 23:49:01.968119 3265 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:49:03.160570 sshd[4875]: Connection closed by 10.200.16.10 port 52538 Sep 4 23:49:03.161225 sshd-session[4873]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:03.165318 systemd[1]: sshd@22-10.200.20.37:22-10.200.16.10:52538.service: Deactivated successfully. Sep 4 23:49:03.168001 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 23:49:03.168328 systemd[1]: session-25.scope: Consumed 1.031s CPU time, 23M memory peak. Sep 4 23:49:03.169199 systemd-logind[1702]: Session 25 logged out. Waiting for processes to exit. Sep 4 23:49:03.170213 systemd-logind[1702]: Removed session 25. Sep 4 23:49:03.190935 containerd[1724]: time="2025-09-04T23:49:03.190885468Z" level=info msg="Kill container \"bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25\"" Sep 4 23:49:03.259041 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.16.10:39180.service - OpenSSH per-connection server daemon (10.200.16.10:39180). Sep 4 23:49:03.752281 sshd[4947]: Accepted publickey for core from 10.200.16.10 port 39180 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:49:03.753737 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:03.758871 systemd-logind[1702]: New session 26 of user core. Sep 4 23:49:03.764753 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 4 23:49:03.779717 containerd[1724]: time="2025-09-04T23:49:03.779650707Z" level=info msg="shim disconnected" id=bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25 namespace=k8s.io Sep 4 23:49:03.779717 containerd[1724]: time="2025-09-04T23:49:03.779704267Z" level=warning msg="cleaning up after shim disconnected" id=bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25 namespace=k8s.io Sep 4 23:49:03.779717 containerd[1724]: time="2025-09-04T23:49:03.779714227Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:03.780389 containerd[1724]: time="2025-09-04T23:49:03.780343867Z" level=info msg="shim disconnected" id=5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8 namespace=k8s.io Sep 4 23:49:03.780579 containerd[1724]: time="2025-09-04T23:49:03.780433787Z" level=warning msg="cleaning up after shim disconnected" id=5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8 namespace=k8s.io Sep 4 23:49:03.780579 containerd[1724]: time="2025-09-04T23:49:03.780445387Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:03.828293 containerd[1724]: time="2025-09-04T23:49:03.828240367Z" level=info msg="StopContainer for \"bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25\" returns successfully" Sep 4 23:49:03.829067 containerd[1724]: time="2025-09-04T23:49:03.829028207Z" level=info msg="StopPodSandbox for \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\"" Sep 4 23:49:03.830426 containerd[1724]: time="2025-09-04T23:49:03.829072327Z" level=info msg="Container to stop \"a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:49:03.830426 containerd[1724]: time="2025-09-04T23:49:03.829085167Z" level=info msg="Container to stop \"bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 
4 23:49:03.830426 containerd[1724]: time="2025-09-04T23:49:03.829094447Z" level=info msg="Container to stop \"6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:49:03.830426 containerd[1724]: time="2025-09-04T23:49:03.829104327Z" level=info msg="Container to stop \"d6167233441fffe6d948b46d3f0e3033332953277a2c6dca516e95698b1b4504\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:49:03.830426 containerd[1724]: time="2025-09-04T23:49:03.829112967Z" level=info msg="Container to stop \"f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:49:03.832353 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4-shm.mount: Deactivated successfully. Sep 4 23:49:03.833512 containerd[1724]: time="2025-09-04T23:49:03.832828045Z" level=info msg="StopContainer for \"5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8\" returns successfully" Sep 4 23:49:03.834202 containerd[1724]: time="2025-09-04T23:49:03.834068965Z" level=info msg="StopPodSandbox for \"37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d\"" Sep 4 23:49:03.834269 containerd[1724]: time="2025-09-04T23:49:03.834223205Z" level=info msg="Container to stop \"5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:49:03.838958 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d-shm.mount: Deactivated successfully. Sep 4 23:49:03.840313 systemd[1]: cri-containerd-22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4.scope: Deactivated successfully. 
Sep 4 23:49:03.852040 systemd[1]: cri-containerd-37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d.scope: Deactivated successfully. Sep 4 23:49:03.871715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4-rootfs.mount: Deactivated successfully. Sep 4 23:49:03.880175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d-rootfs.mount: Deactivated successfully. Sep 4 23:49:05.031617 containerd[1724]: time="2025-09-04T23:49:05.030994075Z" level=info msg="shim disconnected" id=37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d namespace=k8s.io Sep 4 23:49:05.032093 containerd[1724]: time="2025-09-04T23:49:05.032064194Z" level=warning msg="cleaning up after shim disconnected" id=37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d namespace=k8s.io Sep 4 23:49:05.032160 containerd[1724]: time="2025-09-04T23:49:05.032147034Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:05.032979 containerd[1724]: time="2025-09-04T23:49:05.032054034Z" level=info msg="shim disconnected" id=22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4 namespace=k8s.io Sep 4 23:49:05.033171 containerd[1724]: time="2025-09-04T23:49:05.033117394Z" level=warning msg="cleaning up after shim disconnected" id=22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4 namespace=k8s.io Sep 4 23:49:05.033330 containerd[1724]: time="2025-09-04T23:49:05.033311954Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:05.066480 containerd[1724]: time="2025-09-04T23:49:05.066387060Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:49:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 23:49:05.067442 containerd[1724]: 
time="2025-09-04T23:49:05.067394260Z" level=info msg="TearDown network for sandbox \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\" successfully" Sep 4 23:49:05.067442 containerd[1724]: time="2025-09-04T23:49:05.067425940Z" level=info msg="StopPodSandbox for \"22d8425f7ddab6c69dac69c06ad866ed7684627a04a273d2718789356bb562d4\" returns successfully" Sep 4 23:49:05.068474 containerd[1724]: time="2025-09-04T23:49:05.068445099Z" level=info msg="TearDown network for sandbox \"37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d\" successfully" Sep 4 23:49:05.069016 containerd[1724]: time="2025-09-04T23:49:05.068990419Z" level=info msg="StopPodSandbox for \"37ba80bebbdec21afd871a0ffaf6d51dc70b1e0abef11fe7c43ef1bc8aecc49d\" returns successfully" Sep 4 23:49:05.163879 kubelet[3265]: I0904 23:49:05.163086 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-host-proc-sys-net\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.163879 kubelet[3265]: I0904 23:49:05.163134 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-run\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.163879 kubelet[3265]: I0904 23:49:05.163163 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx58p\" (UniqueName: \"kubernetes.io/projected/c596f358-2e27-46a5-9f45-d97714fe7111-kube-api-access-fx58p\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.163879 kubelet[3265]: I0904 23:49:05.163178 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-lib-modules\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.163879 kubelet[3265]: I0904 23:49:05.163197 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-config-path\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.163879 kubelet[3265]: I0904 23:49:05.163215 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c596f358-2e27-46a5-9f45-d97714fe7111-clustermesh-secrets\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.164628 kubelet[3265]: I0904 23:49:05.163234 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-hostproc\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.164628 kubelet[3265]: I0904 23:49:05.163248 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-host-proc-sys-kernel\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.164628 kubelet[3265]: I0904 23:49:05.163294 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv8fg\" (UniqueName: \"kubernetes.io/projected/2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4-kube-api-access-kv8fg\") pod \"2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4\" (UID: \"2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4\") " Sep 4 23:49:05.164628 
kubelet[3265]: I0904 23:49:05.163312 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-cgroup\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.164628 kubelet[3265]: I0904 23:49:05.163325 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-bpf-maps\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.164628 kubelet[3265]: I0904 23:49:05.163339 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-cni-path\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.164792 kubelet[3265]: I0904 23:49:05.163357 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4-cilium-config-path\") pod \"2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4\" (UID: \"2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4\") " Sep 4 23:49:05.164792 kubelet[3265]: I0904 23:49:05.163374 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c596f358-2e27-46a5-9f45-d97714fe7111-hubble-tls\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.164792 kubelet[3265]: I0904 23:49:05.163395 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-etc-cni-netd\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" 
(UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.164792 kubelet[3265]: I0904 23:49:05.163411 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-xtables-lock\") pod \"c596f358-2e27-46a5-9f45-d97714fe7111\" (UID: \"c596f358-2e27-46a5-9f45-d97714fe7111\") " Sep 4 23:49:05.164792 kubelet[3265]: I0904 23:49:05.163487 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:05.164926 kubelet[3265]: I0904 23:49:05.163522 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:05.164926 kubelet[3265]: I0904 23:49:05.163560 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:05.173561 kubelet[3265]: I0904 23:49:05.168295 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4-kube-api-access-kv8fg" (OuterVolumeSpecName: "kube-api-access-kv8fg") pod "2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4" (UID: "2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4"). InnerVolumeSpecName "kube-api-access-kv8fg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:49:05.173561 kubelet[3265]: I0904 23:49:05.168378 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:05.173561 kubelet[3265]: I0904 23:49:05.170285 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:05.173561 kubelet[3265]: I0904 23:49:05.170361 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:05.173561 kubelet[3265]: I0904 23:49:05.170381 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-cni-path" (OuterVolumeSpecName: "cni-path") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:05.173097 systemd[1]: var-lib-kubelet-pods-2322a9d1\x2d4c14\x2d45a9\x2d9d0d\x2dd37bcf8be8e4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkv8fg.mount: Deactivated successfully. Sep 4 23:49:05.173886 sshd[4949]: Connection closed by 10.200.16.10 port 39180 Sep 4 23:49:05.173219 systemd[1]: var-lib-kubelet-pods-c596f358\x2d2e27\x2d46a5\x2d9f45\x2dd97714fe7111-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfx58p.mount: Deactivated successfully. Sep 4 23:49:05.175581 kubelet[3265]: I0904 23:49:05.174710 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:49:05.175259 sshd-session[4947]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:05.183797 kubelet[3265]: I0904 23:49:05.183022 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:05.184793 kubelet[3265]: I0904 23:49:05.184662 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c596f358-2e27-46a5-9f45-d97714fe7111-kube-api-access-fx58p" (OuterVolumeSpecName: "kube-api-access-fx58p") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "kube-api-access-fx58p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:49:05.185207 kubelet[3265]: I0904 23:49:05.185176 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-hostproc" (OuterVolumeSpecName: "hostproc") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:05.185473 kubelet[3265]: I0904 23:49:05.185417 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:49:05.188022 kubelet[3265]: I0904 23:49:05.186286 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4" (UID: "2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:49:05.190052 systemd[1]: var-lib-kubelet-pods-c596f358\x2d2e27\x2d46a5\x2d9f45\x2dd97714fe7111-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 23:49:05.193611 kubelet[3265]: I0904 23:49:05.192761 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c596f358-2e27-46a5-9f45-d97714fe7111-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:49:05.193611 kubelet[3265]: I0904 23:49:05.192880 3265 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c596f358-2e27-46a5-9f45-d97714fe7111-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c596f358-2e27-46a5-9f45-d97714fe7111" (UID: "c596f358-2e27-46a5-9f45-d97714fe7111"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 23:49:05.193296 systemd[1]: var-lib-kubelet-pods-c596f358\x2d2e27\x2d46a5\x2d9f45\x2dd97714fe7111-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 23:49:05.194978 systemd[1]: sshd@23-10.200.20.37:22-10.200.16.10:39180.service: Deactivated successfully. Sep 4 23:49:05.200211 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 23:49:05.201784 systemd-logind[1702]: Session 26 logged out. Waiting for processes to exit. Sep 4 23:49:05.204580 systemd-logind[1702]: Removed session 26. 
Sep 4 23:49:05.264908 kubelet[3265]: I0904 23:49:05.264720 3265 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-host-proc-sys-net\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.264908 kubelet[3265]: I0904 23:49:05.264755 3265 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-run\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.264908 kubelet[3265]: I0904 23:49:05.264765 3265 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fx58p\" (UniqueName: \"kubernetes.io/projected/c596f358-2e27-46a5-9f45-d97714fe7111-kube-api-access-fx58p\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.264908 kubelet[3265]: I0904 23:49:05.264776 3265 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-lib-modules\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.264908 kubelet[3265]: I0904 23:49:05.264786 3265 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-config-path\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.264908 kubelet[3265]: I0904 23:49:05.264795 3265 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c596f358-2e27-46a5-9f45-d97714fe7111-clustermesh-secrets\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.264908 kubelet[3265]: I0904 23:49:05.264805 3265 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-hostproc\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath 
\"\"" Sep 4 23:49:05.264908 kubelet[3265]: I0904 23:49:05.264814 3265 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-bpf-maps\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.265181 kubelet[3265]: I0904 23:49:05.264822 3265 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-cni-path\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.265181 kubelet[3265]: I0904 23:49:05.264830 3265 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-host-proc-sys-kernel\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.265181 kubelet[3265]: I0904 23:49:05.264839 3265 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kv8fg\" (UniqueName: \"kubernetes.io/projected/2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4-kube-api-access-kv8fg\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.265181 kubelet[3265]: I0904 23:49:05.264850 3265 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-cilium-cgroup\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.265181 kubelet[3265]: I0904 23:49:05.264859 3265 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c596f358-2e27-46a5-9f45-d97714fe7111-hubble-tls\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.265181 kubelet[3265]: I0904 23:49:05.264868 3265 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4-cilium-config-path\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath 
\"\"" Sep 4 23:49:05.265181 kubelet[3265]: I0904 23:49:05.264877 3265 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-etc-cni-netd\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.265181 kubelet[3265]: I0904 23:49:05.264886 3265 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c596f358-2e27-46a5-9f45-d97714fe7111-xtables-lock\") on node \"ci-4230.2.2-n-c33c3b40b5\" DevicePath \"\"" Sep 4 23:49:05.266339 systemd[1]: Started sshd@24-10.200.20.37:22-10.200.16.10:39186.service - OpenSSH per-connection server daemon (10.200.16.10:39186). Sep 4 23:49:05.289813 kubelet[3265]: I0904 23:49:05.289770 3265 scope.go:117] "RemoveContainer" containerID="5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8" Sep 4 23:49:05.294268 containerd[1724]: time="2025-09-04T23:49:05.294165047Z" level=info msg="RemoveContainer for \"5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8\"" Sep 4 23:49:05.302431 systemd[1]: Removed slice kubepods-besteffort-pod2322a9d1_4c14_45a9_9d0d_d37bcf8be8e4.slice - libcontainer container kubepods-besteffort-pod2322a9d1_4c14_45a9_9d0d_d37bcf8be8e4.slice. Sep 4 23:49:05.349844 systemd[1]: Removed slice kubepods-burstable-podc596f358_2e27_46a5_9f45_d97714fe7111.slice - libcontainer container kubepods-burstable-podc596f358_2e27_46a5_9f45_d97714fe7111.slice. Sep 4 23:49:05.349956 systemd[1]: kubepods-burstable-podc596f358_2e27_46a5_9f45_d97714fe7111.slice: Consumed 6.788s CPU time, 125.1M memory peak, 128K read from disk, 12.9M written to disk. 
Sep 4 23:49:05.354445 containerd[1724]: time="2025-09-04T23:49:05.354373262Z" level=info msg="RemoveContainer for \"5fbca89d2dc23e15e371cabc6f111622fe6352417086b5e08c1ff783b6cb4be8\" returns successfully" Sep 4 23:49:05.355643 kubelet[3265]: I0904 23:49:05.354694 3265 scope.go:117] "RemoveContainer" containerID="bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25" Sep 4 23:49:05.356457 containerd[1724]: time="2025-09-04T23:49:05.356416942Z" level=info msg="RemoveContainer for \"bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25\"" Sep 4 23:49:05.423193 containerd[1724]: time="2025-09-04T23:49:05.423071474Z" level=info msg="RemoveContainer for \"bf04995fbc54645877742e18351cfd9c80e48695a690d654993d81f2b8622d25\" returns successfully" Sep 4 23:49:05.424922 kubelet[3265]: I0904 23:49:05.424784 3265 scope.go:117] "RemoveContainer" containerID="f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb" Sep 4 23:49:05.427682 containerd[1724]: time="2025-09-04T23:49:05.427611072Z" level=info msg="RemoveContainer for \"f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb\"" Sep 4 23:49:05.659379 containerd[1724]: time="2025-09-04T23:49:05.659328178Z" level=info msg="RemoveContainer for \"f7a1248c957c2be8ed0beedaafb26e693b823cf5b9df50f77a111b9e3eada1fb\" returns successfully" Sep 4 23:49:05.660190 kubelet[3265]: I0904 23:49:05.659694 3265 scope.go:117] "RemoveContainer" containerID="d6167233441fffe6d948b46d3f0e3033332953277a2c6dca516e95698b1b4504" Sep 4 23:49:05.661466 containerd[1724]: time="2025-09-04T23:49:05.661423057Z" level=info msg="RemoveContainer for \"d6167233441fffe6d948b46d3f0e3033332953277a2c6dca516e95698b1b4504\"" Sep 4 23:49:05.786308 sshd[5053]: Accepted publickey for core from 10.200.16.10 port 39186 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:49:05.787810 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:05.792337 
systemd-logind[1702]: New session 27 of user core. Sep 4 23:49:05.797765 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 23:49:05.847439 containerd[1724]: time="2025-09-04T23:49:05.847315781Z" level=info msg="RemoveContainer for \"d6167233441fffe6d948b46d3f0e3033332953277a2c6dca516e95698b1b4504\" returns successfully" Sep 4 23:49:05.847795 kubelet[3265]: I0904 23:49:05.847754 3265 scope.go:117] "RemoveContainer" containerID="a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4" Sep 4 23:49:05.848985 containerd[1724]: time="2025-09-04T23:49:05.848925820Z" level=info msg="RemoveContainer for \"a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4\"" Sep 4 23:49:05.912664 containerd[1724]: time="2025-09-04T23:49:05.912617034Z" level=info msg="RemoveContainer for \"a87308bb2c8108961ed042ca007fd5960fe28756bb54af17990800b238093fb4\" returns successfully" Sep 4 23:49:05.913120 kubelet[3265]: I0904 23:49:05.913077 3265 scope.go:117] "RemoveContainer" containerID="6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1" Sep 4 23:49:05.914961 containerd[1724]: time="2025-09-04T23:49:05.914666153Z" level=info msg="RemoveContainer for \"6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1\"" Sep 4 23:49:06.009296 containerd[1724]: time="2025-09-04T23:49:06.009175914Z" level=info msg="RemoveContainer for \"6d99c3bb1e2d4e88e596216edae2c91dd096b41d85ffbf53a5ac88a1b850d4e1\" returns successfully" Sep 4 23:49:06.139800 sshd[5055]: Connection closed by 10.200.16.10 port 39186 Sep 4 23:49:06.139712 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:06.143667 systemd[1]: sshd@24-10.200.20.37:22-10.200.16.10:39186.service: Deactivated successfully. Sep 4 23:49:06.146020 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 23:49:06.147072 systemd-logind[1702]: Session 27 logged out. Waiting for processes to exit. Sep 4 23:49:06.148044 systemd-logind[1702]: Removed session 27. 
Sep 4 23:49:06.242860 systemd[1]: Started sshd@25-10.200.20.37:22-10.200.16.10:39196.service - OpenSSH per-connection server daemon (10.200.16.10:39196). Sep 4 23:49:06.741103 sshd[5062]: Accepted publickey for core from 10.200.16.10 port 39196 ssh2: RSA SHA256:DRrHAgQqx3oBUlYQZEsd5Kl9D+4+NLroFV87inTd0+U Sep 4 23:49:06.742461 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:06.747212 systemd-logind[1702]: New session 28 of user core. Sep 4 23:49:06.756769 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 23:49:06.841282 kubelet[3265]: I0904 23:49:06.841235 3265 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4" path="/var/lib/kubelet/pods/2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4/volumes" Sep 4 23:49:06.841710 kubelet[3265]: I0904 23:49:06.841656 3265 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c596f358-2e27-46a5-9f45-d97714fe7111" path="/var/lib/kubelet/pods/c596f358-2e27-46a5-9f45-d97714fe7111/volumes" Sep 4 23:49:06.969368 kubelet[3265]: E0904 23:49:06.969325 3265 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:49:06.978301 kubelet[3265]: I0904 23:49:06.977084 3265 memory_manager.go:355] "RemoveStaleState removing state" podUID="c596f358-2e27-46a5-9f45-d97714fe7111" containerName="cilium-agent" Sep 4 23:49:06.978301 kubelet[3265]: I0904 23:49:06.977123 3265 memory_manager.go:355] "RemoveStaleState removing state" podUID="2322a9d1-4c14-45a9-9d0d-d37bcf8be8e4" containerName="cilium-operator" Sep 4 23:49:06.989348 systemd[1]: Created slice kubepods-burstable-pod50fc52f0_0ee7_47b2_8f28_aae26351125f.slice - libcontainer container kubepods-burstable-pod50fc52f0_0ee7_47b2_8f28_aae26351125f.slice. 
Sep 4 23:49:07.075202 kubelet[3265]: I0904 23:49:07.075090 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50fc52f0-0ee7-47b2-8f28-aae26351125f-hostproc\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075202 kubelet[3265]: I0904 23:49:07.075136 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50fc52f0-0ee7-47b2-8f28-aae26351125f-cilium-config-path\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075202 kubelet[3265]: I0904 23:49:07.075159 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50fc52f0-0ee7-47b2-8f28-aae26351125f-host-proc-sys-net\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075202 kubelet[3265]: I0904 23:49:07.075177 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50fc52f0-0ee7-47b2-8f28-aae26351125f-lib-modules\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075202 kubelet[3265]: I0904 23:49:07.075195 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50fc52f0-0ee7-47b2-8f28-aae26351125f-cilium-cgroup\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075202 kubelet[3265]: I0904 23:49:07.075209 3265 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50fc52f0-0ee7-47b2-8f28-aae26351125f-cni-path\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075467 kubelet[3265]: I0904 23:49:07.075225 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50fc52f0-0ee7-47b2-8f28-aae26351125f-clustermesh-secrets\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075467 kubelet[3265]: I0904 23:49:07.075240 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/50fc52f0-0ee7-47b2-8f28-aae26351125f-cilium-ipsec-secrets\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075467 kubelet[3265]: I0904 23:49:07.075254 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50fc52f0-0ee7-47b2-8f28-aae26351125f-cilium-run\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075467 kubelet[3265]: I0904 23:49:07.075270 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50fc52f0-0ee7-47b2-8f28-aae26351125f-bpf-maps\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075467 kubelet[3265]: I0904 23:49:07.075284 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/50fc52f0-0ee7-47b2-8f28-aae26351125f-xtables-lock\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075467 kubelet[3265]: I0904 23:49:07.075299 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50fc52f0-0ee7-47b2-8f28-aae26351125f-hubble-tls\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075635 kubelet[3265]: I0904 23:49:07.075314 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50fc52f0-0ee7-47b2-8f28-aae26351125f-etc-cni-netd\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075635 kubelet[3265]: I0904 23:49:07.075329 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50fc52f0-0ee7-47b2-8f28-aae26351125f-host-proc-sys-kernel\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.075635 kubelet[3265]: I0904 23:49:07.075343 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-src4x\" (UniqueName: \"kubernetes.io/projected/50fc52f0-0ee7-47b2-8f28-aae26351125f-kube-api-access-src4x\") pod \"cilium-jqfjf\" (UID: \"50fc52f0-0ee7-47b2-8f28-aae26351125f\") " pod="kube-system/cilium-jqfjf" Sep 4 23:49:07.294232 containerd[1724]: time="2025-09-04T23:49:07.293827068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jqfjf,Uid:50fc52f0-0ee7-47b2-8f28-aae26351125f,Namespace:kube-system,Attempt:0,}" Sep 4 23:49:07.478221 containerd[1724]: time="2025-09-04T23:49:07.475995594Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:49:07.478221 containerd[1724]: time="2025-09-04T23:49:07.476048674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:49:07.478221 containerd[1724]: time="2025-09-04T23:49:07.476058714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:07.478221 containerd[1724]: time="2025-09-04T23:49:07.476133714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:07.498761 systemd[1]: Started cri-containerd-44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8.scope - libcontainer container 44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8. Sep 4 23:49:07.519569 containerd[1724]: time="2025-09-04T23:49:07.519482696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jqfjf,Uid:50fc52f0-0ee7-47b2-8f28-aae26351125f,Namespace:kube-system,Attempt:0,} returns sandbox id \"44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8\"" Sep 4 23:49:07.523873 containerd[1724]: time="2025-09-04T23:49:07.523736134Z" level=info msg="CreateContainer within sandbox \"44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:49:07.765140 containerd[1724]: time="2025-09-04T23:49:07.764923115Z" level=info msg="CreateContainer within sandbox \"44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1993dc97d81bf8055214ee1d252b5b1a9d138a36f7c796d2a7e86537b9a09bf1\"" Sep 4 23:49:07.766021 containerd[1724]: time="2025-09-04T23:49:07.765818995Z" level=info msg="StartContainer for 
\"1993dc97d81bf8055214ee1d252b5b1a9d138a36f7c796d2a7e86537b9a09bf1\"" Sep 4 23:49:07.789728 systemd[1]: Started cri-containerd-1993dc97d81bf8055214ee1d252b5b1a9d138a36f7c796d2a7e86537b9a09bf1.scope - libcontainer container 1993dc97d81bf8055214ee1d252b5b1a9d138a36f7c796d2a7e86537b9a09bf1. Sep 4 23:49:07.819962 containerd[1724]: time="2025-09-04T23:49:07.819843573Z" level=info msg="StartContainer for \"1993dc97d81bf8055214ee1d252b5b1a9d138a36f7c796d2a7e86537b9a09bf1\" returns successfully" Sep 4 23:49:07.822481 systemd[1]: cri-containerd-1993dc97d81bf8055214ee1d252b5b1a9d138a36f7c796d2a7e86537b9a09bf1.scope: Deactivated successfully. Sep 4 23:49:08.366107 containerd[1724]: time="2025-09-04T23:49:08.365885869Z" level=info msg="shim disconnected" id=1993dc97d81bf8055214ee1d252b5b1a9d138a36f7c796d2a7e86537b9a09bf1 namespace=k8s.io Sep 4 23:49:08.366107 containerd[1724]: time="2025-09-04T23:49:08.365943349Z" level=warning msg="cleaning up after shim disconnected" id=1993dc97d81bf8055214ee1d252b5b1a9d138a36f7c796d2a7e86537b9a09bf1 namespace=k8s.io Sep 4 23:49:08.366107 containerd[1724]: time="2025-09-04T23:49:08.365951749Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:08.840704 kubelet[3265]: E0904 23:49:08.840663 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-bjhkg" podUID="e05b5322-93b8-4593-8cc6-3c64df7865dc" Sep 4 23:49:09.321336 containerd[1724]: time="2025-09-04T23:49:09.321246078Z" level=info msg="CreateContainer within sandbox \"44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 23:49:09.357683 containerd[1724]: time="2025-09-04T23:49:09.357640063Z" level=info msg="CreateContainer within sandbox 
\"44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4976da5a36143f5278d136e0dc31f80be5bc5dea35a9e08bd22f9c93198262ca\"" Sep 4 23:49:09.358364 containerd[1724]: time="2025-09-04T23:49:09.358311463Z" level=info msg="StartContainer for \"4976da5a36143f5278d136e0dc31f80be5bc5dea35a9e08bd22f9c93198262ca\"" Sep 4 23:49:09.390744 systemd[1]: Started cri-containerd-4976da5a36143f5278d136e0dc31f80be5bc5dea35a9e08bd22f9c93198262ca.scope - libcontainer container 4976da5a36143f5278d136e0dc31f80be5bc5dea35a9e08bd22f9c93198262ca. Sep 4 23:49:09.422714 containerd[1724]: time="2025-09-04T23:49:09.421764477Z" level=info msg="StartContainer for \"4976da5a36143f5278d136e0dc31f80be5bc5dea35a9e08bd22f9c93198262ca\" returns successfully" Sep 4 23:49:09.422577 systemd[1]: cri-containerd-4976da5a36143f5278d136e0dc31f80be5bc5dea35a9e08bd22f9c93198262ca.scope: Deactivated successfully. Sep 4 23:49:09.449327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4976da5a36143f5278d136e0dc31f80be5bc5dea35a9e08bd22f9c93198262ca-rootfs.mount: Deactivated successfully. 
Sep 4 23:49:09.470917 containerd[1724]: time="2025-09-04T23:49:09.470850057Z" level=info msg="shim disconnected" id=4976da5a36143f5278d136e0dc31f80be5bc5dea35a9e08bd22f9c93198262ca namespace=k8s.io Sep 4 23:49:09.470917 containerd[1724]: time="2025-09-04T23:49:09.470911577Z" level=warning msg="cleaning up after shim disconnected" id=4976da5a36143f5278d136e0dc31f80be5bc5dea35a9e08bd22f9c93198262ca namespace=k8s.io Sep 4 23:49:09.470917 containerd[1724]: time="2025-09-04T23:49:09.470921057Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:10.326836 containerd[1724]: time="2025-09-04T23:49:10.326788347Z" level=info msg="CreateContainer within sandbox \"44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 23:49:10.361058 containerd[1724]: time="2025-09-04T23:49:10.361007053Z" level=info msg="CreateContainer within sandbox \"44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d77b90ea65a0ce71d19d729339e32840dd5b98a3265c11dd27e57a2739ddd687\"" Sep 4 23:49:10.364726 containerd[1724]: time="2025-09-04T23:49:10.362498132Z" level=info msg="StartContainer for \"d77b90ea65a0ce71d19d729339e32840dd5b98a3265c11dd27e57a2739ddd687\"" Sep 4 23:49:10.391757 systemd[1]: Started cri-containerd-d77b90ea65a0ce71d19d729339e32840dd5b98a3265c11dd27e57a2739ddd687.scope - libcontainer container d77b90ea65a0ce71d19d729339e32840dd5b98a3265c11dd27e57a2739ddd687. Sep 4 23:49:10.421867 systemd[1]: cri-containerd-d77b90ea65a0ce71d19d729339e32840dd5b98a3265c11dd27e57a2739ddd687.scope: Deactivated successfully. 
Sep 4 23:49:10.425956 containerd[1724]: time="2025-09-04T23:49:10.425519906Z" level=info msg="StartContainer for \"d77b90ea65a0ce71d19d729339e32840dd5b98a3265c11dd27e57a2739ddd687\" returns successfully" Sep 4 23:49:10.436828 kubelet[3265]: I0904 23:49:10.436775 3265 setters.go:602] "Node became not ready" node="ci-4230.2.2-n-c33c3b40b5" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T23:49:10Z","lastTransitionTime":"2025-09-04T23:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 23:49:10.471867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d77b90ea65a0ce71d19d729339e32840dd5b98a3265c11dd27e57a2739ddd687-rootfs.mount: Deactivated successfully. Sep 4 23:49:10.488412 containerd[1724]: time="2025-09-04T23:49:10.488187201Z" level=info msg="shim disconnected" id=d77b90ea65a0ce71d19d729339e32840dd5b98a3265c11dd27e57a2739ddd687 namespace=k8s.io Sep 4 23:49:10.488412 containerd[1724]: time="2025-09-04T23:49:10.488247561Z" level=warning msg="cleaning up after shim disconnected" id=d77b90ea65a0ce71d19d729339e32840dd5b98a3265c11dd27e57a2739ddd687 namespace=k8s.io Sep 4 23:49:10.488412 containerd[1724]: time="2025-09-04T23:49:10.488256401Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:10.839462 kubelet[3265]: E0904 23:49:10.838709 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-bjhkg" podUID="e05b5322-93b8-4593-8cc6-3c64df7865dc" Sep 4 23:49:10.839462 kubelet[3265]: E0904 23:49:10.839140 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-6p524" podUID="346a8623-aab7-4487-9e12-e94979bd8505" Sep 4 23:49:11.328573 containerd[1724]: time="2025-09-04T23:49:11.328515098Z" level=info msg="CreateContainer within sandbox \"44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 23:49:11.369266 containerd[1724]: time="2025-09-04T23:49:11.368793162Z" level=info msg="CreateContainer within sandbox \"44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0389ee5d758253a9db314a16c7a79c62381c9495c0d0b060a3a4e4ee7c335301\"" Sep 4 23:49:11.370887 containerd[1724]: time="2025-09-04T23:49:11.370842241Z" level=info msg="StartContainer for \"0389ee5d758253a9db314a16c7a79c62381c9495c0d0b060a3a4e4ee7c335301\"" Sep 4 23:49:11.395430 systemd[1]: run-containerd-runc-k8s.io-0389ee5d758253a9db314a16c7a79c62381c9495c0d0b060a3a4e4ee7c335301-runc.TH6eW5.mount: Deactivated successfully. Sep 4 23:49:11.405774 systemd[1]: Started cri-containerd-0389ee5d758253a9db314a16c7a79c62381c9495c0d0b060a3a4e4ee7c335301.scope - libcontainer container 0389ee5d758253a9db314a16c7a79c62381c9495c0d0b060a3a4e4ee7c335301. Sep 4 23:49:11.430715 systemd[1]: cri-containerd-0389ee5d758253a9db314a16c7a79c62381c9495c0d0b060a3a4e4ee7c335301.scope: Deactivated successfully. 
Sep 4 23:49:11.436385 containerd[1724]: time="2025-09-04T23:49:11.436171495Z" level=info msg="StartContainer for \"0389ee5d758253a9db314a16c7a79c62381c9495c0d0b060a3a4e4ee7c335301\" returns successfully" Sep 4 23:49:11.474197 containerd[1724]: time="2025-09-04T23:49:11.474118760Z" level=info msg="shim disconnected" id=0389ee5d758253a9db314a16c7a79c62381c9495c0d0b060a3a4e4ee7c335301 namespace=k8s.io Sep 4 23:49:11.474197 containerd[1724]: time="2025-09-04T23:49:11.474188560Z" level=warning msg="cleaning up after shim disconnected" id=0389ee5d758253a9db314a16c7a79c62381c9495c0d0b060a3a4e4ee7c335301 namespace=k8s.io Sep 4 23:49:11.474197 containerd[1724]: time="2025-09-04T23:49:11.474197440Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:11.970521 kubelet[3265]: E0904 23:49:11.970463 3265 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:49:12.336357 containerd[1724]: time="2025-09-04T23:49:12.336313619Z" level=info msg="CreateContainer within sandbox \"44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 23:49:12.352466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0389ee5d758253a9db314a16c7a79c62381c9495c0d0b060a3a4e4ee7c335301-rootfs.mount: Deactivated successfully. Sep 4 23:49:12.365818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749681761.mount: Deactivated successfully. 
Sep 4 23:49:12.377679 containerd[1724]: time="2025-09-04T23:49:12.377509322Z" level=info msg="CreateContainer within sandbox \"44b39fb5c3854592a987ada0ac78ea453739e7446ef9804c7a9cf9a09dfd74b8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8c4e2f20a722bd9bd510508e8c1019ab60fe5ff8aa47921ad32595eadc95ac8d\"" Sep 4 23:49:12.379652 containerd[1724]: time="2025-09-04T23:49:12.378590682Z" level=info msg="StartContainer for \"8c4e2f20a722bd9bd510508e8c1019ab60fe5ff8aa47921ad32595eadc95ac8d\"" Sep 4 23:49:12.411738 systemd[1]: Started cri-containerd-8c4e2f20a722bd9bd510508e8c1019ab60fe5ff8aa47921ad32595eadc95ac8d.scope - libcontainer container 8c4e2f20a722bd9bd510508e8c1019ab60fe5ff8aa47921ad32595eadc95ac8d. Sep 4 23:49:12.449519 containerd[1724]: time="2025-09-04T23:49:12.449392374Z" level=info msg="StartContainer for \"8c4e2f20a722bd9bd510508e8c1019ab60fe5ff8aa47921ad32595eadc95ac8d\" returns successfully" Sep 4 23:49:12.840437 kubelet[3265]: E0904 23:49:12.838439 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-bjhkg" podUID="e05b5322-93b8-4593-8cc6-3c64df7865dc" Sep 4 23:49:12.840437 kubelet[3265]: E0904 23:49:12.838944 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-6p524" podUID="346a8623-aab7-4487-9e12-e94979bd8505" Sep 4 23:49:13.043592 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 4 23:49:13.351727 systemd[1]: run-containerd-runc-k8s.io-8c4e2f20a722bd9bd510508e8c1019ab60fe5ff8aa47921ad32595eadc95ac8d-runc.rbYjfq.mount: Deactivated successfully. 
Sep 4 23:49:14.840271 kubelet[3265]: E0904 23:49:14.840208 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-bjhkg" podUID="e05b5322-93b8-4593-8cc6-3c64df7865dc" Sep 4 23:49:14.842006 kubelet[3265]: E0904 23:49:14.840847 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-6p524" podUID="346a8623-aab7-4487-9e12-e94979bd8505" Sep 4 23:49:15.895012 systemd-networkd[1548]: lxc_health: Link UP Sep 4 23:49:15.910965 systemd-networkd[1548]: lxc_health: Gained carrier Sep 4 23:49:16.839116 kubelet[3265]: E0904 23:49:16.839056 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-bjhkg" podUID="e05b5322-93b8-4593-8cc6-3c64df7865dc" Sep 4 23:49:16.842315 kubelet[3265]: E0904 23:49:16.840980 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-6p524" podUID="346a8623-aab7-4487-9e12-e94979bd8505" Sep 4 23:49:17.322697 kubelet[3265]: I0904 23:49:17.322619 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jqfjf" podStartSLOduration=12.322510402 podStartE2EDuration="12.322510402s" podCreationTimestamp="2025-09-04 23:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:49:13.355165055 +0000 UTC m=+196.704806766" watchObservedRunningTime="2025-09-04 23:49:17.322510402 +0000 UTC m=+200.672152113" Sep 4 23:49:17.709720 systemd-networkd[1548]: lxc_health: Gained IPv6LL Sep 4 23:49:21.805723 systemd[1]: run-containerd-runc-k8s.io-8c4e2f20a722bd9bd510508e8c1019ab60fe5ff8aa47921ad32595eadc95ac8d-runc.orf5Sn.mount: Deactivated successfully. Sep 4 23:49:21.944492 sshd[5064]: Connection closed by 10.200.16.10 port 39196 Sep 4 23:49:21.943645 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:21.946795 systemd[1]: sshd@25-10.200.20.37:22-10.200.16.10:39196.service: Deactivated successfully. Sep 4 23:49:21.949469 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 23:49:21.951892 systemd-logind[1702]: Session 28 logged out. Waiting for processes to exit. Sep 4 23:49:21.953285 systemd-logind[1702]: Removed session 28. Sep 4 23:49:36.848489 systemd[1]: cri-containerd-f6e5ec8e67658dd06be940319ba05b92fab8be5d6ac79abe78bd01a284242b22.scope: Deactivated successfully. Sep 4 23:49:36.848843 systemd[1]: cri-containerd-f6e5ec8e67658dd06be940319ba05b92fab8be5d6ac79abe78bd01a284242b22.scope: Consumed 2.886s CPU time, 53.6M memory peak. Sep 4 23:49:36.868505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6e5ec8e67658dd06be940319ba05b92fab8be5d6ac79abe78bd01a284242b22-rootfs.mount: Deactivated successfully. 
Sep 4 23:49:36.884014 containerd[1724]: time="2025-09-04T23:49:36.883929475Z" level=info msg="shim disconnected" id=f6e5ec8e67658dd06be940319ba05b92fab8be5d6ac79abe78bd01a284242b22 namespace=k8s.io Sep 4 23:49:36.884014 containerd[1724]: time="2025-09-04T23:49:36.884013075Z" level=warning msg="cleaning up after shim disconnected" id=f6e5ec8e67658dd06be940319ba05b92fab8be5d6ac79abe78bd01a284242b22 namespace=k8s.io Sep 4 23:49:36.884014 containerd[1724]: time="2025-09-04T23:49:36.884022435Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:49:37.384562 kubelet[3265]: I0904 23:49:37.384086 3265 scope.go:117] "RemoveContainer" containerID="f6e5ec8e67658dd06be940319ba05b92fab8be5d6ac79abe78bd01a284242b22" Sep 4 23:49:37.385933 containerd[1724]: time="2025-09-04T23:49:37.385812163Z" level=info msg="CreateContainer within sandbox \"e498470da17d5729636bb9378dafef8bc9def95bafaaf1c827d62fe6d39305cd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 4 23:49:37.425508 containerd[1724]: time="2025-09-04T23:49:37.425448388Z" level=info msg="CreateContainer within sandbox \"e498470da17d5729636bb9378dafef8bc9def95bafaaf1c827d62fe6d39305cd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6556e6fe15f27b9836ca582e3120d2fe2a34b06b07c186c851766621c1f543c8\"" Sep 4 23:49:37.426663 containerd[1724]: time="2025-09-04T23:49:37.426005828Z" level=info msg="StartContainer for \"6556e6fe15f27b9836ca582e3120d2fe2a34b06b07c186c851766621c1f543c8\"" Sep 4 23:49:37.458735 systemd[1]: Started cri-containerd-6556e6fe15f27b9836ca582e3120d2fe2a34b06b07c186c851766621c1f543c8.scope - libcontainer container 6556e6fe15f27b9836ca582e3120d2fe2a34b06b07c186c851766621c1f543c8. Sep 4 23:49:37.497490 containerd[1724]: time="2025-09-04T23:49:37.497343161Z" level=info msg="StartContainer for \"6556e6fe15f27b9836ca582e3120d2fe2a34b06b07c186c851766621c1f543c8\" returns successfully"