Sep 9 23:41:56.029811 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Sep 9 23:41:56.029829 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 22:10:22 -00 2025 Sep 9 23:41:56.029836 kernel: KASLR enabled Sep 9 23:41:56.029839 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 9 23:41:56.029844 kernel: printk: legacy bootconsole [pl11] enabled Sep 9 23:41:56.029848 kernel: efi: EFI v2.7 by EDK II Sep 9 23:41:56.029853 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Sep 9 23:41:56.029857 kernel: random: crng init done Sep 9 23:41:56.029861 kernel: secureboot: Secure boot disabled Sep 9 23:41:56.029865 kernel: ACPI: Early table checksum verification disabled Sep 9 23:41:56.029869 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 9 23:41:56.029873 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 23:41:56.029876 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 23:41:56.029881 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 9 23:41:56.029886 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 23:41:56.029890 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 23:41:56.029895 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 23:41:56.029899 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 23:41:56.029904 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 23:41:56.029908 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 23:41:56.029912 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 9 23:41:56.029916 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 9 23:41:56.029920 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 9 23:41:56.029924 kernel: ACPI: Use ACPI SPCR as default console: No Sep 9 23:41:56.029928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 9 23:41:56.029932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Sep 9 23:41:56.029937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Sep 9 23:41:56.029941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 9 23:41:56.029945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 9 23:41:56.029950 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 9 23:41:56.029954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 9 23:41:56.029958 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 9 23:41:56.029962 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 9 23:41:56.029966 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 9 23:41:56.029970 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 9 23:41:56.029974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Sep 9 
23:41:56.029979 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Sep 9 23:41:56.029983 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff] Sep 9 23:41:56.029987 kernel: Zone ranges: Sep 9 23:41:56.029991 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 9 23:41:56.029998 kernel: DMA32 empty Sep 9 23:41:56.030002 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 9 23:41:56.030007 kernel: Device empty Sep 9 23:41:56.030011 kernel: Movable zone start for each node Sep 9 23:41:56.030015 kernel: Early memory node ranges Sep 9 23:41:56.030020 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 9 23:41:56.030025 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Sep 9 23:41:56.030029 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Sep 9 23:41:56.030033 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Sep 9 23:41:56.030038 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 9 23:41:56.030042 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 9 23:41:56.030046 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 9 23:41:56.030051 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 9 23:41:56.030055 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 9 23:41:56.030059 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 9 23:41:56.030064 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 9 23:41:56.030068 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1 Sep 9 23:41:56.030073 kernel: psci: probing for conduit method from ACPI. Sep 9 23:41:56.030077 kernel: psci: PSCIv1.1 detected in firmware. Sep 9 23:41:56.030082 kernel: psci: Using standard PSCI v0.2 function IDs Sep 9 23:41:56.030086 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Sep 9 23:41:56.030090 kernel: psci: SMC Calling Convention v1.4 Sep 9 23:41:56.030095 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 9 23:41:56.030099 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 9 23:41:56.030103 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 9 23:41:56.030108 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 9 23:41:56.030112 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 9 23:41:56.030117 kernel: Detected PIPT I-cache on CPU0 Sep 9 23:41:56.030134 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Sep 9 23:41:56.030139 kernel: CPU features: detected: GIC system register CPU interface Sep 9 23:41:56.030143 kernel: CPU features: detected: Spectre-v4 Sep 9 23:41:56.030148 kernel: CPU features: detected: Spectre-BHB Sep 9 23:41:56.030152 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 9 23:41:56.030156 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 9 23:41:56.030161 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Sep 9 23:41:56.030165 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 9 23:41:56.030170 kernel: alternatives: applying boot alternatives Sep 9 23:41:56.030175 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db Sep 9 23:41:56.030180 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 23:41:56.030185 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 23:41:56.030190 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 23:41:56.030194 kernel: Fallback order for Node 0: 0 Sep 9 23:41:56.030198 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Sep 9 23:41:56.030203 kernel: Policy zone: Normal Sep 9 23:41:56.030207 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 23:41:56.030212 kernel: software IO TLB: area num 2. Sep 9 23:41:56.030216 kernel: software IO TLB: mapped [mem 0x0000000036290000-0x000000003a290000] (64MB) Sep 9 23:41:56.030220 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 9 23:41:56.030225 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 23:41:56.030230 kernel: rcu: RCU event tracing is enabled. Sep 9 23:41:56.030235 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 9 23:41:56.030240 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 23:41:56.030244 kernel: Tracing variant of Tasks RCU enabled. Sep 9 23:41:56.030249 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 23:41:56.030253 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 9 23:41:56.030257 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 9 23:41:56.030262 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 9 23:41:56.030266 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 9 23:41:56.030270 kernel: GICv3: 960 SPIs implemented Sep 9 23:41:56.030275 kernel: GICv3: 0 Extended SPIs implemented Sep 9 23:41:56.030279 kernel: Root IRQ handler: gic_handle_irq Sep 9 23:41:56.030284 kernel: GICv3: GICv3 features: 16 PPIs, RSS Sep 9 23:41:56.030289 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Sep 9 23:41:56.030293 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 9 23:41:56.030298 kernel: ITS: No ITS available, not enabling LPIs Sep 9 23:41:56.030302 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 23:41:56.030306 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Sep 9 23:41:56.030311 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 9 23:41:56.030315 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Sep 9 23:41:56.030320 kernel: Console: colour dummy device 80x25 Sep 9 23:41:56.030324 kernel: printk: legacy console [tty1] enabled Sep 9 23:41:56.030329 kernel: ACPI: Core revision 20240827 Sep 9 23:41:56.030334 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Sep 9 23:41:56.030339 kernel: pid_max: default: 32768 minimum: 301 Sep 9 23:41:56.030344 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 23:41:56.030348 kernel: landlock: Up and running. Sep 9 23:41:56.030353 kernel: SELinux: Initializing. Sep 9 23:41:56.030357 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 23:41:56.030365 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 23:41:56.030371 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Sep 9 23:41:56.030376 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Sep 9 23:41:56.030380 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 9 23:41:56.030385 kernel: rcu: Hierarchical SRCU implementation. Sep 9 23:41:56.030390 kernel: rcu: Max phase no-delay instances is 400. Sep 9 23:41:56.030395 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 9 23:41:56.030400 kernel: Remapping and enabling EFI services. Sep 9 23:41:56.030405 kernel: smp: Bringing up secondary CPUs ... Sep 9 23:41:56.030409 kernel: Detected PIPT I-cache on CPU1 Sep 9 23:41:56.030414 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 9 23:41:56.030420 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Sep 9 23:41:56.030425 kernel: smp: Brought up 1 node, 2 CPUs Sep 9 23:41:56.030429 kernel: SMP: Total of 2 processors activated. 
Sep 9 23:41:56.030434 kernel: CPU: All CPU(s) started at EL1 Sep 9 23:41:56.030439 kernel: CPU features: detected: 32-bit EL0 Support Sep 9 23:41:56.030443 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 9 23:41:56.030448 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 9 23:41:56.030453 kernel: CPU features: detected: Common not Private translations Sep 9 23:41:56.030458 kernel: CPU features: detected: CRC32 instructions Sep 9 23:41:56.030464 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Sep 9 23:41:56.030469 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 9 23:41:56.030473 kernel: CPU features: detected: LSE atomic instructions Sep 9 23:41:56.030478 kernel: CPU features: detected: Privileged Access Never Sep 9 23:41:56.030483 kernel: CPU features: detected: Speculation barrier (SB) Sep 9 23:41:56.030488 kernel: CPU features: detected: TLB range maintenance instructions Sep 9 23:41:56.030492 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 9 23:41:56.030497 kernel: CPU features: detected: Scalable Vector Extension Sep 9 23:41:56.030502 kernel: alternatives: applying system-wide alternatives Sep 9 23:41:56.030507 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Sep 9 23:41:56.030512 kernel: SVE: maximum available vector length 16 bytes per vector Sep 9 23:41:56.030517 kernel: SVE: default vector length 16 bytes per vector Sep 9 23:41:56.030522 kernel: Memory: 3959668K/4194160K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 213304K reserved, 16384K cma-reserved) Sep 9 23:41:56.030526 kernel: devtmpfs: initialized Sep 9 23:41:56.030531 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 23:41:56.030536 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 9 23:41:56.030541 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 9 23:41:56.030545 kernel: 0 pages in range for non-PLT usage Sep 9 23:41:56.030551 kernel: 508576 pages in range for PLT usage Sep 9 23:41:56.030556 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 23:41:56.030561 kernel: SMBIOS 3.1.0 present. Sep 9 23:41:56.030565 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 9 23:41:56.030570 kernel: DMI: Memory slots populated: 2/2 Sep 9 23:41:56.030575 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 23:41:56.030580 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 9 23:41:56.030584 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 9 23:41:56.030589 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 9 23:41:56.030595 kernel: audit: initializing netlink subsys (disabled) Sep 9 23:41:56.030599 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Sep 9 23:41:56.030604 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 23:41:56.030609 kernel: cpuidle: using governor menu Sep 9 23:41:56.030614 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 9 23:41:56.030618 kernel: ASID allocator initialised with 32768 entries Sep 9 23:41:56.030623 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 23:41:56.030628 kernel: Serial: AMBA PL011 UART driver Sep 9 23:41:56.030632 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 23:41:56.030638 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 23:41:56.030643 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 9 23:41:56.030647 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 9 23:41:56.030652 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 23:41:56.030657 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 23:41:56.030661 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 9 23:41:56.030666 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 9 23:41:56.030671 kernel: ACPI: Added _OSI(Module Device) Sep 9 23:41:56.030675 kernel: ACPI: Added _OSI(Processor Device) Sep 9 23:41:56.030681 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 23:41:56.030686 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 23:41:56.030690 kernel: ACPI: Interpreter enabled Sep 9 23:41:56.030695 kernel: ACPI: Using GIC for interrupt routing Sep 9 23:41:56.030700 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 9 23:41:56.030705 kernel: printk: legacy console [ttyAMA0] enabled Sep 9 23:41:56.030709 kernel: printk: legacy bootconsole [pl11] disabled Sep 9 23:41:56.030714 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 9 23:41:56.030719 kernel: ACPI: CPU0 has been hot-added Sep 9 23:41:56.030724 kernel: ACPI: CPU1 has been hot-added Sep 9 23:41:56.030729 kernel: iommu: Default domain type: Translated Sep 9 23:41:56.030734 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 9 23:41:56.030738 kernel: efivars: Registered efivars operations Sep 9 23:41:56.030743 kernel: vgaarb: loaded Sep 9 23:41:56.030748 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 9 23:41:56.030753 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 23:41:56.030758 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 23:41:56.030762 kernel: pnp: PnP ACPI init Sep 9 23:41:56.030768 kernel: pnp: PnP ACPI: found 0 devices Sep 9 23:41:56.030772 kernel: NET: Registered PF_INET protocol family Sep 9 23:41:56.030777 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 23:41:56.030782 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 23:41:56.030787 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 23:41:56.030792 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 23:41:56.030797 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 23:41:56.030801 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 23:41:56.030806 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 23:41:56.030812 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 23:41:56.030816 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 23:41:56.030821 kernel: PCI: CLS 0 bytes, default 64 Sep 9 23:41:56.030826 kernel: kvm [1]: HYP mode not available Sep 9 23:41:56.030830 kernel: Initialise system 
trusted keyrings Sep 9 23:41:56.030835 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 23:41:56.030840 kernel: Key type asymmetric registered Sep 9 23:41:56.030844 kernel: Asymmetric key parser 'x509' registered Sep 9 23:41:56.030849 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 9 23:41:56.030855 kernel: io scheduler mq-deadline registered Sep 9 23:41:56.030859 kernel: io scheduler kyber registered Sep 9 23:41:56.030864 kernel: io scheduler bfq registered Sep 9 23:41:56.030869 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 23:41:56.030874 kernel: thunder_xcv, ver 1.0 Sep 9 23:41:56.030878 kernel: thunder_bgx, ver 1.0 Sep 9 23:41:56.030883 kernel: nicpf, ver 1.0 Sep 9 23:41:56.030888 kernel: nicvf, ver 1.0 Sep 9 23:41:56.031012 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 9 23:41:56.031065 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:41:55 UTC (1757461315) Sep 9 23:41:56.031071 kernel: efifb: probing for efifb Sep 9 23:41:56.031076 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 9 23:41:56.031081 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 9 23:41:56.031086 kernel: efifb: scrolling: redraw Sep 9 23:41:56.031090 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 9 23:41:56.031095 kernel: Console: switching to colour frame buffer device 128x48 Sep 9 23:41:56.031100 kernel: fb0: EFI VGA frame buffer device Sep 9 23:41:56.031106 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 9 23:41:56.031110 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 9 23:41:56.031115 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 9 23:41:56.031128 kernel: NET: Registered PF_INET6 protocol family Sep 9 23:41:56.031133 kernel: watchdog: NMI not fully supported Sep 9 23:41:56.031138 kernel: watchdog: Hard watchdog permanently disabled Sep 9 23:41:56.031143 kernel: Segment Routing with IPv6 Sep 9 23:41:56.031147 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 23:41:56.031152 kernel: NET: Registered PF_PACKET protocol family Sep 9 23:41:56.031158 kernel: Key type dns_resolver registered Sep 9 23:41:56.031163 kernel: registered taskstats version 1 Sep 9 23:41:56.031167 kernel: Loading compiled-in X.509 certificates Sep 9 23:41:56.031173 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 61217a1897415238555e2058a4e44c51622b0f87' Sep 9 23:41:56.031177 kernel: Demotion targets for Node 0: null Sep 9 23:41:56.031182 kernel: Key type .fscrypt registered Sep 9 23:41:56.031187 kernel: Key type fscrypt-provisioning registered Sep 9 23:41:56.031191 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 23:41:56.031196 kernel: ima: Allocated hash algorithm: sha1 Sep 9 23:41:56.031202 kernel: ima: No architecture policies found Sep 9 23:41:56.031206 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 9 23:41:56.031211 kernel: clk: Disabling unused clocks Sep 9 23:41:56.031216 kernel: PM: genpd: Disabling unused power domains Sep 9 23:41:56.031221 kernel: Warning: unable to open an initial console. 
Sep 9 23:41:56.031225 kernel: Freeing unused kernel memory: 38912K Sep 9 23:41:56.031230 kernel: Run /init as init process Sep 9 23:41:56.031235 kernel: with arguments: Sep 9 23:41:56.031239 kernel: /init Sep 9 23:41:56.031245 kernel: with environment: Sep 9 23:41:56.031249 kernel: HOME=/ Sep 9 23:41:56.031254 kernel: TERM=linux Sep 9 23:41:56.031259 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 23:41:56.031264 systemd[1]: Successfully made /usr/ read-only. Sep 9 23:41:56.031271 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 23:41:56.031277 systemd[1]: Detected virtualization microsoft. Sep 9 23:41:56.031282 systemd[1]: Detected architecture arm64. Sep 9 23:41:56.031288 systemd[1]: Running in initrd. Sep 9 23:41:56.031293 systemd[1]: No hostname configured, using default hostname. Sep 9 23:41:56.031298 systemd[1]: Hostname set to . Sep 9 23:41:56.031303 systemd[1]: Initializing machine ID from random generator. Sep 9 23:41:56.031308 systemd[1]: Queued start job for default target initrd.target. Sep 9 23:41:56.031314 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:41:56.031319 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:41:56.031324 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 23:41:56.031331 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 23:41:56.031336 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 23:41:56.031341 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 23:41:56.031347 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 23:41:56.031352 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 23:41:56.031358 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:41:56.031364 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:41:56.031369 systemd[1]: Reached target paths.target - Path Units. Sep 9 23:41:56.031374 systemd[1]: Reached target slices.target - Slice Units. Sep 9 23:41:56.031379 systemd[1]: Reached target swap.target - Swaps. Sep 9 23:41:56.031384 systemd[1]: Reached target timers.target - Timer Units. Sep 9 23:41:56.031389 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 23:41:56.031394 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 23:41:56.031400 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 23:41:56.031405 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 23:41:56.031411 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:41:56.031416 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 23:41:56.031421 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 9 23:41:56.031426 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 23:41:56.031431 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 23:41:56.031437 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 23:41:56.031442 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 23:41:56.031447 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 23:41:56.031454 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 23:41:56.031459 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 23:41:56.031464 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 23:41:56.031482 systemd-journald[225]: Collecting audit messages is disabled. Sep 9 23:41:56.031497 systemd-journald[225]: Journal started Sep 9 23:41:56.031511 systemd-journald[225]: Runtime Journal (/run/log/journal/cb3bb0b1d7df4b03ba59d77f76eb7607) is 8M, max 78.5M, 70.5M free. Sep 9 23:41:56.039163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:41:56.044799 systemd-modules-load[227]: Inserted module 'overlay' Sep 9 23:41:56.057400 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 23:41:56.069598 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 23:41:56.085424 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 23:41:56.085448 kernel: Bridge firewalling registered Sep 9 23:41:56.073778 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:41:56.080910 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 23:41:56.088107 systemd-modules-load[227]: Inserted module 'br_netfilter' Sep 9 23:41:56.089054 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 23:41:56.101716 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:41:56.111146 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 23:41:56.123564 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:41:56.137185 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 23:41:56.150152 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 23:41:56.162673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:41:56.171922 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 23:41:56.177152 systemd-tmpfiles[246]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 23:41:56.190147 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 23:41:56.194766 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:41:56.205354 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 23:41:56.227025 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 23:41:56.233911 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 9 23:41:56.251052 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db Sep 9 23:41:56.281574 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:41:56.294790 systemd-resolved[263]: Positive Trust Anchors: Sep 9 23:41:56.294800 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 23:41:56.294819 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 23:41:56.297230 systemd-resolved[263]: Defaulting to hostname 'linux'. Sep 9 23:41:56.302944 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 23:41:56.307066 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:41:56.396139 kernel: SCSI subsystem initialized Sep 9 23:41:56.402133 kernel: Loading iSCSI transport class v2.0-870. Sep 9 23:41:56.409265 kernel: iscsi: registered transport (tcp) Sep 9 23:41:56.422814 kernel: iscsi: registered transport (qla4xxx) Sep 9 23:41:56.422826 kernel: QLogic iSCSI HBA Driver Sep 9 23:41:56.437369 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 23:41:56.463845 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:41:56.470089 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 23:41:56.517782 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 23:41:56.526277 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 23:41:56.581139 kernel: raid6: neonx8 gen() 18549 MB/s Sep 9 23:41:56.598125 kernel: raid6: neonx4 gen() 18556 MB/s Sep 9 23:41:56.617126 kernel: raid6: neonx2 gen() 17072 MB/s Sep 9 23:41:56.636127 kernel: raid6: neonx1 gen() 15030 MB/s Sep 9 23:41:56.655127 kernel: raid6: int64x8 gen() 10532 MB/s Sep 9 23:41:56.674126 kernel: raid6: int64x4 gen() 10617 MB/s Sep 9 23:41:56.694233 kernel: raid6: int64x2 gen() 8982 MB/s Sep 9 23:41:56.714997 kernel: raid6: int64x1 gen() 7006 MB/s Sep 9 23:41:56.715073 kernel: raid6: using algorithm neonx4 gen() 18556 MB/s Sep 9 23:41:56.736086 kernel: raid6: .... 
xor() 15150 MB/s, rmw enabled Sep 9 23:41:56.736177 kernel: raid6: using neon recovery algorithm Sep 9 23:41:56.743417 kernel: xor: measuring software checksum speed Sep 9 23:41:56.743425 kernel: 8regs : 28595 MB/sec Sep 9 23:41:56.745677 kernel: 32regs : 28816 MB/sec Sep 9 23:41:56.747865 kernel: arm64_neon : 37566 MB/sec Sep 9 23:41:56.750665 kernel: xor: using function: arm64_neon (37566 MB/sec) Sep 9 23:41:56.788215 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 23:41:56.794191 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 23:41:56.799966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:41:56.831096 systemd-udevd[474]: Using default interface naming scheme 'v255'. Sep 9 23:41:56.837157 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:41:56.848934 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 23:41:56.877612 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Sep 9 23:41:56.899719 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 23:41:56.906277 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 23:41:56.951236 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:41:56.963592 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 23:41:57.020980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:41:57.031581 kernel: hv_vmbus: Vmbus version:5.3 Sep 9 23:41:57.031605 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 9 23:41:57.031612 kernel: hv_vmbus: registering driver hid_hyperv Sep 9 23:41:57.024561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:41:57.040142 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:41:57.056570 kernel: hv_vmbus: registering driver hv_netvsc Sep 9 23:41:57.056588 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Sep 9 23:41:57.056595 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 9 23:41:57.063610 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 9 23:41:57.063640 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 9 23:41:57.070989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:41:57.084749 kernel: hv_vmbus: registering driver hv_storvsc Sep 9 23:41:57.084786 kernel: PTP clock support registered Sep 9 23:41:57.097968 kernel: scsi host0: storvsc_host_t Sep 9 23:41:57.106577 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Sep 9 23:41:57.106589 kernel: scsi host1: storvsc_host_t Sep 9 23:41:57.106673 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 9 23:41:57.098371 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:41:57.117418 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 9 23:41:57.117214 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:41:57.117292 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 23:41:57.128653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:41:56.709933 kernel: hv_utils: Registering HyperV Utility Driver Sep 9 23:41:56.714759 kernel: hv_vmbus: registering driver hv_utils Sep 9 23:41:56.714772 kernel: hv_utils: Shutdown IC version 3.2 Sep 9 23:41:56.714778 kernel: hv_utils: Heartbeat IC version 3.0 Sep 9 23:41:56.714784 kernel: hv_utils: TimeSync IC version 4.0 Sep 9 23:41:56.714789 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 9 23:41:56.714909 systemd-journald[225]: Time jumped backwards, rotating. Sep 9 23:41:56.714936 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 9 23:41:56.705218 systemd-resolved[263]: Clock change detected. Flushing caches. Sep 9 23:41:56.742445 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 9 23:41:56.742582 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 9 23:41:56.742650 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 9 23:41:56.742711 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 9 23:41:56.742778 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 9 23:41:56.742834 kernel: hv_netvsc 002248bb-c12e-0022-48bb-c12e002248bb eth0: VF slot 1 added Sep 9 23:41:56.754972 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 23:41:56.755021 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 9 23:41:56.755339 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:41:56.776470 kernel: hv_vmbus: registering driver hv_pci Sep 9 23:41:56.776491 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 9 23:41:56.776668 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 23:41:56.776674 kernel: hv_pci 2fff5f2e-4fb5-4e66-b12c-aa16831fd255: PCI VMBus probing: Using version 0x10004 Sep 9 23:41:56.776760 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 9 23:41:56.782108 kernel: hv_pci 2fff5f2e-4fb5-4e66-b12c-aa16831fd255: PCI host bridge to bus 4fb5:00 Sep 9 23:41:56.786446 kernel: pci_bus 4fb5:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 9 23:41:56.791003 kernel: pci_bus 4fb5:00: No busn resource found for root bus, will use [bus 00-ff] Sep 9 23:41:56.800437 kernel: pci 4fb5:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Sep 9 23:41:56.805001 kernel: pci 4fb5:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 9 23:41:56.809067 kernel: pci 4fb5:00:02.0: enabling Extended Tags Sep 9 23:41:56.820996 kernel: pci 4fb5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4fb5:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Sep 9 23:41:56.829212 kernel: pci_bus 4fb5:00: busn_res: [bus 00-ff] end is updated to 00 Sep 9 23:41:56.829380 kernel: pci 4fb5:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Sep 9 23:41:56.847060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 9 23:41:56.868015 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#274 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 9 23:41:56.895287 kernel: mlx5_core 4fb5:00:02.0: enabling device (0000 -> 0002) Sep 9 23:41:56.903937 kernel: mlx5_core 4fb5:00:02.0: PTM is not supported by PCIe Sep 9 23:41:56.904119 kernel: mlx5_core 4fb5:00:02.0: firmware version: 16.30.5006 Sep 9 23:41:57.080841 kernel: hv_netvsc 
002248bb-c12e-0022-48bb-c12e002248bb eth0: VF registering: eth1 Sep 9 23:41:57.081053 kernel: mlx5_core 4fb5:00:02.0 eth1: joined to eth0 Sep 9 23:41:57.085605 kernel: mlx5_core 4fb5:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 9 23:41:57.094403 kernel: mlx5_core 4fb5:00:02.0 enP20405s1: renamed from eth1 Sep 9 23:41:57.392856 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 9 23:41:57.417328 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 9 23:41:57.428286 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 9 23:41:57.455670 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 9 23:41:57.460769 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 9 23:41:57.473003 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 23:41:57.480687 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 23:41:57.489661 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:41:57.498289 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 23:41:57.506993 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 23:41:57.533668 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 23:41:57.552005 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#314 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 9 23:41:57.555964 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 23:41:57.571030 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 23:41:58.585178 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#297 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 9 23:41:58.598000 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 23:41:58.598288 disk-uuid[663]: The operation has completed successfully. Sep 9 23:41:58.673968 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 23:41:58.677343 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 23:41:58.702117 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 23:41:58.718321 sh[821]: Success Sep 9 23:41:58.749609 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 23:41:58.749663 kernel: device-mapper: uevent: version 1.0.3 Sep 9 23:41:58.750328 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 23:41:58.763017 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 9 23:41:59.087097 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 23:41:59.094656 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 23:41:59.106029 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 9 23:41:59.127069 kernel: BTRFS: device fsid 2bc16190-0dd5-44d6-b331-3d703f5a1d1f devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (846) Sep 9 23:41:59.127115 kernel: BTRFS info (device dm-0): first mount of filesystem 2bc16190-0dd5-44d6-b331-3d703f5a1d1f Sep 9 23:41:59.135137 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 9 23:41:59.487003 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 23:41:59.487081 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 23:41:59.523384 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 23:41:59.527056 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 23:41:59.533312 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 23:41:59.534020 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 23:41:59.551711 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 23:41:59.574000 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (869) Sep 9 23:41:59.583819 kernel: BTRFS info (device sda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5 Sep 9 23:41:59.583859 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 23:41:59.632082 kernel: BTRFS info (device sda6): turning on async discard Sep 9 23:41:59.632131 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 23:41:59.640010 kernel: BTRFS info (device sda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5 Sep 9 23:41:59.640329 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 23:41:59.645474 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 23:41:59.663044 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 23:41:59.672878 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 23:41:59.702284 systemd-networkd[1015]: lo: Link UP Sep 9 23:41:59.704606 systemd-networkd[1015]: lo: Gained carrier Sep 9 23:41:59.705397 systemd-networkd[1015]: Enumeration completed Sep 9 23:41:59.705493 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 23:41:59.708413 systemd-networkd[1015]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:41:59.708416 systemd-networkd[1015]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 23:41:59.712017 systemd[1]: Reached target network.target - Network. Sep 9 23:41:59.778992 kernel: mlx5_core 4fb5:00:02.0 enP20405s1: Link up Sep 9 23:41:59.783009 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 9 23:41:59.816437 systemd-networkd[1015]: enP20405s1: Link UP Sep 9 23:41:59.819713 kernel: hv_netvsc 002248bb-c12e-0022-48bb-c12e002248bb eth0: Data path switched to VF: enP20405s1 Sep 9 23:41:59.816494 systemd-networkd[1015]: eth0: Link UP Sep 9 23:41:59.816581 systemd-networkd[1015]: eth0: Gained carrier Sep 9 23:41:59.816595 systemd-networkd[1015]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 23:41:59.834200 systemd-networkd[1015]: enP20405s1: Gained carrier Sep 9 23:41:59.843019 systemd-networkd[1015]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 9 23:42:01.168139 systemd-networkd[1015]: eth0: Gained IPv6LL Sep 9 23:42:01.179010 ignition[1006]: Ignition 2.21.0 Sep 9 23:42:01.179021 ignition[1006]: Stage: fetch-offline Sep 9 23:42:01.181696 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 23:42:01.179090 ignition[1006]: no configs at "/usr/lib/ignition/base.d" Sep 9 23:42:01.188951 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 9 23:42:01.179096 ignition[1006]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 9 23:42:01.179175 ignition[1006]: parsed url from cmdline: "" Sep 9 23:42:01.179177 ignition[1006]: no config URL provided Sep 9 23:42:01.179181 ignition[1006]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 23:42:01.179185 ignition[1006]: no config at "/usr/lib/ignition/user.ign" Sep 9 23:42:01.179188 ignition[1006]: failed to fetch config: resource requires networking Sep 9 23:42:01.179293 ignition[1006]: Ignition finished successfully Sep 9 23:42:01.216722 ignition[1026]: Ignition 2.21.0 Sep 9 23:42:01.216727 ignition[1026]: Stage: fetch Sep 9 23:42:01.216901 ignition[1026]: no configs at "/usr/lib/ignition/base.d" Sep 9 23:42:01.216907 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 9 23:42:01.216975 ignition[1026]: parsed url from cmdline: "" Sep 9 23:42:01.216977 ignition[1026]: no config URL provided Sep 9 23:42:01.217008 ignition[1026]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 23:42:01.217014 ignition[1026]: no config at "/usr/lib/ignition/user.ign" Sep 9 23:42:01.217048 ignition[1026]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 9 23:42:01.281435 ignition[1026]: GET result: OK Sep 9 23:42:01.281496 ignition[1026]: config has been read from IMDS userdata Sep 9 23:42:01.281516 ignition[1026]: parsing config with SHA512: 2c980c7115b28617f64ec508fdc52460db273e86946707dbb4c2ac62c7623d16ad308499f9319097d926dfbeb7d7cf0267ec1f07fc9d41e93cca9db2506f1f6b Sep 9 23:42:01.284309 unknown[1026]: fetched base config from "system" Sep 9 23:42:01.284314 unknown[1026]: fetched base config from "system" Sep 9 23:42:01.287675 ignition[1026]: fetch: fetch complete Sep 9 23:42:01.284317 unknown[1026]: fetched user config from "azure" Sep 9 23:42:01.287681 ignition[1026]: fetch: fetch passed Sep 9 23:42:01.289598 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 9 23:42:01.287734 ignition[1026]: Ignition finished successfully Sep 9 23:42:01.297104 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 23:42:01.324791 ignition[1032]: Ignition 2.21.0 Sep 9 23:42:01.327035 ignition[1032]: Stage: kargs Sep 9 23:42:01.327212 ignition[1032]: no configs at "/usr/lib/ignition/base.d" Sep 9 23:42:01.327220 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 9 23:42:01.327802 ignition[1032]: kargs: kargs passed Sep 9 23:42:01.330011 ignition[1032]: Ignition finished successfully Sep 9 23:42:01.337552 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 23:42:01.344317 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 9 23:42:01.369510 ignition[1038]: Ignition 2.21.0 Sep 9 23:42:01.371634 ignition[1038]: Stage: disks Sep 9 23:42:01.371799 ignition[1038]: no configs at "/usr/lib/ignition/base.d" Sep 9 23:42:01.374802 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 23:42:01.371807 ignition[1038]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 9 23:42:01.381949 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 23:42:01.372355 ignition[1038]: disks: disks passed Sep 9 23:42:01.388744 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 23:42:01.372400 ignition[1038]: Ignition finished successfully Sep 9 23:42:01.396916 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 23:42:01.404154 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 23:42:01.410291 systemd[1]: Reached target basic.target - Basic System. Sep 9 23:42:01.418448 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 23:42:01.505687 systemd-fsck[1046]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Sep 9 23:42:01.512718 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 23:42:01.518317 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 23:42:03.913008 kernel: EXT4-fs (sda9): mounted filesystem 7cc0d7f3-e4a1-4dc4-8b58-ceece0d874c1 r/w with ordered data mode. Quota mode: none. Sep 9 23:42:03.913565 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 23:42:03.917284 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 23:42:03.952051 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 23:42:03.970744 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 23:42:03.978709 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 9 23:42:03.993675 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1060) Sep 9 23:42:03.989514 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 23:42:04.015255 kernel: BTRFS info (device sda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5 Sep 9 23:42:04.015274 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 23:42:03.989555 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 23:42:04.004173 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 23:42:04.026778 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 23:42:04.043759 kernel: BTRFS info (device sda6): turning on async discard Sep 9 23:42:04.043792 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 23:42:04.046748 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 23:42:04.539758 coreos-metadata[1062]: Sep 09 23:42:04.539 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 9 23:42:04.545295 coreos-metadata[1062]: Sep 09 23:42:04.544 INFO Fetch successful Sep 9 23:42:04.545295 coreos-metadata[1062]: Sep 09 23:42:04.545 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 9 23:42:04.556654 coreos-metadata[1062]: Sep 09 23:42:04.556 INFO Fetch successful Sep 9 23:42:04.572596 coreos-metadata[1062]: Sep 09 23:42:04.571 INFO wrote hostname ci-4426.0.0-n-044e8b6791 to /sysroot/etc/hostname Sep 9 23:42:04.578786 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 9 23:42:04.826147 initrd-setup-root[1090]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 23:42:04.872461 initrd-setup-root[1097]: cut: /sysroot/etc/group: No such file or directory Sep 9 23:42:04.892798 initrd-setup-root[1104]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 23:42:04.899807 initrd-setup-root[1111]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 23:42:06.164194 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 23:42:06.169831 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 23:42:06.185575 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 23:42:06.199182 kernel: BTRFS info (device sda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5 Sep 9 23:42:06.191863 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 23:42:06.217786 ignition[1180]: INFO : Ignition 2.21.0 Sep 9 23:42:06.217786 ignition[1180]: INFO : Stage: mount Sep 9 23:42:06.230913 ignition[1180]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 23:42:06.230913 ignition[1180]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 9 23:42:06.230913 ignition[1180]: INFO : mount: mount passed Sep 9 23:42:06.230913 ignition[1180]: INFO : Ignition finished successfully Sep 9 23:42:06.222090 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 23:42:06.234806 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 23:42:06.252105 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 23:42:06.262106 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 23:42:06.289021 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1192) Sep 9 23:42:06.297901 kernel: BTRFS info (device sda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5 Sep 9 23:42:06.297925 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 23:42:06.305992 kernel: BTRFS info (device sda6): turning on async discard Sep 9 23:42:06.306021 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 23:42:06.307740 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 23:42:06.332399 ignition[1210]: INFO : Ignition 2.21.0 Sep 9 23:42:06.332399 ignition[1210]: INFO : Stage: files Sep 9 23:42:06.338302 ignition[1210]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 23:42:06.338302 ignition[1210]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 9 23:42:06.338302 ignition[1210]: DEBUG : files: compiled without relabeling support, skipping Sep 9 23:42:06.351413 ignition[1210]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 23:42:06.351413 ignition[1210]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 23:42:06.403849 ignition[1210]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 23:42:06.408913 ignition[1210]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 23:42:06.408913 ignition[1210]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 23:42:06.404286 unknown[1210]: wrote ssh authorized keys file for user: core Sep 9 23:42:06.462964 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 9 23:42:06.470553 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 9 23:42:06.491071 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 23:42:06.568043 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 9 23:42:06.568043 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 23:42:06.581792 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 9 23:42:06.772119 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 23:42:06.851506 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 23:42:06.858110 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 23:42:06.858110 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 23:42:06.858110 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 23:42:06.858110 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 23:42:06.858110 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 23:42:06.858110 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 23:42:06.858110 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 23:42:06.858110 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 23:42:06.910143 
ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 23:42:06.910143 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 23:42:06.910143 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 23:42:06.910143 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 23:42:06.910143 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 23:42:06.910143 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 9 23:42:07.385826 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 23:42:07.708059 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 23:42:07.708059 ignition[1210]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 23:42:07.739988 ignition[1210]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 23:42:07.753908 ignition[1210]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 23:42:07.753908 ignition[1210]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 23:42:07.771120 ignition[1210]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 9 23:42:07.771120 ignition[1210]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 23:42:07.771120 ignition[1210]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 23:42:07.771120 ignition[1210]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 23:42:07.771120 ignition[1210]: INFO : files: files passed Sep 9 23:42:07.771120 ignition[1210]: INFO : Ignition finished successfully Sep 9 23:42:07.755298 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 23:42:07.765794 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 23:42:07.785430 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 23:42:07.797106 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 23:42:07.797179 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
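Once the host is up, the outcome of this files stage can be checked directly. The paths below are the ones written in the entries above and the commands are standard systemd/coreutils tooling; treat the listing as illustrative:

    # Summary record written by Ignition at the end of the run
    cat /etc/.ignition-result.json
    # Unit installed and preset-enabled by the config
    systemctl cat prepare-helm.service
    # Sysext image and the symlink that activates it
    ls -l /etc/extensions/kubernetes.raw /opt/extensions/kubernetes/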
Sep 9 23:42:07.834743 initrd-setup-root-after-ignition[1239]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 23:42:07.834743 initrd-setup-root-after-ignition[1239]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 23:42:07.845971 initrd-setup-root-after-ignition[1243]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 23:42:07.842182 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 23:42:07.850547 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 23:42:07.860532 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 23:42:07.905588 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 23:42:07.905697 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 23:42:07.913483 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 23:42:07.921390 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 23:42:07.928341 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 23:42:07.930010 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 23:42:07.962460 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 23:42:07.968540 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 23:42:07.995284 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:42:07.999920 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:42:08.008019 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 23:42:08.015534 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 23:42:08.015642 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 23:42:08.026164 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 23:42:08.030309 systemd[1]: Stopped target basic.target - Basic System. Sep 9 23:42:08.037691 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 23:42:08.045186 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 23:42:08.052235 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 23:42:08.060582 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 23:42:08.068645 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 23:42:08.076564 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 23:42:08.084872 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 23:42:08.092200 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 23:42:08.099973 systemd[1]: Stopped target swap.target - Swaps. Sep 9 23:42:08.106575 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 23:42:08.106686 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 23:42:08.116548 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:42:08.121047 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:42:08.129117 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
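The "No such file or directory" grep messages above come from initrd-setup-root-after-ignition.service looking for a list of optional Flatcar sysexts to enable; on this image neither file exists, which is harmless. To check for the same files on a running host (paths taken verbatim from the log; absence is normal):

    # Either location may carry the optional-sysext list; no output means none configured
    cat /etc/flatcar/enabled-sysext.conf /usr/share/flatcar/enabled-sysext.conf 2>/dev/null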
Sep 9 23:42:08.132828 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:42:08.138316 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 23:42:08.138412 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 23:42:08.150219 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 23:42:08.150303 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 23:42:08.155588 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 23:42:08.155658 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 23:42:08.162814 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 9 23:42:08.218741 ignition[1263]: INFO : Ignition 2.21.0 Sep 9 23:42:08.218741 ignition[1263]: INFO : Stage: umount Sep 9 23:42:08.218741 ignition[1263]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 23:42:08.218741 ignition[1263]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 9 23:42:08.218741 ignition[1263]: INFO : umount: umount passed Sep 9 23:42:08.218741 ignition[1263]: INFO : Ignition finished successfully Sep 9 23:42:08.162876 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 9 23:42:08.173621 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 23:42:08.197804 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 23:42:08.208554 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 23:42:08.208691 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:42:08.215221 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 23:42:08.215345 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 23:42:08.229931 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 23:42:08.232127 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 23:42:08.240515 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 23:42:08.240593 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 23:42:08.248407 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 23:42:08.248448 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 23:42:08.254980 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 23:42:08.255035 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 23:42:08.262780 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 9 23:42:08.262811 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 9 23:42:08.272433 systemd[1]: Stopped target network.target - Network. Sep 9 23:42:08.279922 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 23:42:08.279963 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 23:42:08.287777 systemd[1]: Stopped target paths.target - Path Units. Sep 9 23:42:08.300267 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 23:42:08.304040 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:42:08.312946 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 23:42:08.319795 systemd[1]: Stopped target sockets.target - Socket Units. 
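Because the initrd journal is flushed into the persistent journal after switch-root, the whole Ignition sequence being torn down here can be replayed later from the running system, for example:

    # Ignition stages (fetch/disks/mount/files) as recorded for the current boot
    journalctl -b -o short-monotonic \
      -u ignition-fetch.service -u ignition-disks.service \
      -u ignition-mount.service -u ignition-files.service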
Sep 9 23:42:08.327431 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 23:42:08.327482 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 23:42:08.334636 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 23:42:08.334666 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 23:42:08.343220 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 23:42:08.343275 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 23:42:08.350599 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 23:42:08.350627 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 23:42:08.358757 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 23:42:08.368095 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 23:42:08.376941 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 23:42:08.380023 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 23:42:08.380111 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 23:42:08.391898 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 23:42:08.392092 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 23:42:08.392282 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 23:42:08.563200 kernel: hv_netvsc 002248bb-c12e-0022-48bb-c12e002248bb eth0: Data path switched from VF: enP20405s1 Sep 9 23:42:08.404643 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 23:42:08.404830 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 23:42:08.404917 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 23:42:08.413099 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 23:42:08.420147 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 23:42:08.420195 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:42:08.430047 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 23:42:08.430116 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 23:42:08.438443 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 23:42:08.447432 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 23:42:08.447508 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 23:42:08.452840 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 23:42:08.452881 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:42:08.463879 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 23:42:08.463923 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 23:42:08.472456 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 23:42:08.472512 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:42:08.484501 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:42:08.492862 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 23:42:08.492923 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:42:08.509159 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 9 23:42:08.509326 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:42:08.518515 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 23:42:08.518555 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 23:42:08.525677 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 23:42:08.525701 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:42:08.533072 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 23:42:08.533115 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 23:42:08.551825 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 23:42:08.551880 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 23:42:08.563273 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 23:42:08.563343 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 23:42:08.578163 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 23:42:08.585624 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 23:42:08.585693 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:42:08.597402 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 23:42:08.597458 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:42:08.618992 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 23:42:08.619056 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 23:42:08.627381 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 23:42:08.627435 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:42:08.631947 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:42:08.631993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:42:08.644773 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 23:42:08.644818 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 9 23:42:08.644839 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 23:42:08.644862 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:42:08.811986 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Sep 9 23:42:08.645161 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 23:42:08.645252 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 23:42:08.651095 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 23:42:08.651168 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 23:42:08.659717 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 23:42:08.669007 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 23:42:08.708167 systemd[1]: Switching root. 
Sep 9 23:42:08.836239 systemd-journald[225]: Journal stopped Sep 9 23:42:16.600964 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 23:42:16.601007 kernel: SELinux: policy capability open_perms=1 Sep 9 23:42:16.601016 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 23:42:16.601022 kernel: SELinux: policy capability always_check_network=0 Sep 9 23:42:16.601030 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 23:42:16.601036 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 23:42:16.601042 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 23:42:16.601047 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 23:42:16.601053 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 23:42:16.601058 kernel: audit: type=1403 audit(1757461330.336:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 23:42:16.601066 systemd[1]: Successfully loaded SELinux policy in 247.051ms. Sep 9 23:42:16.601074 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.274ms. Sep 9 23:42:16.601080 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 23:42:16.601087 systemd[1]: Detected virtualization microsoft. Sep 9 23:42:16.601093 systemd[1]: Detected architecture arm64. Sep 9 23:42:16.601100 systemd[1]: Detected first boot. Sep 9 23:42:16.601107 systemd[1]: Hostname set to . Sep 9 23:42:16.601115 systemd[1]: Initializing machine ID from random generator. Sep 9 23:42:16.601121 zram_generator::config[1306]: No configuration found. Sep 9 23:42:16.601128 kernel: NET: Registered PF_VSOCK protocol family Sep 9 23:42:16.601134 systemd[1]: Populated /etc with preset unit settings. Sep 9 23:42:16.601141 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 23:42:16.601148 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 23:42:16.601154 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 23:42:16.601160 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 23:42:16.601166 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 23:42:16.601172 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 23:42:16.601178 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 23:42:16.601184 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 23:42:16.601192 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 23:42:16.601198 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 23:42:16.601204 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 23:42:16.601210 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 23:42:16.601216 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:42:16.601222 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
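The policy-load and relabel timings above are reported by systemd itself. A rough sketch of how to query the same areas from a shell, using standard systemd and SELinux interfaces (nothing specific to this host):

    # 1 = enforcing, 0 = permissive (Flatcar has historically defaulted to permissive)
    cat /sys/fs/selinux/enforce
    # Where boot time went, including the units logged in this section
    systemd-analyze blame | head
    systemd-analyze critical-chain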
Sep 9 23:42:16.601228 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 23:42:16.601235 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 23:42:16.601241 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 23:42:16.601249 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 23:42:16.601255 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 9 23:42:16.601263 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:42:16.601269 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:42:16.601276 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 23:42:16.601283 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 23:42:16.601289 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 23:42:16.601296 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 23:42:16.601302 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:42:16.601308 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 23:42:16.601314 systemd[1]: Reached target slices.target - Slice Units. Sep 9 23:42:16.601320 systemd[1]: Reached target swap.target - Swaps. Sep 9 23:42:16.601326 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 23:42:16.601332 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 23:42:16.601340 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 23:42:16.601346 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:42:16.601352 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 23:42:16.601358 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:42:16.601364 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 23:42:16.601370 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 23:42:16.601378 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 23:42:16.601384 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 23:42:16.601391 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 23:42:16.601397 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 23:42:16.601403 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 23:42:16.601410 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 23:42:16.601416 systemd[1]: Reached target machines.target - Containers. Sep 9 23:42:16.601422 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 23:42:16.601429 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:42:16.601436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 23:42:16.601442 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
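The socket units being listened on and the API filesystems being mounted in this stretch are all queryable at runtime with standard tooling; an illustrative check:

    # Sockets systemd holds open on behalf of services (networkd, udevd, userdbd, ...)
    systemctl list-sockets --no-pager | head
    # The kernel API filesystems mounted here (debugfs, tracefs, mqueue, hugepages)
    findmnt -t debugfs,tracefs,mqueue,hugetlbfs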
Sep 9 23:42:16.601448 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:42:16.601454 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:42:16.601460 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:42:16.601466 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 23:42:16.601473 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:42:16.601479 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 23:42:16.601486 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 23:42:16.601493 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 23:42:16.601499 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 23:42:16.601505 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 23:42:16.601511 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:42:16.601518 kernel: fuse: init (API version 7.41) Sep 9 23:42:16.601524 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 23:42:16.601530 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 23:42:16.601538 kernel: loop: module loaded Sep 9 23:42:16.601544 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 23:42:16.601550 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 23:42:16.601556 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 23:42:16.601563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 23:42:16.601569 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 23:42:16.601575 systemd[1]: Stopped verity-setup.service. Sep 9 23:42:16.601581 kernel: ACPI: bus type drm_connector registered Sep 9 23:42:16.601604 systemd-journald[1386]: Collecting audit messages is disabled. Sep 9 23:42:16.601618 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 23:42:16.601625 systemd-journald[1386]: Journal started Sep 9 23:42:16.601642 systemd-journald[1386]: Runtime Journal (/run/log/journal/2b3f80b75b5d4772b69268be59291c65) is 8M, max 78.5M, 70.5M free. Sep 9 23:42:15.854276 systemd[1]: Queued start job for default target multi-user.target. Sep 9 23:42:15.861435 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 9 23:42:15.861809 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 23:42:15.862092 systemd[1]: systemd-journald.service: Consumed 2.183s CPU time. Sep 9 23:42:16.615715 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 23:42:16.616360 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 23:42:16.620346 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 23:42:16.624046 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 23:42:16.628035 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 23:42:16.632393 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
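The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop entries around here are all instances of a single template unit, and journald has just reported its runtime journal size; both can be inspected directly:

    # One template serves every modprobe@<module> instance seen in this log
    systemctl cat modprobe@.service
    # Current journal usage (the runtime journal above reports 8M of a 78.5M cap)
    journalctl --disk-usage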
Sep 9 23:42:16.637011 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 23:42:16.642437 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:42:16.647518 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 23:42:16.647653 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 23:42:16.652038 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:42:16.652169 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:42:16.656353 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:42:16.656465 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:42:16.660676 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:42:16.660808 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:42:16.665497 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 23:42:16.665613 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 23:42:16.669702 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:42:16.669826 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:42:16.673942 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 23:42:16.678280 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:42:16.683090 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 23:42:16.687832 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 23:42:16.692702 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:42:16.704472 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 23:42:16.709704 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 23:42:16.718076 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 23:42:16.722195 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 23:42:16.722230 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 23:42:16.726940 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 23:42:16.732509 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 23:42:16.736202 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:42:16.750418 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 23:42:16.761653 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 23:42:16.766131 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:42:16.766974 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 23:42:16.770919 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:42:16.771671 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 9 23:42:16.778114 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 23:42:16.783097 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 23:42:16.789683 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 23:42:16.794466 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 23:42:16.801639 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 23:42:16.806266 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 23:42:16.813964 systemd-journald[1386]: Time spent on flushing to /var/log/journal/2b3f80b75b5d4772b69268be59291c65 is 33.432ms for 949 entries. Sep 9 23:42:16.813964 systemd-journald[1386]: System Journal (/var/log/journal/2b3f80b75b5d4772b69268be59291c65) is 11.8M, max 2.6G, 2.6G free. Sep 9 23:42:16.910942 systemd-journald[1386]: Received client request to flush runtime journal. Sep 9 23:42:16.911008 systemd-journald[1386]: /var/log/journal/2b3f80b75b5d4772b69268be59291c65/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Sep 9 23:42:16.911026 systemd-journald[1386]: Rotating system journal. Sep 9 23:42:16.911045 kernel: loop0: detected capacity change from 0 to 119320 Sep 9 23:42:16.813181 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 23:42:16.912389 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 23:42:16.921632 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 23:42:16.923683 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 23:42:16.941339 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:42:17.000405 systemd-tmpfiles[1447]: ACLs are not supported, ignoring. Sep 9 23:42:17.000417 systemd-tmpfiles[1447]: ACLs are not supported, ignoring. Sep 9 23:42:17.003223 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 23:42:17.008764 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 23:42:17.404014 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 23:42:17.456009 kernel: loop1: detected capacity change from 0 to 100608 Sep 9 23:42:17.677864 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 23:42:17.683097 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 23:42:17.705698 systemd-tmpfiles[1465]: ACLs are not supported, ignoring. Sep 9 23:42:17.705712 systemd-tmpfiles[1465]: ACLs are not supported, ignoring. Sep 9 23:42:17.707893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:42:17.967007 kernel: loop2: detected capacity change from 0 to 211168 Sep 9 23:42:18.016029 kernel: loop3: detected capacity change from 0 to 29264 Sep 9 23:42:18.574008 kernel: loop4: detected capacity change from 0 to 119320 Sep 9 23:42:18.585014 kernel: loop5: detected capacity change from 0 to 100608 Sep 9 23:42:18.591105 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 23:42:18.600000 kernel: loop6: detected capacity change from 0 to 211168 Sep 9 23:42:18.601126 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
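The machine-id commit and the sysusers/tmpfiles runs above are driven entirely by static configuration shipped in /usr and /etc; as a quick sketch, the persisted ID and the effective merged configuration can be dumped with the same tools:

    # Persisted machine ID written by systemd-machine-id-commit
    cat /etc/machine-id
    # Effective sysusers/tmpfiles configuration after all drop-ins are merged
    systemd-sysusers --cat-config | head
    systemd-tmpfiles --cat-config | head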
Sep 9 23:42:18.623009 kernel: loop7: detected capacity change from 0 to 29264 Sep 9 23:42:18.630543 (sd-merge)[1471]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 9 23:42:18.630902 (sd-merge)[1471]: Merged extensions into '/usr'. Sep 9 23:42:18.632140 systemd-udevd[1473]: Using default interface naming scheme 'v255'. Sep 9 23:42:18.634736 systemd[1]: Reload requested from client PID 1445 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 23:42:18.634748 systemd[1]: Reloading... Sep 9 23:42:18.692010 zram_generator::config[1498]: No configuration found. Sep 9 23:42:18.867911 systemd[1]: Reloading finished in 232 ms. Sep 9 23:42:18.890078 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 23:42:18.899912 systemd[1]: Starting ensure-sysext.service... Sep 9 23:42:18.905106 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 23:42:18.937903 systemd[1]: Reload requested from client PID 1554 ('systemctl') (unit ensure-sysext.service)... Sep 9 23:42:18.937921 systemd[1]: Reloading... Sep 9 23:42:18.947848 systemd-tmpfiles[1555]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 23:42:18.947868 systemd-tmpfiles[1555]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 23:42:18.948125 systemd-tmpfiles[1555]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 23:42:18.948258 systemd-tmpfiles[1555]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 23:42:18.948668 systemd-tmpfiles[1555]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 23:42:18.948802 systemd-tmpfiles[1555]: ACLs are not supported, ignoring. Sep 9 23:42:18.948830 systemd-tmpfiles[1555]: ACLs are not supported, ignoring. Sep 9 23:42:18.990007 zram_generator::config[1581]: No configuration found. Sep 9 23:42:18.991232 systemd-tmpfiles[1555]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:42:18.991244 systemd-tmpfiles[1555]: Skipping /boot Sep 9 23:42:18.997185 systemd-tmpfiles[1555]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:42:18.997196 systemd-tmpfiles[1555]: Skipping /boot Sep 9 23:42:19.125157 systemd[1]: Reloading finished in 186 ms. Sep 9 23:42:19.132928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:42:19.146124 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:42:19.182213 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 23:42:19.191649 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 23:42:19.199688 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 23:42:19.213098 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 23:42:19.223410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:42:19.234168 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:42:19.241169 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
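The (sd-merge) lines above are systemd-sysext overlaying the four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-azure) onto /usr, and /opt where applicable. The merge state can be shown at any time; the directory names below are the ones that appear earlier in this log:

    # Which images are merged and onto which hierarchies
    systemd-sysext status
    # Raw images picked up for the merge (locations as seen in the log)
    ls /etc/extensions /opt/extensions 2>/dev/null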
Sep 9 23:42:19.246517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:42:19.251020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:42:19.251115 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:42:19.251885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:42:19.252201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:42:19.256771 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:42:19.256907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:42:19.262132 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:42:19.262312 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:42:19.270488 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:42:19.271622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:42:19.279686 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:42:19.287882 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:42:19.293157 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:42:19.293267 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:42:19.298533 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 23:42:19.303074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:42:19.303223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:42:19.309689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:42:19.309842 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:42:19.314910 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:42:19.315062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:42:19.323398 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Sep 9 23:42:19.327183 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:42:19.330245 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:42:19.336787 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:42:19.348127 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:42:19.356496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:42:19.361972 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
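The "Duplicate line" notices from systemd-tmpfiles a few lines up typically mean that two tmpfiles.d fragments declare the same path, which is a plausible outcome once the merged extensions add their own fragments next to the base image's. One hedged way to audit such overlaps is systemd-delta:

    # Show tmpfiles.d fragments that extend or override one another
    systemd-delta --type=extended,overridden tmpfiles.d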
Sep 9 23:42:19.362099 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:42:19.362212 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 23:42:19.369151 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 23:42:19.374212 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:42:19.374393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:42:19.379202 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:42:19.379345 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:42:19.383613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:42:19.383748 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:42:19.388592 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:42:19.390027 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:42:19.403109 systemd[1]: Finished ensure-sysext.service. Sep 9 23:42:19.406953 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:42:19.416629 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 23:42:19.431446 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 23:42:19.437289 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:42:19.437360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:42:19.503043 systemd-resolved[1643]: Positive Trust Anchors: Sep 9 23:42:19.503056 systemd-resolved[1643]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 23:42:19.503076 systemd-resolved[1643]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 23:42:19.552901 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 23:42:19.589148 systemd-resolved[1643]: Using system hostname 'ci-4426.0.0-n-044e8b6791'. Sep 9 23:42:19.590225 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 23:42:19.596418 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:42:19.611028 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 23:42:19.625007 augenrules[1741]: No rules Sep 9 23:42:19.626086 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:42:19.627118 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:42:19.662884 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. 
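The trust-anchor and negative-zone lists above are systemd-resolved's built-in defaults, and the resolver has just settled on the hostname written by coreos-metadata earlier. The live resolver state can be read back with resolvectl once DHCP (below) has supplied servers:

    # Global and per-link DNS configuration as seen by systemd-resolved
    resolvectl status
    resolvectl dns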
Sep 9 23:42:19.672216 kernel: hv_vmbus: registering driver hv_balloon Sep 9 23:42:19.672307 kernel: hv_vmbus: registering driver hyperv_fb Sep 9 23:42:19.672320 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#290 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 9 23:42:19.695891 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 9 23:42:19.701614 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 9 23:42:19.701679 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 9 23:42:19.701698 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 9 23:42:19.712279 kernel: Console: switching to colour dummy device 80x25 Sep 9 23:42:19.716609 systemd-networkd[1714]: lo: Link UP Sep 9 23:42:19.717592 systemd-networkd[1714]: lo: Gained carrier Sep 9 23:42:19.718687 kernel: Console: switching to colour frame buffer device 128x48 Sep 9 23:42:19.723616 systemd-networkd[1714]: Enumeration completed Sep 9 23:42:19.727120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:42:19.732013 systemd-networkd[1714]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:42:19.732018 systemd-networkd[1714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 23:42:19.733403 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 23:42:19.749269 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 23:42:19.762680 systemd[1]: Reached target network.target - Network. Sep 9 23:42:19.767841 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 23:42:19.773518 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 23:42:19.781213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:42:19.786176 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:42:19.792099 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:42:19.799788 kernel: mlx5_core 4fb5:00:02.0 enP20405s1: Link up Sep 9 23:42:19.804259 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 9 23:42:19.800053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:42:19.807975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:42:19.808145 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:42:19.817207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:42:19.836001 kernel: hv_netvsc 002248bb-c12e-0022-48bb-c12e002248bb eth0: Data path switched to VF: enP20405s1 Sep 9 23:42:19.838069 systemd-networkd[1714]: enP20405s1: Link UP Sep 9 23:42:19.838505 systemd-networkd[1714]: eth0: Link UP Sep 9 23:42:19.839238 systemd-networkd[1714]: eth0: Gained carrier Sep 9 23:42:19.839340 systemd-networkd[1714]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:42:19.848757 systemd-networkd[1714]: enP20405s1: Gained carrier Sep 9 23:42:19.854339 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
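The hv_netvsc line above records Azure accelerated networking handing the data path to the Mellanox VF (enP20405s1) that backs the synthetic eth0. The pairing is easy to see from userspace; interface names here are taken from this log and will differ on other hosts:

    # Synthetic NIC (hv_netvsc) and its SR-IOV VF (mlx5) appear as separate links
    ip -br link
    networkctl list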
Sep 9 23:42:19.868143 systemd-networkd[1714]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 9 23:42:19.901408 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 9 23:42:19.909152 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 23:42:19.939028 kernel: MACsec IEEE 802.1AE Sep 9 23:42:20.028362 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 23:42:21.079650 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:42:21.136197 systemd-networkd[1714]: eth0: Gained IPv6LL Sep 9 23:42:21.141276 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 23:42:21.146086 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 23:42:21.663045 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 23:42:21.668005 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 23:42:25.409459 ldconfig[1440]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 23:42:25.421075 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 23:42:25.426906 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 23:42:25.455353 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 23:42:25.459809 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 23:42:25.464117 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 23:42:25.468999 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 23:42:25.473883 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 23:42:25.478354 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 23:42:25.483160 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 23:42:25.487783 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 23:42:25.487805 systemd[1]: Reached target paths.target - Path Units. Sep 9 23:42:25.491539 systemd[1]: Reached target timers.target - Timer Units. Sep 9 23:42:25.543040 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 23:42:25.548410 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 23:42:25.553255 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 23:42:25.558464 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 23:42:25.563239 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 23:42:25.568927 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 23:42:25.573302 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 23:42:25.578499 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
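With the DHCPv4 lease above in place (10.200.20.13/24, gateway 10.200.20.1, handed out by 168.63.129.16), the per-link details can be confirmed with networkctl:

    # Address, gateway, DNS and carrier state for the interface configured above
    networkctl status eth0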
Sep 9 23:42:25.582535 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 23:42:25.586030 systemd[1]: Reached target basic.target - Basic System. Sep 9 23:42:25.589517 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 23:42:25.589541 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 23:42:25.604150 systemd[1]: Starting chronyd.service - NTP client/server... Sep 9 23:42:25.607962 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 23:42:25.618658 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 9 23:42:25.625107 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 23:42:25.630633 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 23:42:25.637471 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 23:42:25.644660 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 23:42:25.649133 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 23:42:25.652105 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 9 23:42:25.657483 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 9 23:42:25.658683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:42:25.664610 jq[1854]: false Sep 9 23:42:25.666367 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 23:42:25.673943 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 23:42:25.678123 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 23:42:25.684369 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 23:42:25.691124 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 23:42:25.696710 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 23:42:25.700950 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 23:42:25.701336 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 23:42:25.701793 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 23:42:25.706459 KVP[1856]: KVP starting; pid is:1856 Sep 9 23:42:25.706971 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 23:42:25.711705 chronyd[1846]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Sep 9 23:42:25.717327 KVP[1856]: KVP LIC Version: 3.1 Sep 9 23:42:25.717763 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 23:42:25.718069 kernel: hv_utils: KVP IC version 4.0 Sep 9 23:42:25.722428 jq[1869]: true Sep 9 23:42:25.725151 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 23:42:25.725665 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 9 23:42:25.730729 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 23:42:25.734260 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 23:42:25.749764 extend-filesystems[1855]: Found /dev/sda6 Sep 9 23:42:25.770559 jq[1875]: true Sep 9 23:42:25.774971 extend-filesystems[1855]: Found /dev/sda9 Sep 9 23:42:25.781115 update_engine[1868]: I20250909 23:42:25.774534 1868 main.cc:92] Flatcar Update Engine starting Sep 9 23:42:25.781348 extend-filesystems[1855]: Checking size of /dev/sda9 Sep 9 23:42:25.782559 chronyd[1846]: Timezone right/UTC failed leap second check, ignoring Sep 9 23:42:25.787285 (ntainerd)[1890]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 23:42:25.788913 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 23:42:25.789269 chronyd[1846]: Loaded seccomp filter (level 2) Sep 9 23:42:25.790047 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 23:42:25.795667 systemd[1]: Started chronyd.service - NTP client/server. Sep 9 23:42:25.806494 tar[1874]: linux-arm64/LICENSE Sep 9 23:42:25.807801 tar[1874]: linux-arm64/helm Sep 9 23:42:25.819542 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 23:42:25.828110 systemd-logind[1867]: New seat seat0. Sep 9 23:42:25.831585 systemd-logind[1867]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 9 23:42:25.831792 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 23:42:25.850406 extend-filesystems[1855]: Old size kept for /dev/sda9 Sep 9 23:42:25.859073 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 23:42:25.859333 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 23:42:25.916905 bash[1917]: Updated "/home/core/.ssh/authorized_keys" Sep 9 23:42:25.920827 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 23:42:25.929405 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 23:42:26.098838 dbus-daemon[1849]: [system] SELinux support is enabled Sep 9 23:42:26.099105 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 23:42:26.106766 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 23:42:26.106793 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 23:42:26.108161 dbus-daemon[1849]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 23:42:26.112129 update_engine[1868]: I20250909 23:42:26.111914 1868 update_check_scheduler.cc:74] Next update check in 7m46s Sep 9 23:42:26.116242 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 23:42:26.116266 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 23:42:26.126169 systemd[1]: Started update-engine.service - Update Engine. Sep 9 23:42:26.135844 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
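chronyd, update-engine and locksmithd all come up in this stretch. Their state can be checked with their usual client tools; the commands below are the standard Flatcar/chrony clients, shown here only as a sketch, and their output will naturally differ per host:

    # NTP synchronisation state from chronyd
    chronyc tracking
    chronyc sources -v
    # Update engine status and the reboot manager built on top of it
    update_engine_client -status
    locksmithctl status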
Sep 9 23:42:26.199944 coreos-metadata[1848]: Sep 09 23:42:26.199 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 9 23:42:26.204608 coreos-metadata[1848]: Sep 09 23:42:26.204 INFO Fetch successful Sep 9 23:42:26.204845 coreos-metadata[1848]: Sep 09 23:42:26.204 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 9 23:42:26.209269 coreos-metadata[1848]: Sep 09 23:42:26.209 INFO Fetch successful Sep 9 23:42:26.209592 coreos-metadata[1848]: Sep 09 23:42:26.209 INFO Fetching http://168.63.129.16/machine/183c28af-faa3-4d47-a2d1-4fca8001315d/4aeb5a65%2Dffbc%2D431e%2Dbf36%2D32c6ef5824ec.%5Fci%2D4426.0.0%2Dn%2D044e8b6791?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 9 23:42:26.211948 coreos-metadata[1848]: Sep 09 23:42:26.211 INFO Fetch successful Sep 9 23:42:26.212176 coreos-metadata[1848]: Sep 09 23:42:26.212 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 9 23:42:26.223567 coreos-metadata[1848]: Sep 09 23:42:26.223 INFO Fetch successful Sep 9 23:42:26.244398 sshd_keygen[1910]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 23:42:26.258659 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 9 23:42:26.265081 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 23:42:26.282251 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 23:42:26.288284 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 23:42:26.290784 tar[1874]: linux-arm64/README.md Sep 9 23:42:26.295816 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 9 23:42:26.316928 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 23:42:26.323571 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 23:42:26.323758 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 23:42:26.335639 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 23:42:26.340879 locksmithd[1991]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 23:42:26.350921 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 9 23:42:26.365683 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 23:42:26.372240 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 23:42:26.379520 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 23:42:26.385459 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 23:42:26.594393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
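The coreos-metadata fetches above talk to the two Azure control-plane endpoints: the WireServer at 168.63.129.16 and the Instance Metadata Service at 169.254.169.254. Below is a minimal illustrative sketch (not part of the log) of the same vmSize query, assuming the standard IMDS requirement that requests carry a "Metadata: true" header; the header is not visible in the log itself and is an assumption here.

    import urllib.request

    # Same vmSize query as the coreos-metadata fetch logged above; the
    # "Metadata: true" header is the usual IMDS requirement and is an
    # assumption, since the log does not show request headers.
    IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
                "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # prints the VM size as plain text
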
Sep 9 23:42:26.601393 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:42:26.626339 containerd[1890]: time="2025-09-09T23:42:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 23:42:26.628000 containerd[1890]: time="2025-09-09T23:42:26.627181952Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 23:42:26.632506 containerd[1890]: time="2025-09-09T23:42:26.632479624Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7µs" Sep 9 23:42:26.632584 containerd[1890]: time="2025-09-09T23:42:26.632571808Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 23:42:26.632628 containerd[1890]: time="2025-09-09T23:42:26.632618800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 23:42:26.632810 containerd[1890]: time="2025-09-09T23:42:26.632793208Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 23:42:26.632885 containerd[1890]: time="2025-09-09T23:42:26.632873392Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 23:42:26.632937 containerd[1890]: time="2025-09-09T23:42:26.632927008Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 23:42:26.633051 containerd[1890]: time="2025-09-09T23:42:26.633036016Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 23:42:26.633098 containerd[1890]: time="2025-09-09T23:42:26.633085632Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 23:42:26.633477 containerd[1890]: time="2025-09-09T23:42:26.633437960Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 23:42:26.633557 containerd[1890]: time="2025-09-09T23:42:26.633537088Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 23:42:26.633615 containerd[1890]: time="2025-09-09T23:42:26.633599224Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 23:42:26.633655 containerd[1890]: time="2025-09-09T23:42:26.633643224Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 23:42:26.633787 containerd[1890]: time="2025-09-09T23:42:26.633770880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 23:42:26.634286 containerd[1890]: time="2025-09-09T23:42:26.634260136Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 23:42:26.634381 containerd[1890]: time="2025-09-09T23:42:26.634363848Z" level=info msg="skip loading 
plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 23:42:26.634423 containerd[1890]: time="2025-09-09T23:42:26.634410712Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 23:42:26.634496 containerd[1890]: time="2025-09-09T23:42:26.634484312Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 23:42:26.634709 containerd[1890]: time="2025-09-09T23:42:26.634692816Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 23:42:26.634822 containerd[1890]: time="2025-09-09T23:42:26.634808104Z" level=info msg="metadata content store policy set" policy=shared Sep 9 23:42:26.650011 containerd[1890]: time="2025-09-09T23:42:26.649972024Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 23:42:26.650132 containerd[1890]: time="2025-09-09T23:42:26.650120128Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 23:42:26.650228 containerd[1890]: time="2025-09-09T23:42:26.650217992Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 23:42:26.650305 containerd[1890]: time="2025-09-09T23:42:26.650294152Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 23:42:26.650362 containerd[1890]: time="2025-09-09T23:42:26.650350544Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 23:42:26.650405 containerd[1890]: time="2025-09-09T23:42:26.650394440Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 23:42:26.650444 containerd[1890]: time="2025-09-09T23:42:26.650433952Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 23:42:26.650489 containerd[1890]: time="2025-09-09T23:42:26.650477880Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 23:42:26.650540 containerd[1890]: time="2025-09-09T23:42:26.650528616Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 23:42:26.650581 containerd[1890]: time="2025-09-09T23:42:26.650571432Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 23:42:26.650635 containerd[1890]: time="2025-09-09T23:42:26.650622800Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 23:42:26.650683 containerd[1890]: time="2025-09-09T23:42:26.650673360Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 23:42:26.650852 containerd[1890]: time="2025-09-09T23:42:26.650831904Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 23:42:26.650924 containerd[1890]: time="2025-09-09T23:42:26.650912288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 23:42:26.650967 containerd[1890]: time="2025-09-09T23:42:26.650956936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 
23:42:26.651035 containerd[1890]: time="2025-09-09T23:42:26.651021952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 23:42:26.651078 containerd[1890]: time="2025-09-09T23:42:26.651067304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 23:42:26.651139 containerd[1890]: time="2025-09-09T23:42:26.651126624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 23:42:26.651189 containerd[1890]: time="2025-09-09T23:42:26.651177488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 23:42:26.651237 containerd[1890]: time="2025-09-09T23:42:26.651225088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 23:42:26.651285 containerd[1890]: time="2025-09-09T23:42:26.651273896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 23:42:26.651351 containerd[1890]: time="2025-09-09T23:42:26.651338648Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 23:42:26.651395 containerd[1890]: time="2025-09-09T23:42:26.651384096Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 23:42:26.651491 containerd[1890]: time="2025-09-09T23:42:26.651479608Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 23:42:26.651544 containerd[1890]: time="2025-09-09T23:42:26.651534168Z" level=info msg="Start snapshots syncer" Sep 9 23:42:26.651621 containerd[1890]: time="2025-09-09T23:42:26.651609616Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 23:42:26.651921 containerd[1890]: time="2025-09-09T23:42:26.651854160Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652073840Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652147760Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652244560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652259912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652267824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652274704Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652282184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652289032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652296744Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652324232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 23:42:26.652704 containerd[1890]: 
time="2025-09-09T23:42:26.652331152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652337592Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652359320Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 23:42:26.652704 containerd[1890]: time="2025-09-09T23:42:26.652368464Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 23:42:26.652902 containerd[1890]: time="2025-09-09T23:42:26.652374552Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 23:42:26.652902 containerd[1890]: time="2025-09-09T23:42:26.652380080Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 23:42:26.652902 containerd[1890]: time="2025-09-09T23:42:26.652384424Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 23:42:26.652902 containerd[1890]: time="2025-09-09T23:42:26.652393224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 23:42:26.652902 containerd[1890]: time="2025-09-09T23:42:26.652399712Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 23:42:26.652902 containerd[1890]: time="2025-09-09T23:42:26.652412056Z" level=info msg="runtime interface created" Sep 9 23:42:26.652902 containerd[1890]: time="2025-09-09T23:42:26.652415192Z" level=info msg="created NRI interface" Sep 9 23:42:26.652902 containerd[1890]: time="2025-09-09T23:42:26.652420424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 23:42:26.652902 containerd[1890]: time="2025-09-09T23:42:26.652428624Z" level=info msg="Connect containerd service" Sep 9 23:42:26.652902 containerd[1890]: time="2025-09-09T23:42:26.652447320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 23:42:26.653596 containerd[1890]: time="2025-09-09T23:42:26.653571512Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:42:26.931154 kubelet[2038]: E0909 23:42:26.931041 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:42:26.933396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:42:26.933506 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:42:26.934029 systemd[1]: kubelet.service: Consumed 549ms CPU time, 256.9M memory peak. 
Sep 9 23:42:27.471139 containerd[1890]: time="2025-09-09T23:42:27.471073552Z" level=info msg="Start subscribing containerd event" Sep 9 23:42:27.471427 containerd[1890]: time="2025-09-09T23:42:27.471360512Z" level=info msg="Start recovering state" Sep 9 23:42:27.471619 containerd[1890]: time="2025-09-09T23:42:27.471594808Z" level=info msg="Start event monitor" Sep 9 23:42:27.471619 containerd[1890]: time="2025-09-09T23:42:27.471621232Z" level=info msg="Start cni network conf syncer for default" Sep 9 23:42:27.471619 containerd[1890]: time="2025-09-09T23:42:27.471599736Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 23:42:27.471619 containerd[1890]: time="2025-09-09T23:42:27.471630240Z" level=info msg="Start streaming server" Sep 9 23:42:27.471619 containerd[1890]: time="2025-09-09T23:42:27.471659664Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 23:42:27.471619 containerd[1890]: time="2025-09-09T23:42:27.471665088Z" level=info msg="runtime interface starting up..." Sep 9 23:42:27.471619 containerd[1890]: time="2025-09-09T23:42:27.471669040Z" level=info msg="starting plugins..." Sep 9 23:42:27.471619 containerd[1890]: time="2025-09-09T23:42:27.471678792Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 23:42:27.471619 containerd[1890]: time="2025-09-09T23:42:27.471684504Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 23:42:27.471907 containerd[1890]: time="2025-09-09T23:42:27.471799128Z" level=info msg="containerd successfully booted in 0.845834s" Sep 9 23:42:27.472086 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 23:42:27.477267 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 23:42:27.485048 systemd[1]: Startup finished in 1.574s (kernel) + 14.932s (initrd) + 17.393s (userspace) = 33.900s. Sep 9 23:42:28.161149 login[2033]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 9 23:42:28.161377 login[2032]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:28.171042 systemd-logind[1867]: New session 2 of user core. Sep 9 23:42:28.171253 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 23:42:28.174077 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 23:42:28.205521 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 23:42:28.208320 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 23:42:28.249924 (systemd)[2067]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 23:42:28.251980 systemd-logind[1867]: New session c1 of user core. Sep 9 23:42:28.552915 systemd[2067]: Queued start job for default target default.target. Sep 9 23:42:28.570752 systemd[2067]: Created slice app.slice - User Application Slice. Sep 9 23:42:28.570775 systemd[2067]: Reached target paths.target - Paths. Sep 9 23:42:28.570808 systemd[2067]: Reached target timers.target - Timers. Sep 9 23:42:28.572031 systemd[2067]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 23:42:28.580463 systemd[2067]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 23:42:28.580605 systemd[2067]: Reached target sockets.target - Sockets. Sep 9 23:42:28.580688 systemd[2067]: Reached target basic.target - Basic System. Sep 9 23:42:28.580855 systemd[2067]: Reached target default.target - Main User Target. 
Sep 9 23:42:28.580938 systemd[2067]: Startup finished in 323ms. Sep 9 23:42:28.581145 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 23:42:28.588112 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 23:42:28.595033 waagent[2030]: 2025-09-09T23:42:28.594947Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 9 23:42:28.599017 waagent[2030]: 2025-09-09T23:42:28.598965Z INFO Daemon Daemon OS: flatcar 4426.0.0 Sep 9 23:42:28.602114 waagent[2030]: 2025-09-09T23:42:28.602083Z INFO Daemon Daemon Python: 3.11.13 Sep 9 23:42:28.606995 waagent[2030]: 2025-09-09T23:42:28.605333Z INFO Daemon Daemon Run daemon Sep 9 23:42:28.608329 waagent[2030]: 2025-09-09T23:42:28.608032Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4426.0.0' Sep 9 23:42:28.613912 waagent[2030]: 2025-09-09T23:42:28.613879Z INFO Daemon Daemon Using waagent for provisioning Sep 9 23:42:28.617336 waagent[2030]: 2025-09-09T23:42:28.617305Z INFO Daemon Daemon Activate resource disk Sep 9 23:42:28.620876 waagent[2030]: 2025-09-09T23:42:28.620848Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 9 23:42:28.628442 waagent[2030]: 2025-09-09T23:42:28.628397Z INFO Daemon Daemon Found device: None Sep 9 23:42:28.631331 waagent[2030]: 2025-09-09T23:42:28.631302Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 9 23:42:28.637147 waagent[2030]: 2025-09-09T23:42:28.637123Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 9 23:42:28.644919 waagent[2030]: 2025-09-09T23:42:28.644882Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 9 23:42:28.648661 waagent[2030]: 2025-09-09T23:42:28.648631Z INFO Daemon Daemon Running default provisioning handler Sep 9 23:42:28.656935 waagent[2030]: 2025-09-09T23:42:28.656891Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 9 23:42:28.665582 waagent[2030]: 2025-09-09T23:42:28.665546Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 9 23:42:28.672456 waagent[2030]: 2025-09-09T23:42:28.672286Z INFO Daemon Daemon cloud-init is enabled: False Sep 9 23:42:28.675741 waagent[2030]: 2025-09-09T23:42:28.675712Z INFO Daemon Daemon Copying ovf-env.xml Sep 9 23:42:28.788284 waagent[2030]: 2025-09-09T23:42:28.787142Z INFO Daemon Daemon Successfully mounted dvd Sep 9 23:42:28.813101 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 9 23:42:28.815008 waagent[2030]: 2025-09-09T23:42:28.814826Z INFO Daemon Daemon Detect protocol endpoint Sep 9 23:42:28.818099 waagent[2030]: 2025-09-09T23:42:28.818060Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 9 23:42:28.821669 waagent[2030]: 2025-09-09T23:42:28.821639Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 9 23:42:28.826046 waagent[2030]: 2025-09-09T23:42:28.826017Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 9 23:42:28.829477 waagent[2030]: 2025-09-09T23:42:28.829446Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 9 23:42:28.832726 waagent[2030]: 2025-09-09T23:42:28.832700Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 9 23:42:28.890604 waagent[2030]: 2025-09-09T23:42:28.890564Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 9 23:42:28.895135 waagent[2030]: 2025-09-09T23:42:28.895111Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 9 23:42:28.898541 waagent[2030]: 2025-09-09T23:42:28.898510Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 9 23:42:29.022645 waagent[2030]: 2025-09-09T23:42:29.022485Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 9 23:42:29.026950 waagent[2030]: 2025-09-09T23:42:29.026900Z INFO Daemon Daemon Forcing an update of the goal state. Sep 9 23:42:29.034126 waagent[2030]: 2025-09-09T23:42:29.034092Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 9 23:42:29.067795 waagent[2030]: 2025-09-09T23:42:29.067717Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 9 23:42:29.071997 waagent[2030]: 2025-09-09T23:42:29.071960Z INFO Daemon Sep 9 23:42:29.074159 waagent[2030]: 2025-09-09T23:42:29.074132Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 7066b148-c5ca-40fd-9f9f-29eee782f67c eTag: 5132779609331545901 source: Fabric] Sep 9 23:42:29.082033 waagent[2030]: 2025-09-09T23:42:29.082004Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 9 23:42:29.086416 waagent[2030]: 2025-09-09T23:42:29.086388Z INFO Daemon Sep 9 23:42:29.088333 waagent[2030]: 2025-09-09T23:42:29.088309Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 9 23:42:29.096794 waagent[2030]: 2025-09-09T23:42:29.096767Z INFO Daemon Daemon Downloading artifacts profile blob Sep 9 23:42:29.153016 waagent[2030]: 2025-09-09T23:42:29.152942Z INFO Daemon Downloaded certificate {'thumbprint': 'A107F4C54730244D3E836B25405BFCB49840CCA8', 'hasPrivateKey': True} Sep 9 23:42:29.159699 waagent[2030]: 2025-09-09T23:42:29.159663Z INFO Daemon Fetch goal state completed Sep 9 23:42:29.164100 login[2033]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:29.167976 systemd-logind[1867]: New session 1 of user core. Sep 9 23:42:29.169970 waagent[2030]: 2025-09-09T23:42:29.169191Z INFO Daemon Daemon Starting provisioning Sep 9 23:42:29.173086 waagent[2030]: 2025-09-09T23:42:29.173056Z INFO Daemon Daemon Handle ovf-env.xml. Sep 9 23:42:29.174107 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 23:42:29.181704 waagent[2030]: 2025-09-09T23:42:29.176946Z INFO Daemon Daemon Set hostname [ci-4426.0.0-n-044e8b6791] Sep 9 23:42:29.222413 waagent[2030]: 2025-09-09T23:42:29.222344Z INFO Daemon Daemon Publish hostname [ci-4426.0.0-n-044e8b6791] Sep 9 23:42:29.227338 waagent[2030]: 2025-09-09T23:42:29.226786Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 9 23:42:29.231023 waagent[2030]: 2025-09-09T23:42:29.230969Z INFO Daemon Daemon Primary interface is [eth0] Sep 9 23:42:29.239878 systemd-networkd[1714]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 23:42:29.239885 systemd-networkd[1714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 23:42:29.239919 systemd-networkd[1714]: eth0: DHCP lease lost Sep 9 23:42:29.241005 waagent[2030]: 2025-09-09T23:42:29.240554Z INFO Daemon Daemon Create user account if not exists Sep 9 23:42:29.244301 waagent[2030]: 2025-09-09T23:42:29.244263Z INFO Daemon Daemon User core already exists, skip useradd Sep 9 23:42:29.247945 waagent[2030]: 2025-09-09T23:42:29.247912Z INFO Daemon Daemon Configure sudoer Sep 9 23:42:29.257785 waagent[2030]: 2025-09-09T23:42:29.255143Z INFO Daemon Daemon Configure sshd Sep 9 23:42:29.261465 waagent[2030]: 2025-09-09T23:42:29.261421Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 9 23:42:29.262030 systemd-networkd[1714]: eth0: DHCPv4 address 10.200.20.13/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 9 23:42:29.269638 waagent[2030]: 2025-09-09T23:42:29.269597Z INFO Daemon Daemon Deploy ssh public key. Sep 9 23:42:30.385322 waagent[2030]: 2025-09-09T23:42:30.385273Z INFO Daemon Daemon Provisioning complete Sep 9 23:42:30.398382 waagent[2030]: 2025-09-09T23:42:30.398339Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 9 23:42:30.402569 waagent[2030]: 2025-09-09T23:42:30.402536Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 9 23:42:30.408690 waagent[2030]: 2025-09-09T23:42:30.408661Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 9 23:42:30.507737 waagent[2118]: 2025-09-09T23:42:30.507666Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 9 23:42:30.508094 waagent[2118]: 2025-09-09T23:42:30.507804Z INFO ExtHandler ExtHandler OS: flatcar 4426.0.0 Sep 9 23:42:30.508094 waagent[2118]: 2025-09-09T23:42:30.507842Z INFO ExtHandler ExtHandler Python: 3.11.13 Sep 9 23:42:30.508094 waagent[2118]: 2025-09-09T23:42:30.507877Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 9 23:42:30.570463 waagent[2118]: 2025-09-09T23:42:30.570385Z INFO ExtHandler ExtHandler Distro: flatcar-4426.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 9 23:42:30.570626 waagent[2118]: 2025-09-09T23:42:30.570600Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 9 23:42:30.570666 waagent[2118]: 2025-09-09T23:42:30.570650Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 9 23:42:30.576609 waagent[2118]: 2025-09-09T23:42:30.576560Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 9 23:42:30.583018 waagent[2118]: 2025-09-09T23:42:30.582969Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 9 23:42:30.583420 waagent[2118]: 2025-09-09T23:42:30.583390Z INFO ExtHandler Sep 9 23:42:30.583466 waagent[2118]: 2025-09-09T23:42:30.583451Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e08e0f51-7b12-4eb2-8bc9-4e4cc7b10872 eTag: 5132779609331545901 source: Fabric] Sep 9 23:42:30.583679 waagent[2118]: 2025-09-09T23:42:30.583656Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 9 23:42:30.584083 waagent[2118]: 2025-09-09T23:42:30.584055Z INFO ExtHandler Sep 9 23:42:30.584118 waagent[2118]: 2025-09-09T23:42:30.584103Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 9 23:42:30.587514 waagent[2118]: 2025-09-09T23:42:30.587487Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 9 23:42:30.648123 waagent[2118]: 2025-09-09T23:42:30.648016Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A107F4C54730244D3E836B25405BFCB49840CCA8', 'hasPrivateKey': True} Sep 9 23:42:30.648448 waagent[2118]: 2025-09-09T23:42:30.648414Z INFO ExtHandler Fetch goal state completed Sep 9 23:42:30.659838 waagent[2118]: 2025-09-09T23:42:30.659788Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025) Sep 9 23:42:30.663112 waagent[2118]: 2025-09-09T23:42:30.663071Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2118 Sep 9 23:42:30.663215 waagent[2118]: 2025-09-09T23:42:30.663193Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 9 23:42:30.663440 waagent[2118]: 2025-09-09T23:42:30.663415Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 9 23:42:30.664505 waagent[2118]: 2025-09-09T23:42:30.664470Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4426.0.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 9 23:42:30.664804 waagent[2118]: 2025-09-09T23:42:30.664775Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4426.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 9 23:42:30.664916 waagent[2118]: 2025-09-09T23:42:30.664895Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 9 23:42:30.665345 waagent[2118]: 2025-09-09T23:42:30.665317Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 9 23:42:30.755137 waagent[2118]: 2025-09-09T23:42:30.754733Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 9 23:42:30.755137 waagent[2118]: 2025-09-09T23:42:30.754929Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 9 23:42:30.759242 waagent[2118]: 2025-09-09T23:42:30.759211Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 9 23:42:30.763872 systemd[1]: Reload requested from client PID 2133 ('systemctl') (unit waagent.service)... Sep 9 23:42:30.764091 systemd[1]: Reloading... Sep 9 23:42:30.839010 zram_generator::config[2178]: No configuration found. Sep 9 23:42:30.983197 systemd[1]: Reloading finished in 218 ms. Sep 9 23:42:30.999979 waagent[2118]: 2025-09-09T23:42:30.999907Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 9 23:42:31.000115 waagent[2118]: 2025-09-09T23:42:31.000088Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 9 23:42:31.983976 waagent[2118]: 2025-09-09T23:42:31.983173Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 9 23:42:31.983976 waagent[2118]: 2025-09-09T23:42:31.983493Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 9 23:42:31.984293 waagent[2118]: 2025-09-09T23:42:31.984193Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 9 23:42:31.984293 waagent[2118]: 2025-09-09T23:42:31.984257Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 9 23:42:31.984432 waagent[2118]: 2025-09-09T23:42:31.984400Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 9 23:42:31.984533 waagent[2118]: 2025-09-09T23:42:31.984487Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 9 23:42:31.984664 waagent[2118]: 2025-09-09T23:42:31.984636Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 9 23:42:31.984664 waagent[2118]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 9 23:42:31.984664 waagent[2118]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 9 23:42:31.984664 waagent[2118]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 9 23:42:31.984664 waagent[2118]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 9 23:42:31.984664 waagent[2118]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 9 23:42:31.984664 waagent[2118]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 9 23:42:31.985101 waagent[2118]: 2025-09-09T23:42:31.985063Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 9 23:42:31.985460 waagent[2118]: 2025-09-09T23:42:31.985422Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 9 23:42:31.985637 waagent[2118]: 2025-09-09T23:42:31.985551Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 9 23:42:31.985637 waagent[2118]: 2025-09-09T23:42:31.985584Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 9 23:42:31.985843 waagent[2118]: 2025-09-09T23:42:31.985815Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 9 23:42:31.985943 waagent[2118]: 2025-09-09T23:42:31.985917Z INFO EnvHandler ExtHandler Configure routes Sep 9 23:42:31.985975 waagent[2118]: 2025-09-09T23:42:31.985962Z INFO EnvHandler ExtHandler Gateway:None Sep 9 23:42:31.986236 waagent[2118]: 2025-09-09T23:42:31.986183Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 9 23:42:31.986295 waagent[2118]: 2025-09-09T23:42:31.986249Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 9 23:42:31.986379 waagent[2118]: 2025-09-09T23:42:31.986337Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 9 23:42:31.986430 waagent[2118]: 2025-09-09T23:42:31.986408Z INFO EnvHandler ExtHandler Routes:None Sep 9 23:42:31.991936 waagent[2118]: 2025-09-09T23:42:31.991902Z INFO ExtHandler ExtHandler Sep 9 23:42:31.992091 waagent[2118]: 2025-09-09T23:42:31.992063Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e1903a7b-6bfb-43a7-8cd4-dcecf81aeff9 correlation 04a8b6a8-44ee-4437-a441-072f7c9728c6 created: 2025-09-09T23:41:11.414284Z] Sep 9 23:42:31.992435 waagent[2118]: 2025-09-09T23:42:31.992403Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
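The MonitorHandler routing table above is a raw /proc/net/route dump, in which the Destination, Gateway and Mask columns are little-endian hexadecimal IPv4 values. A short decoding sketch (illustrative only, using values copied from the dump) shows that the entries correspond to the default gateway 10.200.20.1, the WireServer 168.63.129.16 and the IMDS address 169.254.169.254 seen elsewhere in this log.

    import socket
    import struct

    def route_hex_to_ip(hex_le: str) -> str:
        """Decode a little-endian hex IPv4 field from /proc/net/route."""
        return socket.inet_ntoa(struct.pack("<I", int(hex_le, 16)))

    # Values copied from the routing table dump above.
    for label, value in [("default gateway", "0114C80A"),
                         ("WireServer route", "10813FA8"),
                         ("IMDS route", "FEA9FEA9")]:
        print(label, route_hex_to_ip(value))
    # default gateway 10.200.20.1
    # WireServer route 168.63.129.16
    # IMDS route 169.254.169.254
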
Sep 9 23:42:31.992929 waagent[2118]: 2025-09-09T23:42:31.992898Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Sep 9 23:42:32.021323 waagent[2118]: 2025-09-09T23:42:32.021276Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 9 23:42:32.021323 waagent[2118]: Try `iptables -h' or 'iptables --help' for more information.) Sep 9 23:42:32.022963 waagent[2118]: 2025-09-09T23:42:32.022924Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 30B5C06C-5605-4DB4-82EE-8E42250760B1;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 9 23:42:32.072016 waagent[2118]: 2025-09-09T23:42:32.071944Z INFO MonitorHandler ExtHandler Network interfaces: Sep 9 23:42:32.072016 waagent[2118]: Executing ['ip', '-a', '-o', 'link']: Sep 9 23:42:32.072016 waagent[2118]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 9 23:42:32.072016 waagent[2118]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:c1:2e brd ff:ff:ff:ff:ff:ff Sep 9 23:42:32.072016 waagent[2118]: 3: enP20405s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:c1:2e brd ff:ff:ff:ff:ff:ff\ altname enP20405p0s2 Sep 9 23:42:32.072016 waagent[2118]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 9 23:42:32.072016 waagent[2118]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 9 23:42:32.072016 waagent[2118]: 2: eth0 inet 10.200.20.13/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 9 23:42:32.072016 waagent[2118]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 9 23:42:32.072016 waagent[2118]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 9 23:42:32.072016 waagent[2118]: 2: eth0 inet6 fe80::222:48ff:febb:c12e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 9 23:42:32.125897 waagent[2118]: 2025-09-09T23:42:32.125853Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 9 23:42:32.125897 waagent[2118]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 9 23:42:32.125897 waagent[2118]: pkts bytes target prot opt in out source destination Sep 9 23:42:32.125897 waagent[2118]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 9 23:42:32.125897 waagent[2118]: pkts bytes target prot opt in out source destination Sep 9 23:42:32.125897 waagent[2118]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 9 23:42:32.125897 waagent[2118]: pkts bytes target prot opt in out source destination Sep 9 23:42:32.125897 waagent[2118]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 9 23:42:32.125897 waagent[2118]: 3 534 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 9 23:42:32.125897 waagent[2118]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 9 23:42:32.128881 waagent[2118]: 2025-09-09T23:42:32.128844Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 9 23:42:32.128881 waagent[2118]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 9 23:42:32.128881 waagent[2118]: pkts bytes target prot opt in out source destination Sep 9 23:42:32.128881 
waagent[2118]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 9 23:42:32.128881 waagent[2118]: pkts bytes target prot opt in out source destination Sep 9 23:42:32.128881 waagent[2118]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 9 23:42:32.128881 waagent[2118]: pkts bytes target prot opt in out source destination Sep 9 23:42:32.128881 waagent[2118]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 9 23:42:32.128881 waagent[2118]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 9 23:42:32.128881 waagent[2118]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 9 23:42:32.129361 waagent[2118]: 2025-09-09T23:42:32.129336Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 9 23:42:37.171886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 23:42:37.173154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:42:37.271020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:42:37.274042 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:42:37.398697 kubelet[2268]: E0909 23:42:37.398653 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:42:37.401522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:42:37.401629 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:42:37.402080 systemd[1]: kubelet.service: Consumed 112ms CPU time, 106M memory peak. Sep 9 23:42:47.421809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 23:42:47.422989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:42:47.765781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:42:47.777168 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:42:47.803729 kubelet[2283]: E0909 23:42:47.803669 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:42:47.805817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:42:47.806035 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:42:47.806469 systemd[1]: kubelet.service: Consumed 101ms CPU time, 107.1M memory peak. Sep 9 23:42:49.585404 chronyd[1846]: Selected source PHC0 Sep 9 23:42:57.921956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 23:42:57.923617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:42:58.062061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 23:42:58.064835 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:42:58.088972 kubelet[2298]: E0909 23:42:58.088910 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:42:58.091218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:42:58.091326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:42:58.091937 systemd[1]: kubelet.service: Consumed 101ms CPU time, 104.5M memory peak. Sep 9 23:42:58.179487 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 23:42:58.180746 systemd[1]: Started sshd@0-10.200.20.13:22-10.200.16.10:48534.service - OpenSSH per-connection server daemon (10.200.16.10:48534). Sep 9 23:42:59.158926 sshd[2306]: Accepted publickey for core from 10.200.16.10 port 48534 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:42:59.160044 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:42:59.163634 systemd-logind[1867]: New session 3 of user core. Sep 9 23:42:59.174144 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 23:42:59.580522 systemd[1]: Started sshd@1-10.200.20.13:22-10.200.16.10:48548.service - OpenSSH per-connection server daemon (10.200.16.10:48548). Sep 9 23:43:00.039160 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 48548 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:43:00.040346 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:00.043809 systemd-logind[1867]: New session 4 of user core. Sep 9 23:43:00.054261 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 23:43:00.381378 sshd[2315]: Connection closed by 10.200.16.10 port 48548 Sep 9 23:43:00.381885 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:00.385803 systemd[1]: sshd@1-10.200.20.13:22-10.200.16.10:48548.service: Deactivated successfully. Sep 9 23:43:00.387116 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 23:43:00.387666 systemd-logind[1867]: Session 4 logged out. Waiting for processes to exit. Sep 9 23:43:00.388668 systemd-logind[1867]: Removed session 4. Sep 9 23:43:00.462512 systemd[1]: Started sshd@2-10.200.20.13:22-10.200.16.10:36402.service - OpenSSH per-connection server daemon (10.200.16.10:36402). Sep 9 23:43:00.917567 sshd[2321]: Accepted publickey for core from 10.200.16.10 port 36402 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:43:00.918693 sshd-session[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:00.922102 systemd-logind[1867]: New session 5 of user core. Sep 9 23:43:00.933299 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 23:43:01.258797 sshd[2324]: Connection closed by 10.200.16.10 port 36402 Sep 9 23:43:01.259379 sshd-session[2321]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:01.262489 systemd[1]: sshd@2-10.200.20.13:22-10.200.16.10:36402.service: Deactivated successfully. Sep 9 23:43:01.264091 systemd[1]: session-5.scope: Deactivated successfully. 
Sep 9 23:43:01.264849 systemd-logind[1867]: Session 5 logged out. Waiting for processes to exit. Sep 9 23:43:01.266231 systemd-logind[1867]: Removed session 5. Sep 9 23:43:01.360186 systemd[1]: Started sshd@3-10.200.20.13:22-10.200.16.10:36408.service - OpenSSH per-connection server daemon (10.200.16.10:36408). Sep 9 23:43:01.852811 sshd[2330]: Accepted publickey for core from 10.200.16.10 port 36408 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:43:01.853966 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:01.857351 systemd-logind[1867]: New session 6 of user core. Sep 9 23:43:01.861098 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 23:43:02.204184 sshd[2333]: Connection closed by 10.200.16.10 port 36408 Sep 9 23:43:02.204705 sshd-session[2330]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:02.207801 systemd[1]: sshd@3-10.200.20.13:22-10.200.16.10:36408.service: Deactivated successfully. Sep 9 23:43:02.209450 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 23:43:02.210087 systemd-logind[1867]: Session 6 logged out. Waiting for processes to exit. Sep 9 23:43:02.211478 systemd-logind[1867]: Removed session 6. Sep 9 23:43:02.292009 systemd[1]: Started sshd@4-10.200.20.13:22-10.200.16.10:36420.service - OpenSSH per-connection server daemon (10.200.16.10:36420). Sep 9 23:43:02.783514 sshd[2339]: Accepted publickey for core from 10.200.16.10 port 36420 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:43:02.784605 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:02.787953 systemd-logind[1867]: New session 7 of user core. Sep 9 23:43:02.800136 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 23:43:03.271958 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 23:43:03.272209 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:43:03.299506 sudo[2343]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:03.371501 sshd[2342]: Connection closed by 10.200.16.10 port 36420 Sep 9 23:43:03.372214 sshd-session[2339]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:03.375623 systemd[1]: sshd@4-10.200.20.13:22-10.200.16.10:36420.service: Deactivated successfully. Sep 9 23:43:03.376932 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 23:43:03.378099 systemd-logind[1867]: Session 7 logged out. Waiting for processes to exit. Sep 9 23:43:03.379673 systemd-logind[1867]: Removed session 7. Sep 9 23:43:03.467297 systemd[1]: Started sshd@5-10.200.20.13:22-10.200.16.10:36428.service - OpenSSH per-connection server daemon (10.200.16.10:36428). Sep 9 23:43:03.918328 sshd[2349]: Accepted publickey for core from 10.200.16.10 port 36428 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:43:03.919457 sshd-session[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:03.923050 systemd-logind[1867]: New session 8 of user core. Sep 9 23:43:03.930153 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 9 23:43:04.173417 sudo[2354]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 23:43:04.173623 sudo[2354]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:43:04.180248 sudo[2354]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:04.184151 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 23:43:04.184357 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:43:04.191883 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:43:04.223592 augenrules[2376]: No rules Sep 9 23:43:04.224980 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:43:04.225313 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:43:04.226486 sudo[2353]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:04.316133 sshd[2352]: Connection closed by 10.200.16.10 port 36428 Sep 9 23:43:04.316505 sshd-session[2349]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:04.319632 systemd-logind[1867]: Session 8 logged out. Waiting for processes to exit. Sep 9 23:43:04.321117 systemd[1]: sshd@5-10.200.20.13:22-10.200.16.10:36428.service: Deactivated successfully. Sep 9 23:43:04.322640 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 23:43:04.324879 systemd-logind[1867]: Removed session 8. Sep 9 23:43:04.407882 systemd[1]: Started sshd@6-10.200.20.13:22-10.200.16.10:36438.service - OpenSSH per-connection server daemon (10.200.16.10:36438). Sep 9 23:43:04.897696 sshd[2385]: Accepted publickey for core from 10.200.16.10 port 36438 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:43:04.898795 sshd-session[2385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:43:04.902770 systemd-logind[1867]: New session 9 of user core. Sep 9 23:43:04.909166 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 23:43:05.171494 sudo[2389]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 23:43:05.171736 sudo[2389]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:43:06.883512 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 23:43:06.892229 (dockerd)[2406]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 23:43:07.840165 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 9 23:43:08.055004 dockerd[2406]: time="2025-09-09T23:43:08.053598453Z" level=info msg="Starting up" Sep 9 23:43:08.055906 dockerd[2406]: time="2025-09-09T23:43:08.055884177Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 23:43:08.064097 dockerd[2406]: time="2025-09-09T23:43:08.064067855Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 23:43:08.095063 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1242674009-merged.mount: Deactivated successfully. Sep 9 23:43:08.097675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 9 23:43:08.099073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:08.306102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 23:43:08.309160 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:43:08.348972 kubelet[2435]: E0909 23:43:08.348848 2435 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:43:08.351075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:43:08.351186 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:43:08.351620 systemd[1]: kubelet.service: Consumed 118ms CPU time, 105.2M memory peak. Sep 9 23:43:08.635070 dockerd[2406]: time="2025-09-09T23:43:08.634912652Z" level=info msg="Loading containers: start." Sep 9 23:43:08.715259 kernel: Initializing XFRM netlink socket Sep 9 23:43:09.217500 systemd-networkd[1714]: docker0: Link UP Sep 9 23:43:09.236808 dockerd[2406]: time="2025-09-09T23:43:09.236755677Z" level=info msg="Loading containers: done." Sep 9 23:43:09.258037 dockerd[2406]: time="2025-09-09T23:43:09.257967244Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 23:43:09.258207 dockerd[2406]: time="2025-09-09T23:43:09.258086353Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 23:43:09.258207 dockerd[2406]: time="2025-09-09T23:43:09.258179229Z" level=info msg="Initializing buildkit" Sep 9 23:43:09.307861 dockerd[2406]: time="2025-09-09T23:43:09.307814143Z" level=info msg="Completed buildkit initialization" Sep 9 23:43:09.313378 dockerd[2406]: time="2025-09-09T23:43:09.313276430Z" level=info msg="Daemon has completed initialization" Sep 9 23:43:09.313950 dockerd[2406]: time="2025-09-09T23:43:09.313678983Z" level=info msg="API listen on /run/docker.sock" Sep 9 23:43:09.313568 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 23:43:10.043624 containerd[1890]: time="2025-09-09T23:43:10.043586110Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 23:43:10.933293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967819038.mount: Deactivated successfully. Sep 9 23:43:11.588045 update_engine[1868]: I20250909 23:43:11.587885 1868 update_attempter.cc:509] Updating boot flags... 
Sep 9 23:43:12.009321 containerd[1890]: time="2025-09-09T23:43:12.009260917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:12.012504 containerd[1890]: time="2025-09-09T23:43:12.012278849Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352613" Sep 9 23:43:12.015693 containerd[1890]: time="2025-09-09T23:43:12.015664237Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:12.020494 containerd[1890]: time="2025-09-09T23:43:12.020463582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:12.020973 containerd[1890]: time="2025-09-09T23:43:12.020944171Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.9773217s" Sep 9 23:43:12.021037 containerd[1890]: time="2025-09-09T23:43:12.020976733Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 9 23:43:12.022302 containerd[1890]: time="2025-09-09T23:43:12.022276878Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 23:43:13.322837 containerd[1890]: time="2025-09-09T23:43:13.322784662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:13.325662 containerd[1890]: time="2025-09-09T23:43:13.325633306Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536977" Sep 9 23:43:13.328997 containerd[1890]: time="2025-09-09T23:43:13.328898769Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:13.333467 containerd[1890]: time="2025-09-09T23:43:13.333440055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:13.334096 containerd[1890]: time="2025-09-09T23:43:13.333979839Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.311677184s" Sep 9 23:43:13.334096 containerd[1890]: time="2025-09-09T23:43:13.334017657Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 9 23:43:13.334662 containerd[1890]: 
time="2025-09-09T23:43:13.334637956Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 23:43:14.470019 containerd[1890]: time="2025-09-09T23:43:14.469345411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:14.472394 containerd[1890]: time="2025-09-09T23:43:14.472366324Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292014" Sep 9 23:43:14.476138 containerd[1890]: time="2025-09-09T23:43:14.476115367Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:14.480121 containerd[1890]: time="2025-09-09T23:43:14.480094612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:14.480636 containerd[1890]: time="2025-09-09T23:43:14.480612323Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.145949126s" Sep 9 23:43:14.480712 containerd[1890]: time="2025-09-09T23:43:14.480701815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 9 23:43:14.481284 containerd[1890]: time="2025-09-09T23:43:14.481258473Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 23:43:15.428268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882327022.mount: Deactivated successfully. 
Sep 9 23:43:16.151872 containerd[1890]: time="2025-09-09T23:43:16.151815037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:16.155944 containerd[1890]: time="2025-09-09T23:43:16.155876110Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199959" Sep 9 23:43:16.158897 containerd[1890]: time="2025-09-09T23:43:16.158406033Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:16.162710 containerd[1890]: time="2025-09-09T23:43:16.162685540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:16.163463 containerd[1890]: time="2025-09-09T23:43:16.163443262Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.682159869s" Sep 9 23:43:16.163573 containerd[1890]: time="2025-09-09T23:43:16.163558716Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 9 23:43:16.164044 containerd[1890]: time="2025-09-09T23:43:16.164023889Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 23:43:16.855722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount653287388.mount: Deactivated successfully. 
Sep 9 23:43:17.716702 containerd[1890]: time="2025-09-09T23:43:17.716043427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:17.719932 containerd[1890]: time="2025-09-09T23:43:17.719903987Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Sep 9 23:43:17.724193 containerd[1890]: time="2025-09-09T23:43:17.724167117Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:17.728571 containerd[1890]: time="2025-09-09T23:43:17.728533203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:17.729263 containerd[1890]: time="2025-09-09T23:43:17.729233331Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.565184361s" Sep 9 23:43:17.729263 containerd[1890]: time="2025-09-09T23:43:17.729262260Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 9 23:43:17.730350 containerd[1890]: time="2025-09-09T23:43:17.730183638Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 23:43:18.285420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2362041434.mount: Deactivated successfully. 
Sep 9 23:43:18.308030 containerd[1890]: time="2025-09-09T23:43:18.307670982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:43:18.311395 containerd[1890]: time="2025-09-09T23:43:18.311230800Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 9 23:43:18.315575 containerd[1890]: time="2025-09-09T23:43:18.315543668Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:43:18.320784 containerd[1890]: time="2025-09-09T23:43:18.320737232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:43:18.321409 containerd[1890]: time="2025-09-09T23:43:18.321108937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 590.899778ms" Sep 9 23:43:18.321409 containerd[1890]: time="2025-09-09T23:43:18.321133026Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 23:43:18.321602 containerd[1890]: time="2025-09-09T23:43:18.321581687Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 23:43:18.421754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 9 23:43:18.424041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:18.527676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:18.530602 (kubelet)[2824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:43:18.641780 kubelet[2824]: E0909 23:43:18.641649 2824 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:43:18.643945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:43:18.644063 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:43:18.644575 systemd[1]: kubelet.service: Consumed 104ms CPU time, 107.3M memory peak. Sep 9 23:43:19.365229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811194038.mount: Deactivated successfully. 
Sep 9 23:43:21.580400 containerd[1890]: time="2025-09-09T23:43:21.580349749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:21.583245 containerd[1890]: time="2025-09-09T23:43:21.583209264Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465295" Sep 9 23:43:21.587271 containerd[1890]: time="2025-09-09T23:43:21.587243239Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:21.591941 containerd[1890]: time="2025-09-09T23:43:21.591281215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:21.591941 containerd[1890]: time="2025-09-09T23:43:21.591833776Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.270229089s" Sep 9 23:43:21.591941 containerd[1890]: time="2025-09-09T23:43:21.591859985Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 9 23:43:24.955408 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:24.955525 systemd[1]: kubelet.service: Consumed 104ms CPU time, 107.3M memory peak. Sep 9 23:43:24.957394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:24.981053 systemd[1]: Reload requested from client PID 2916 ('systemctl') (unit session-9.scope)... Sep 9 23:43:24.981066 systemd[1]: Reloading... Sep 9 23:43:25.087154 zram_generator::config[2981]: No configuration found. Sep 9 23:43:25.225894 systemd[1]: Reloading finished in 244 ms. Sep 9 23:43:25.270366 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 23:43:25.270590 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 23:43:25.270900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:25.271030 systemd[1]: kubelet.service: Consumed 76ms CPU time, 95M memory peak. Sep 9 23:43:25.272355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:25.470060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:25.476216 (kubelet)[3030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:43:25.501131 kubelet[3030]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:43:25.501131 kubelet[3030]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:43:25.501131 kubelet[3030]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:43:25.501483 kubelet[3030]: I0909 23:43:25.501171 3030 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:43:26.419552 kubelet[3030]: I0909 23:43:26.419511 3030 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 23:43:26.420934 kubelet[3030]: I0909 23:43:26.419704 3030 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:43:26.420934 kubelet[3030]: I0909 23:43:26.419894 3030 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 23:43:27.346807 kubelet[3030]: E0909 23:43:26.961337 3030 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 23:43:27.346807 kubelet[3030]: I0909 23:43:26.962120 3030 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:43:27.346807 kubelet[3030]: I0909 23:43:26.993474 3030 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:43:27.346807 kubelet[3030]: I0909 23:43:26.997119 3030 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 23:43:27.346807 kubelet[3030]: I0909 23:43:26.998149 3030 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:43:27.347213 kubelet[3030]: I0909 23:43:26.998180 3030 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.0.0-n-044e8b6791","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:43:27.347213 kubelet[3030]: I0909 23:43:26.998305 3030 topology_manager.go:138] 
"Creating topology manager with none policy" Sep 9 23:43:27.347213 kubelet[3030]: I0909 23:43:26.998311 3030 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 23:43:27.347213 kubelet[3030]: I0909 23:43:27.346691 3030 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:43:27.349514 kubelet[3030]: I0909 23:43:27.349490 3030 kubelet.go:480] "Attempting to sync node with API server" Sep 9 23:43:27.349514 kubelet[3030]: I0909 23:43:27.349517 3030 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:43:27.349594 kubelet[3030]: I0909 23:43:27.349545 3030 kubelet.go:386] "Adding apiserver pod source" Sep 9 23:43:27.351990 kubelet[3030]: I0909 23:43:27.350520 3030 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:43:27.357493 kubelet[3030]: E0909 23:43:27.357465 3030 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 23:43:27.357576 kubelet[3030]: E0909 23:43:27.357507 3030 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-n-044e8b6791&limit=500&resourceVersion=0\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 23:43:27.357990 kubelet[3030]: I0909 23:43:27.357964 3030 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 23:43:27.358409 kubelet[3030]: I0909 23:43:27.358392 3030 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 23:43:27.358461 kubelet[3030]: W0909 23:43:27.358449 3030 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 9 23:43:27.360176 kubelet[3030]: I0909 23:43:27.360158 3030 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 23:43:27.360234 kubelet[3030]: I0909 23:43:27.360201 3030 server.go:1289] "Started kubelet" Sep 9 23:43:27.360285 kubelet[3030]: I0909 23:43:27.360263 3030 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:43:27.360825 kubelet[3030]: I0909 23:43:27.360809 3030 server.go:317] "Adding debug handlers to kubelet server" Sep 9 23:43:27.362273 kubelet[3030]: I0909 23:43:27.362209 3030 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:43:27.362529 kubelet[3030]: I0909 23:43:27.362510 3030 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:43:27.365017 kubelet[3030]: I0909 23:43:27.364588 3030 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:43:27.365864 kubelet[3030]: E0909 23:43:27.364931 3030 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426.0.0-n-044e8b6791.1863c1d6ac293eae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.0.0-n-044e8b6791,UID:ci-4426.0.0-n-044e8b6791,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.0.0-n-044e8b6791,},FirstTimestamp:2025-09-09 23:43:27.360171694 +0000 UTC m=+1.880405399,LastTimestamp:2025-09-09 23:43:27.360171694 +0000 UTC m=+1.880405399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.0.0-n-044e8b6791,}" Sep 9 23:43:27.367158 kubelet[3030]: I0909 23:43:27.367139 3030 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:43:27.368892 kubelet[3030]: E0909 23:43:27.368862 3030 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-044e8b6791\" not found" Sep 9 23:43:27.369745 kubelet[3030]: I0909 23:43:27.369103 3030 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:43:27.369745 kubelet[3030]: I0909 23:43:27.369261 3030 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:43:27.369745 kubelet[3030]: I0909 23:43:27.369309 3030 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:43:27.369974 kubelet[3030]: E0909 23:43:27.369952 3030 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 23:43:27.370225 kubelet[3030]: I0909 23:43:27.370207 3030 factory.go:223] Registration of the systemd container factory successfully Sep 9 23:43:27.370359 kubelet[3030]: I0909 23:43:27.370344 3030 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:43:27.370688 kubelet[3030]: E0909 23:43:27.370670 3030 kubelet.go:1600] "Image 
garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:43:27.371336 kubelet[3030]: E0909 23:43:27.371311 3030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-n-044e8b6791?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="200ms" Sep 9 23:43:27.371741 kubelet[3030]: I0909 23:43:27.371727 3030 factory.go:223] Registration of the containerd container factory successfully Sep 9 23:43:27.398814 kubelet[3030]: I0909 23:43:27.398744 3030 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:43:27.398814 kubelet[3030]: I0909 23:43:27.398761 3030 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:43:27.398814 kubelet[3030]: I0909 23:43:27.398780 3030 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:43:27.469445 kubelet[3030]: E0909 23:43:27.469400 3030 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-044e8b6791\" not found" Sep 9 23:43:27.496268 kubelet[3030]: I0909 23:43:27.496237 3030 policy_none.go:49] "None policy: Start" Sep 9 23:43:27.496348 kubelet[3030]: I0909 23:43:27.496282 3030 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:43:27.496348 kubelet[3030]: I0909 23:43:27.496296 3030 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:43:27.536787 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 23:43:27.545333 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 23:43:27.549238 kubelet[3030]: I0909 23:43:27.548880 3030 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 23:43:27.549693 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 23:43:27.550077 kubelet[3030]: I0909 23:43:27.549705 3030 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 23:43:27.550077 kubelet[3030]: I0909 23:43:27.549720 3030 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 23:43:27.550077 kubelet[3030]: I0909 23:43:27.549738 3030 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 23:43:27.550077 kubelet[3030]: I0909 23:43:27.549742 3030 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 23:43:27.550077 kubelet[3030]: E0909 23:43:27.549774 3030 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:43:27.552641 kubelet[3030]: E0909 23:43:27.552609 3030 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 23:43:27.560784 kubelet[3030]: E0909 23:43:27.560703 3030 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 23:43:27.560901 kubelet[3030]: I0909 23:43:27.560863 3030 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:43:27.560901 kubelet[3030]: I0909 23:43:27.560876 3030 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:43:27.562011 kubelet[3030]: I0909 23:43:27.561538 3030 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:43:27.562307 kubelet[3030]: E0909 23:43:27.562278 3030 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 23:43:27.562655 kubelet[3030]: E0909 23:43:27.562311 3030 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426.0.0-n-044e8b6791\" not found" Sep 9 23:43:27.572449 kubelet[3030]: E0909 23:43:27.572415 3030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-n-044e8b6791?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="400ms" Sep 9 23:43:27.662328 kubelet[3030]: I0909 23:43:27.662242 3030 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.663184 kubelet[3030]: E0909 23:43:27.663142 3030 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.664321 systemd[1]: Created slice kubepods-burstable-pod67dcd4873e4581b9ded094e75a853e4c.slice - libcontainer container kubepods-burstable-pod67dcd4873e4581b9ded094e75a853e4c.slice. 
Sep 9 23:43:27.673444 kubelet[3030]: I0909 23:43:27.673407 3030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0cef700657afcba278914a0c4a7c4c5c-ca-certs\") pod \"kube-controller-manager-ci-4426.0.0-n-044e8b6791\" (UID: \"0cef700657afcba278914a0c4a7c4c5c\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.673444 kubelet[3030]: I0909 23:43:27.673428 3030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67dcd4873e4581b9ded094e75a853e4c-kubeconfig\") pod \"kube-scheduler-ci-4426.0.0-n-044e8b6791\" (UID: \"67dcd4873e4581b9ded094e75a853e4c\") " pod="kube-system/kube-scheduler-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.673525 kubelet[3030]: I0909 23:43:27.673450 3030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af10b0c178fa39fccdb920ef87cf74f2-ca-certs\") pod \"kube-apiserver-ci-4426.0.0-n-044e8b6791\" (UID: \"af10b0c178fa39fccdb920ef87cf74f2\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.673525 kubelet[3030]: I0909 23:43:27.673462 3030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af10b0c178fa39fccdb920ef87cf74f2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.0.0-n-044e8b6791\" (UID: \"af10b0c178fa39fccdb920ef87cf74f2\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.673525 kubelet[3030]: I0909 23:43:27.673474 3030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0cef700657afcba278914a0c4a7c4c5c-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.0.0-n-044e8b6791\" (UID: \"0cef700657afcba278914a0c4a7c4c5c\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.673525 kubelet[3030]: I0909 23:43:27.673488 3030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0cef700657afcba278914a0c4a7c4c5c-k8s-certs\") pod \"kube-controller-manager-ci-4426.0.0-n-044e8b6791\" (UID: \"0cef700657afcba278914a0c4a7c4c5c\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.673525 kubelet[3030]: I0909 23:43:27.673496 3030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cef700657afcba278914a0c4a7c4c5c-kubeconfig\") pod \"kube-controller-manager-ci-4426.0.0-n-044e8b6791\" (UID: \"0cef700657afcba278914a0c4a7c4c5c\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.673741 kubelet[3030]: I0909 23:43:27.673505 3030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cef700657afcba278914a0c4a7c4c5c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.0.0-n-044e8b6791\" (UID: \"0cef700657afcba278914a0c4a7c4c5c\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.673741 kubelet[3030]: I0909 23:43:27.673514 3030 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af10b0c178fa39fccdb920ef87cf74f2-k8s-certs\") pod \"kube-apiserver-ci-4426.0.0-n-044e8b6791\" (UID: \"af10b0c178fa39fccdb920ef87cf74f2\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.674523 kubelet[3030]: E0909 23:43:27.674498 3030 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-044e8b6791\" not found" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.677869 systemd[1]: Created slice kubepods-burstable-podaf10b0c178fa39fccdb920ef87cf74f2.slice - libcontainer container kubepods-burstable-podaf10b0c178fa39fccdb920ef87cf74f2.slice. Sep 9 23:43:27.687763 kubelet[3030]: E0909 23:43:27.687739 3030 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-044e8b6791\" not found" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.690105 systemd[1]: Created slice kubepods-burstable-pod0cef700657afcba278914a0c4a7c4c5c.slice - libcontainer container kubepods-burstable-pod0cef700657afcba278914a0c4a7c4c5c.slice. Sep 9 23:43:27.691357 kubelet[3030]: E0909 23:43:27.691335 3030 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-044e8b6791\" not found" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.864997 kubelet[3030]: I0909 23:43:27.864963 3030 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.865319 kubelet[3030]: E0909 23:43:27.865295 3030 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:27.973705 kubelet[3030]: E0909 23:43:27.973667 3030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-n-044e8b6791?timeout=10s\": dial tcp 10.200.20.13:6443: connect: connection refused" interval="800ms" Sep 9 23:43:27.975725 containerd[1890]: time="2025-09-09T23:43:27.975475899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.0.0-n-044e8b6791,Uid:67dcd4873e4581b9ded094e75a853e4c,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:27.988740 containerd[1890]: time="2025-09-09T23:43:27.988546354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.0.0-n-044e8b6791,Uid:af10b0c178fa39fccdb920ef87cf74f2,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:27.992140 containerd[1890]: time="2025-09-09T23:43:27.992117108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.0.0-n-044e8b6791,Uid:0cef700657afcba278914a0c4a7c4c5c,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:28.075576 containerd[1890]: time="2025-09-09T23:43:28.075543362Z" level=info msg="connecting to shim 71edc11f88a8d59eb4bf66339be97f607da190d76566687e8ed270f8f902cb48" address="unix:///run/containerd/s/8fb90e829ed041131fb5293a44509559df546697d42614d936d3ecdcb4991404" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:28.078970 containerd[1890]: time="2025-09-09T23:43:28.078670040Z" level=info msg="connecting to shim fdbf80cf2b49abbbed7cc9bd263b00fd17abd349f6b723b29219043ddb4a8d62" 
address="unix:///run/containerd/s/5cd03bbf8db8d844dc16722025c5ea91c7f8f188f75771c4f4cb564f9bbe5987" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:28.094885 containerd[1890]: time="2025-09-09T23:43:28.094844628Z" level=info msg="connecting to shim 1f358a4251f302041f099ab6a1eca065567e645ed61a18da5f37adde3c754f54" address="unix:///run/containerd/s/389945bfdfcb827f7eccefa94262b6cfa97802aeb1468775f537321542dc9a7e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:28.104139 systemd[1]: Started cri-containerd-71edc11f88a8d59eb4bf66339be97f607da190d76566687e8ed270f8f902cb48.scope - libcontainer container 71edc11f88a8d59eb4bf66339be97f607da190d76566687e8ed270f8f902cb48. Sep 9 23:43:28.111007 systemd[1]: Started cri-containerd-fdbf80cf2b49abbbed7cc9bd263b00fd17abd349f6b723b29219043ddb4a8d62.scope - libcontainer container fdbf80cf2b49abbbed7cc9bd263b00fd17abd349f6b723b29219043ddb4a8d62. Sep 9 23:43:28.131122 systemd[1]: Started cri-containerd-1f358a4251f302041f099ab6a1eca065567e645ed61a18da5f37adde3c754f54.scope - libcontainer container 1f358a4251f302041f099ab6a1eca065567e645ed61a18da5f37adde3c754f54. Sep 9 23:43:28.171680 containerd[1890]: time="2025-09-09T23:43:28.171640430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.0.0-n-044e8b6791,Uid:af10b0c178fa39fccdb920ef87cf74f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdbf80cf2b49abbbed7cc9bd263b00fd17abd349f6b723b29219043ddb4a8d62\"" Sep 9 23:43:28.179138 containerd[1890]: time="2025-09-09T23:43:28.179042357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.0.0-n-044e8b6791,Uid:67dcd4873e4581b9ded094e75a853e4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"71edc11f88a8d59eb4bf66339be97f607da190d76566687e8ed270f8f902cb48\"" Sep 9 23:43:28.182869 containerd[1890]: time="2025-09-09T23:43:28.182836072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.0.0-n-044e8b6791,Uid:0cef700657afcba278914a0c4a7c4c5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f358a4251f302041f099ab6a1eca065567e645ed61a18da5f37adde3c754f54\"" Sep 9 23:43:28.183135 containerd[1890]: time="2025-09-09T23:43:28.182933141Z" level=info msg="CreateContainer within sandbox \"fdbf80cf2b49abbbed7cc9bd263b00fd17abd349f6b723b29219043ddb4a8d62\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 23:43:28.188160 containerd[1890]: time="2025-09-09T23:43:28.188131960Z" level=info msg="CreateContainer within sandbox \"71edc11f88a8d59eb4bf66339be97f607da190d76566687e8ed270f8f902cb48\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 23:43:28.193892 containerd[1890]: time="2025-09-09T23:43:28.193862283Z" level=info msg="CreateContainer within sandbox \"1f358a4251f302041f099ab6a1eca065567e645ed61a18da5f37adde3c754f54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 23:43:28.213529 containerd[1890]: time="2025-09-09T23:43:28.213486419Z" level=info msg="Container 8d5b8536845e61d24e06ee3d57e0b56a10ba2a9327b6c9b54b84bb48fcd17011: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:28.217372 containerd[1890]: time="2025-09-09T23:43:28.217337097Z" level=info msg="Container 8b09aa3f6788c432835939558d32467c515e390e4c9bec4cd27f9774b3181f15: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:28.238761 containerd[1890]: time="2025-09-09T23:43:28.238162695Z" level=info msg="CreateContainer within sandbox \"fdbf80cf2b49abbbed7cc9bd263b00fd17abd349f6b723b29219043ddb4a8d62\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8d5b8536845e61d24e06ee3d57e0b56a10ba2a9327b6c9b54b84bb48fcd17011\"" Sep 9 23:43:28.238993 containerd[1890]: time="2025-09-09T23:43:28.238963356Z" level=info msg="StartContainer for \"8d5b8536845e61d24e06ee3d57e0b56a10ba2a9327b6c9b54b84bb48fcd17011\"" Sep 9 23:43:28.239836 containerd[1890]: time="2025-09-09T23:43:28.239806458Z" level=info msg="connecting to shim 8d5b8536845e61d24e06ee3d57e0b56a10ba2a9327b6c9b54b84bb48fcd17011" address="unix:///run/containerd/s/5cd03bbf8db8d844dc16722025c5ea91c7f8f188f75771c4f4cb564f9bbe5987" protocol=ttrpc version=3 Sep 9 23:43:28.250922 containerd[1890]: time="2025-09-09T23:43:28.250878367Z" level=info msg="Container f176c1ae67a7fbf7cbdb5b98db6e49f92e6f0d691b27c54605f5dfbc7ddc2145: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:28.255128 systemd[1]: Started cri-containerd-8d5b8536845e61d24e06ee3d57e0b56a10ba2a9327b6c9b54b84bb48fcd17011.scope - libcontainer container 8d5b8536845e61d24e06ee3d57e0b56a10ba2a9327b6c9b54b84bb48fcd17011. Sep 9 23:43:28.259395 containerd[1890]: time="2025-09-09T23:43:28.259362255Z" level=info msg="CreateContainer within sandbox \"71edc11f88a8d59eb4bf66339be97f607da190d76566687e8ed270f8f902cb48\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8b09aa3f6788c432835939558d32467c515e390e4c9bec4cd27f9774b3181f15\"" Sep 9 23:43:28.259877 containerd[1890]: time="2025-09-09T23:43:28.259856197Z" level=info msg="StartContainer for \"8b09aa3f6788c432835939558d32467c515e390e4c9bec4cd27f9774b3181f15\"" Sep 9 23:43:28.260669 containerd[1890]: time="2025-09-09T23:43:28.260641472Z" level=info msg="connecting to shim 8b09aa3f6788c432835939558d32467c515e390e4c9bec4cd27f9774b3181f15" address="unix:///run/containerd/s/8fb90e829ed041131fb5293a44509559df546697d42614d936d3ecdcb4991404" protocol=ttrpc version=3 Sep 9 23:43:28.268683 kubelet[3030]: I0909 23:43:28.268327 3030 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:28.268953 kubelet[3030]: E0909 23:43:28.268781 3030 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.13:6443/api/v1/nodes\": dial tcp 10.200.20.13:6443: connect: connection refused" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:28.270457 containerd[1890]: time="2025-09-09T23:43:28.270332327Z" level=info msg="CreateContainer within sandbox \"1f358a4251f302041f099ab6a1eca065567e645ed61a18da5f37adde3c754f54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f176c1ae67a7fbf7cbdb5b98db6e49f92e6f0d691b27c54605f5dfbc7ddc2145\"" Sep 9 23:43:28.271168 containerd[1890]: time="2025-09-09T23:43:28.271144308Z" level=info msg="StartContainer for \"f176c1ae67a7fbf7cbdb5b98db6e49f92e6f0d691b27c54605f5dfbc7ddc2145\"" Sep 9 23:43:28.275192 containerd[1890]: time="2025-09-09T23:43:28.275161297Z" level=info msg="connecting to shim f176c1ae67a7fbf7cbdb5b98db6e49f92e6f0d691b27c54605f5dfbc7ddc2145" address="unix:///run/containerd/s/389945bfdfcb827f7eccefa94262b6cfa97802aeb1468775f537321542dc9a7e" protocol=ttrpc version=3 Sep 9 23:43:28.295228 systemd[1]: Started cri-containerd-8b09aa3f6788c432835939558d32467c515e390e4c9bec4cd27f9774b3181f15.scope - libcontainer container 8b09aa3f6788c432835939558d32467c515e390e4c9bec4cd27f9774b3181f15. 
Sep 9 23:43:28.299032 systemd[1]: Started cri-containerd-f176c1ae67a7fbf7cbdb5b98db6e49f92e6f0d691b27c54605f5dfbc7ddc2145.scope - libcontainer container f176c1ae67a7fbf7cbdb5b98db6e49f92e6f0d691b27c54605f5dfbc7ddc2145. Sep 9 23:43:28.314581 containerd[1890]: time="2025-09-09T23:43:28.314458803Z" level=info msg="StartContainer for \"8d5b8536845e61d24e06ee3d57e0b56a10ba2a9327b6c9b54b84bb48fcd17011\" returns successfully" Sep 9 23:43:28.349673 containerd[1890]: time="2025-09-09T23:43:28.349637731Z" level=info msg="StartContainer for \"f176c1ae67a7fbf7cbdb5b98db6e49f92e6f0d691b27c54605f5dfbc7ddc2145\" returns successfully" Sep 9 23:43:28.381399 containerd[1890]: time="2025-09-09T23:43:28.381332669Z" level=info msg="StartContainer for \"8b09aa3f6788c432835939558d32467c515e390e4c9bec4cd27f9774b3181f15\" returns successfully" Sep 9 23:43:28.562241 kubelet[3030]: E0909 23:43:28.562001 3030 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-044e8b6791\" not found" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:28.563482 kubelet[3030]: E0909 23:43:28.563300 3030 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-044e8b6791\" not found" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:28.565525 kubelet[3030]: E0909 23:43:28.565506 3030 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-044e8b6791\" not found" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:29.072226 kubelet[3030]: I0909 23:43:29.072198 3030 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:29.569031 kubelet[3030]: E0909 23:43:29.568879 3030 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-044e8b6791\" not found" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:29.569031 kubelet[3030]: E0909 23:43:29.568919 3030 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-n-044e8b6791\" not found" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:29.934678 kubelet[3030]: E0909 23:43:29.934643 3030 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4426.0.0-n-044e8b6791\" not found" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:30.092787 kubelet[3030]: I0909 23:43:30.092751 3030 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:30.171795 kubelet[3030]: I0909 23:43:30.171754 3030 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:30.185164 kubelet[3030]: E0909 23:43:30.184822 3030 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.0.0-n-044e8b6791\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:30.185164 kubelet[3030]: I0909 23:43:30.184852 3030 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:30.187427 kubelet[3030]: E0909 23:43:30.187395 3030 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.0.0-n-044e8b6791\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4426.0.0-n-044e8b6791" Sep 9 
23:43:30.187427 kubelet[3030]: I0909 23:43:30.187418 3030 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:30.189363 kubelet[3030]: E0909 23:43:30.189336 3030 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4426.0.0-n-044e8b6791\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:30.357253 kubelet[3030]: I0909 23:43:30.357216 3030 apiserver.go:52] "Watching apiserver" Sep 9 23:43:30.369418 kubelet[3030]: I0909 23:43:30.369363 3030 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:43:30.724685 kubelet[3030]: I0909 23:43:30.724526 3030 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:30.726339 kubelet[3030]: E0909 23:43:30.726313 3030 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4426.0.0-n-044e8b6791\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:32.529063 systemd[1]: Reload requested from client PID 3307 ('systemctl') (unit session-9.scope)... Sep 9 23:43:32.529366 systemd[1]: Reloading... Sep 9 23:43:32.620012 zram_generator::config[3357]: No configuration found. Sep 9 23:43:32.779077 systemd[1]: Reloading finished in 249 ms. Sep 9 23:43:32.812768 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:32.824782 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:43:32.825161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:32.825303 systemd[1]: kubelet.service: Consumed 1.237s CPU time, 127.6M memory peak. Sep 9 23:43:32.827382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:43:33.084038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:43:33.095872 (kubelet)[3418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:43:33.121776 kubelet[3418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:43:33.122201 kubelet[3418]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:43:33.122201 kubelet[3418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 23:43:33.122318 kubelet[3418]: I0909 23:43:33.122280 3418 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:43:33.128044 kubelet[3418]: I0909 23:43:33.126969 3418 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 23:43:33.128044 kubelet[3418]: I0909 23:43:33.127027 3418 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:43:33.128044 kubelet[3418]: I0909 23:43:33.127181 3418 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 23:43:33.128312 kubelet[3418]: I0909 23:43:33.128297 3418 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 23:43:33.131056 kubelet[3418]: I0909 23:43:33.131033 3418 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:43:33.133240 kubelet[3418]: I0909 23:43:33.133226 3418 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:43:33.138572 kubelet[3418]: I0909 23:43:33.138523 3418 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 23:43:33.138905 kubelet[3418]: I0909 23:43:33.138874 3418 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:43:33.139203 kubelet[3418]: I0909 23:43:33.138963 3418 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.0.0-n-044e8b6791","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:43:33.139317 kubelet[3418]: I0909 23:43:33.139304 3418 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:43:33.139372 kubelet[3418]: I0909 23:43:33.139364 3418 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 23:43:33.139453 kubelet[3418]: I0909 23:43:33.139444 3418 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:43:33.139650 kubelet[3418]: I0909 
23:43:33.139636 3418 kubelet.go:480] "Attempting to sync node with API server" Sep 9 23:43:33.139709 kubelet[3418]: I0909 23:43:33.139701 3418 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:43:33.139768 kubelet[3418]: I0909 23:43:33.139762 3418 kubelet.go:386] "Adding apiserver pod source" Sep 9 23:43:33.139821 kubelet[3418]: I0909 23:43:33.139812 3418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:43:33.141377 kubelet[3418]: I0909 23:43:33.141357 3418 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 23:43:33.141752 kubelet[3418]: I0909 23:43:33.141733 3418 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 23:43:33.143566 kubelet[3418]: I0909 23:43:33.143540 3418 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 23:43:33.143566 kubelet[3418]: I0909 23:43:33.143572 3418 server.go:1289] "Started kubelet" Sep 9 23:43:33.144888 kubelet[3418]: I0909 23:43:33.144868 3418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:43:33.145554 kubelet[3418]: I0909 23:43:33.145454 3418 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:43:33.146124 kubelet[3418]: I0909 23:43:33.146107 3418 server.go:317] "Adding debug handlers to kubelet server" Sep 9 23:43:33.150724 kubelet[3418]: I0909 23:43:33.150677 3418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:43:33.150867 kubelet[3418]: I0909 23:43:33.150851 3418 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:43:33.158331 kubelet[3418]: I0909 23:43:33.157876 3418 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:43:33.160023 kubelet[3418]: I0909 23:43:33.160002 3418 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:43:33.160194 kubelet[3418]: E0909 23:43:33.160172 3418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-n-044e8b6791\" not found" Sep 9 23:43:33.161817 kubelet[3418]: I0909 23:43:33.161787 3418 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:43:33.161892 kubelet[3418]: I0909 23:43:33.161886 3418 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:43:33.163688 kubelet[3418]: I0909 23:43:33.163506 3418 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:43:33.165483 kubelet[3418]: I0909 23:43:33.165140 3418 factory.go:223] Registration of the containerd container factory successfully Sep 9 23:43:33.165483 kubelet[3418]: I0909 23:43:33.165155 3418 factory.go:223] Registration of the systemd container factory successfully Sep 9 23:43:33.167158 kubelet[3418]: E0909 23:43:33.167131 3418 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:43:33.172711 kubelet[3418]: I0909 23:43:33.172529 3418 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 9 23:43:33.173884 kubelet[3418]: I0909 23:43:33.173849 3418 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 23:43:33.173884 kubelet[3418]: I0909 23:43:33.173879 3418 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 23:43:33.173958 kubelet[3418]: I0909 23:43:33.173896 3418 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 23:43:33.173958 kubelet[3418]: I0909 23:43:33.173900 3418 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 23:43:33.173958 kubelet[3418]: E0909 23:43:33.173935 3418 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:43:33.210797 kubelet[3418]: I0909 23:43:33.210769 3418 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:43:33.210797 kubelet[3418]: I0909 23:43:33.210786 3418 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:43:33.210943 kubelet[3418]: I0909 23:43:33.210816 3418 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:43:33.210960 kubelet[3418]: I0909 23:43:33.210942 3418 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 23:43:33.210960 kubelet[3418]: I0909 23:43:33.210950 3418 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 23:43:33.211015 kubelet[3418]: I0909 23:43:33.210964 3418 policy_none.go:49] "None policy: Start" Sep 9 23:43:33.211015 kubelet[3418]: I0909 23:43:33.210974 3418 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:43:33.211015 kubelet[3418]: I0909 23:43:33.211000 3418 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:43:33.211083 kubelet[3418]: I0909 23:43:33.211069 3418 state_mem.go:75] "Updated machine memory state" Sep 9 23:43:33.216487 kubelet[3418]: E0909 23:43:33.216247 3418 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 23:43:33.216773 kubelet[3418]: I0909 23:43:33.216673 3418 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:43:33.216773 kubelet[3418]: I0909 23:43:33.216694 3418 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:43:33.217402 kubelet[3418]: I0909 23:43:33.217369 3418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:43:33.219024 kubelet[3418]: E0909 23:43:33.218797 3418 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 23:43:33.276657 kubelet[3418]: I0909 23:43:33.275324 3418 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.276657 kubelet[3418]: I0909 23:43:33.275538 3418 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.276657 kubelet[3418]: I0909 23:43:33.275741 3418 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.284742 kubelet[3418]: I0909 23:43:33.284701 3418 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 9 23:43:33.292726 kubelet[3418]: I0909 23:43:33.292680 3418 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 9 23:43:33.293127 kubelet[3418]: I0909 23:43:33.292683 3418 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 9 23:43:33.318587 kubelet[3418]: I0909 23:43:33.318554 3418 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.332373 kubelet[3418]: I0909 23:43:33.332309 3418 kubelet_node_status.go:124] "Node was previously registered" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.332591 kubelet[3418]: I0909 23:43:33.332492 3418 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.364100 kubelet[3418]: I0909 23:43:33.363308 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cef700657afcba278914a0c4a7c4c5c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.0.0-n-044e8b6791\" (UID: \"0cef700657afcba278914a0c4a7c4c5c\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.364100 kubelet[3418]: I0909 23:43:33.363344 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af10b0c178fa39fccdb920ef87cf74f2-ca-certs\") pod \"kube-apiserver-ci-4426.0.0-n-044e8b6791\" (UID: \"af10b0c178fa39fccdb920ef87cf74f2\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.364100 kubelet[3418]: I0909 23:43:33.364028 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67dcd4873e4581b9ded094e75a853e4c-kubeconfig\") pod \"kube-scheduler-ci-4426.0.0-n-044e8b6791\" (UID: \"67dcd4873e4581b9ded094e75a853e4c\") " pod="kube-system/kube-scheduler-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.364256 kubelet[3418]: I0909 23:43:33.364134 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af10b0c178fa39fccdb920ef87cf74f2-k8s-certs\") pod \"kube-apiserver-ci-4426.0.0-n-044e8b6791\" (UID: \"af10b0c178fa39fccdb920ef87cf74f2\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.364256 kubelet[3418]: I0909 23:43:33.364157 3418 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af10b0c178fa39fccdb920ef87cf74f2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.0.0-n-044e8b6791\" (UID: \"af10b0c178fa39fccdb920ef87cf74f2\") " pod="kube-system/kube-apiserver-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.364256 kubelet[3418]: I0909 23:43:33.364168 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0cef700657afcba278914a0c4a7c4c5c-ca-certs\") pod \"kube-controller-manager-ci-4426.0.0-n-044e8b6791\" (UID: \"0cef700657afcba278914a0c4a7c4c5c\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.364256 kubelet[3418]: I0909 23:43:33.364178 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0cef700657afcba278914a0c4a7c4c5c-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.0.0-n-044e8b6791\" (UID: \"0cef700657afcba278914a0c4a7c4c5c\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.364256 kubelet[3418]: I0909 23:43:33.364193 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0cef700657afcba278914a0c4a7c4c5c-k8s-certs\") pod \"kube-controller-manager-ci-4426.0.0-n-044e8b6791\" (UID: \"0cef700657afcba278914a0c4a7c4c5c\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.364332 kubelet[3418]: I0909 23:43:33.364202 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cef700657afcba278914a0c4a7c4c5c-kubeconfig\") pod \"kube-controller-manager-ci-4426.0.0-n-044e8b6791\" (UID: \"0cef700657afcba278914a0c4a7c4c5c\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:33.889544 sudo[3455]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 23:43:33.889751 sudo[3455]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 23:43:34.139524 sudo[3455]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:34.144234 kubelet[3418]: I0909 23:43:34.144112 3418 apiserver.go:52] "Watching apiserver" Sep 9 23:43:34.162719 kubelet[3418]: I0909 23:43:34.162667 3418 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:43:34.197841 kubelet[3418]: I0909 23:43:34.196866 3418 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:34.198172 kubelet[3418]: I0909 23:43:34.198154 3418 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:34.215357 kubelet[3418]: I0909 23:43:34.214828 3418 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 9 23:43:34.215357 kubelet[3418]: E0909 23:43:34.214922 3418 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.0.0-n-044e8b6791\" already exists" pod="kube-system/kube-scheduler-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:34.221160 kubelet[3418]: I0909 23:43:34.221005 
3418 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 9 23:43:34.221374 kubelet[3418]: E0909 23:43:34.221330 3418 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.0.0-n-044e8b6791\" already exists" pod="kube-system/kube-apiserver-ci-4426.0.0-n-044e8b6791" Sep 9 23:43:34.221459 kubelet[3418]: I0909 23:43:34.221417 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4426.0.0-n-044e8b6791" podStartSLOduration=1.221337962 podStartE2EDuration="1.221337962s" podCreationTimestamp="2025-09-09 23:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:34.221170739 +0000 UTC m=+1.121607383" watchObservedRunningTime="2025-09-09 23:43:34.221337962 +0000 UTC m=+1.121774598" Sep 9 23:43:34.235754 kubelet[3418]: I0909 23:43:34.235667 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4426.0.0-n-044e8b6791" podStartSLOduration=1.2356529 podStartE2EDuration="1.2356529s" podCreationTimestamp="2025-09-09 23:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:34.235157966 +0000 UTC m=+1.135594602" watchObservedRunningTime="2025-09-09 23:43:34.2356529 +0000 UTC m=+1.136089544" Sep 9 23:43:34.293538 kubelet[3418]: I0909 23:43:34.293398 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4426.0.0-n-044e8b6791" podStartSLOduration=1.293368855 podStartE2EDuration="1.293368855s" podCreationTimestamp="2025-09-09 23:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:34.248705078 +0000 UTC m=+1.149141714" watchObservedRunningTime="2025-09-09 23:43:34.293368855 +0000 UTC m=+1.193805555" Sep 9 23:43:35.393632 sudo[2389]: pam_unix(sudo:session): session closed for user root Sep 9 23:43:35.483079 sshd[2388]: Connection closed by 10.200.16.10 port 36438 Sep 9 23:43:35.483619 sshd-session[2385]: pam_unix(sshd:session): session closed for user core Sep 9 23:43:35.487390 systemd[1]: sshd@6-10.200.20.13:22-10.200.16.10:36438.service: Deactivated successfully. Sep 9 23:43:35.487563 systemd-logind[1867]: Session 9 logged out. Waiting for processes to exit. Sep 9 23:43:35.491097 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 23:43:35.491316 systemd[1]: session-9.scope: Consumed 4.131s CPU time, 262.7M memory peak. Sep 9 23:43:35.494116 systemd-logind[1867]: Removed session 9. Sep 9 23:43:37.975513 kubelet[3418]: I0909 23:43:37.975482 3418 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 23:43:37.976030 kubelet[3418]: I0909 23:43:37.975885 3418 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 23:43:37.976068 containerd[1890]: time="2025-09-09T23:43:37.975748913Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 23:43:39.048073 systemd[1]: Created slice kubepods-burstable-pod6a45986f_4b27_4c11_923c_2e57bf55fc1d.slice - libcontainer container kubepods-burstable-pod6a45986f_4b27_4c11_923c_2e57bf55fc1d.slice. 
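The container_manager_linux entry above carries the kubelet's hard-eviction thresholds in its NodeConfig: memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and nodefs/imagefs inodesFree < 5%, all with operator LessThan and no grace period. Below is a minimal sketch of how such signal thresholds can be evaluated against observed capacity; it is an illustration only (the Threshold class, the breached() helper and the sample observations are made up), not the kubelet's eviction manager.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Threshold:
        signal: str
        quantity: Optional[int] = None      # absolute bytes, e.g. 100Mi for memory.available
        percentage: Optional[float] = None  # fraction of capacity, e.g. 0.10 for nodefs.available

    # The HardEvictionThresholds from the NodeConfig entry above.
    HARD = [
        Threshold("memory.available", quantity=100 * 1024 * 1024),
        Threshold("nodefs.available", percentage=0.10),
        Threshold("nodefs.inodesFree", percentage=0.05),
        Threshold("imagefs.available", percentage=0.15),
        Threshold("imagefs.inodesFree", percentage=0.05),
    ]

    def breached(t: Threshold, available: int, capacity: int) -> bool:
        """True if the observed availability falls below the threshold (operator LessThan)."""
        limit = t.quantity if t.quantity is not None else int(capacity * t.percentage)
        return available < limit

    # Hypothetical observations, purely to exercise the check.
    observed = {
        "memory.available": (80 * 1024 * 1024, 8 * 1024**3),  # 80Mi free of 8Gi  -> evict
        "nodefs.available": (30 * 1024**3, 100 * 1024**3),    # 30% free of 100Gi -> ok
    }
    for t in HARD:
        if t.signal in observed:
            avail, cap = observed[t.signal]
            print(t.signal, "evict" if breached(t, avail, cap) else "ok")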
Sep 9 23:43:39.056145 systemd[1]: Created slice kubepods-besteffort-pod128b4aa3_9b0a_46b8_8a1a_317e3f060ec6.slice - libcontainer container kubepods-besteffort-pod128b4aa3_9b0a_46b8_8a1a_317e3f060ec6.slice. Sep 9 23:43:39.098622 kubelet[3418]: I0909 23:43:39.098243 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/128b4aa3-9b0a-46b8-8a1a-317e3f060ec6-xtables-lock\") pod \"kube-proxy-zlqt7\" (UID: \"128b4aa3-9b0a-46b8-8a1a-317e3f060ec6\") " pod="kube-system/kube-proxy-zlqt7" Sep 9 23:43:39.098622 kubelet[3418]: I0909 23:43:39.098279 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-bpf-maps\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.098622 kubelet[3418]: I0909 23:43:39.098293 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cni-path\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.098622 kubelet[3418]: I0909 23:43:39.098302 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-host-proc-sys-net\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.098622 kubelet[3418]: I0909 23:43:39.098315 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-host-proc-sys-kernel\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.098622 kubelet[3418]: I0909 23:43:39.098324 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5wx2\" (UniqueName: \"kubernetes.io/projected/6a45986f-4b27-4c11-923c-2e57bf55fc1d-kube-api-access-l5wx2\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.099357 kubelet[3418]: I0909 23:43:39.098339 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/128b4aa3-9b0a-46b8-8a1a-317e3f060ec6-lib-modules\") pod \"kube-proxy-zlqt7\" (UID: \"128b4aa3-9b0a-46b8-8a1a-317e3f060ec6\") " pod="kube-system/kube-proxy-zlqt7" Sep 9 23:43:39.099357 kubelet[3418]: I0909 23:43:39.098348 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkw8j\" (UniqueName: \"kubernetes.io/projected/128b4aa3-9b0a-46b8-8a1a-317e3f060ec6-kube-api-access-tkw8j\") pod \"kube-proxy-zlqt7\" (UID: \"128b4aa3-9b0a-46b8-8a1a-317e3f060ec6\") " pod="kube-system/kube-proxy-zlqt7" Sep 9 23:43:39.099357 kubelet[3418]: I0909 23:43:39.098357 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/128b4aa3-9b0a-46b8-8a1a-317e3f060ec6-kube-proxy\") pod \"kube-proxy-zlqt7\" (UID: 
\"128b4aa3-9b0a-46b8-8a1a-317e3f060ec6\") " pod="kube-system/kube-proxy-zlqt7" Sep 9 23:43:39.099357 kubelet[3418]: I0909 23:43:39.098366 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cilium-run\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.099357 kubelet[3418]: I0909 23:43:39.098375 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-hostproc\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.099357 kubelet[3418]: I0909 23:43:39.098384 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cilium-cgroup\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.099469 kubelet[3418]: I0909 23:43:39.098392 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-etc-cni-netd\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.099469 kubelet[3418]: I0909 23:43:39.098399 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-lib-modules\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.099469 kubelet[3418]: I0909 23:43:39.098408 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-xtables-lock\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.099469 kubelet[3418]: I0909 23:43:39.098419 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a45986f-4b27-4c11-923c-2e57bf55fc1d-clustermesh-secrets\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.099469 kubelet[3418]: I0909 23:43:39.098428 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cilium-config-path\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.099469 kubelet[3418]: I0909 23:43:39.098437 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a45986f-4b27-4c11-923c-2e57bf55fc1d-hubble-tls\") pod \"cilium-jwhrz\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " pod="kube-system/cilium-jwhrz" Sep 9 23:43:39.198235 systemd[1]: Created slice kubepods-besteffort-poda40f008b_cd9e_4d92_8bfc_a603441c3ef9.slice - libcontainer container 
kubepods-besteffort-poda40f008b_cd9e_4d92_8bfc_a603441c3ef9.slice. Sep 9 23:43:39.199004 kubelet[3418]: I0909 23:43:39.198779 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a40f008b-cd9e-4d92-8bfc-a603441c3ef9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-srmbb\" (UID: \"a40f008b-cd9e-4d92-8bfc-a603441c3ef9\") " pod="kube-system/cilium-operator-6c4d7847fc-srmbb" Sep 9 23:43:39.199004 kubelet[3418]: I0909 23:43:39.198806 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt47n\" (UniqueName: \"kubernetes.io/projected/a40f008b-cd9e-4d92-8bfc-a603441c3ef9-kube-api-access-tt47n\") pod \"cilium-operator-6c4d7847fc-srmbb\" (UID: \"a40f008b-cd9e-4d92-8bfc-a603441c3ef9\") " pod="kube-system/cilium-operator-6c4d7847fc-srmbb" Sep 9 23:43:39.352881 containerd[1890]: time="2025-09-09T23:43:39.352752104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jwhrz,Uid:6a45986f-4b27-4c11-923c-2e57bf55fc1d,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:39.364538 containerd[1890]: time="2025-09-09T23:43:39.364403104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zlqt7,Uid:128b4aa3-9b0a-46b8-8a1a-317e3f060ec6,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:39.408193 containerd[1890]: time="2025-09-09T23:43:39.408150200Z" level=info msg="connecting to shim 5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89" address="unix:///run/containerd/s/447497b6a3b58da8ffabed4a05ad6c2a8b6dae64e786ea2c84d9e6f4f344ec65" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:39.426398 containerd[1890]: time="2025-09-09T23:43:39.426352964Z" level=info msg="connecting to shim 0a7de60beb1f987a51e8a9b82a98e988e2abb4ceb0a4adf7ee9a1afb5bf6966f" address="unix:///run/containerd/s/7b5e4d2e81971355d72db3bd52ac06f5f43e537ac9bcd45307d3b4d9cfa59b82" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:39.428035 systemd[1]: Started cri-containerd-5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89.scope - libcontainer container 5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89. Sep 9 23:43:39.452254 systemd[1]: Started cri-containerd-0a7de60beb1f987a51e8a9b82a98e988e2abb4ceb0a4adf7ee9a1afb5bf6966f.scope - libcontainer container 0a7de60beb1f987a51e8a9b82a98e988e2abb4ceb0a4adf7ee9a1afb5bf6966f. 
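The kubelet reported cgroupDriver="systemd" at startup, and the systemd entries here show what that produces: each pod gets a slice named after its QoS class and UID with dashes turned into underscores, such as kubepods-burstable-pod6a45986f_4b27_4c11_923c_2e57bf55fc1d.slice, and each CRI sandbox or container runs in a cri-containerd-<id>.scope inside it. A small sketch of that naming convention as it appears in these entries; the helper names are illustrative, and only the burstable/besteffort classes seen here are covered.

    def pod_slice(qos_class: str, pod_uid: str) -> str:
        # e.g. ("burstable", "6a45986f-4b27-4c11-923c-2e57bf55fc1d")
        #   -> "kubepods-burstable-pod6a45986f_4b27_4c11_923c_2e57bf55fc1d.slice"
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    def container_scope(container_or_sandbox_id: str) -> str:
        # e.g. the cilium-jwhrz sandbox id above -> "cri-containerd-5ee953a3...aed89.scope"
        return f"cri-containerd-{container_or_sandbox_id}.scope"

    # Under the systemd driver these typically nest as
    # kubepods.slice/kubepods-burstable.slice/<pod slice>/<container scope>.
    print(pod_slice("burstable", "6a45986f-4b27-4c11-923c-2e57bf55fc1d"))
    print(container_scope("5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89"))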
Sep 9 23:43:39.459640 containerd[1890]: time="2025-09-09T23:43:39.459590840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jwhrz,Uid:6a45986f-4b27-4c11-923c-2e57bf55fc1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\"" Sep 9 23:43:39.463343 containerd[1890]: time="2025-09-09T23:43:39.463312430Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 23:43:39.477764 containerd[1890]: time="2025-09-09T23:43:39.477729313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zlqt7,Uid:128b4aa3-9b0a-46b8-8a1a-317e3f060ec6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a7de60beb1f987a51e8a9b82a98e988e2abb4ceb0a4adf7ee9a1afb5bf6966f\"" Sep 9 23:43:39.486214 containerd[1890]: time="2025-09-09T23:43:39.486183218Z" level=info msg="CreateContainer within sandbox \"0a7de60beb1f987a51e8a9b82a98e988e2abb4ceb0a4adf7ee9a1afb5bf6966f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 23:43:39.508645 containerd[1890]: time="2025-09-09T23:43:39.508496886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-srmbb,Uid:a40f008b-cd9e-4d92-8bfc-a603441c3ef9,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:39.509482 containerd[1890]: time="2025-09-09T23:43:39.509456785Z" level=info msg="Container fd00c75d068dbcc5dc7990479a016aa0e8e7e9e2bcf27160edd53dd2c66b4ff0: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:39.532414 containerd[1890]: time="2025-09-09T23:43:39.532371503Z" level=info msg="CreateContainer within sandbox \"0a7de60beb1f987a51e8a9b82a98e988e2abb4ceb0a4adf7ee9a1afb5bf6966f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd00c75d068dbcc5dc7990479a016aa0e8e7e9e2bcf27160edd53dd2c66b4ff0\"" Sep 9 23:43:39.532959 containerd[1890]: time="2025-09-09T23:43:39.532935705Z" level=info msg="StartContainer for \"fd00c75d068dbcc5dc7990479a016aa0e8e7e9e2bcf27160edd53dd2c66b4ff0\"" Sep 9 23:43:39.533970 containerd[1890]: time="2025-09-09T23:43:39.533949070Z" level=info msg="connecting to shim fd00c75d068dbcc5dc7990479a016aa0e8e7e9e2bcf27160edd53dd2c66b4ff0" address="unix:///run/containerd/s/7b5e4d2e81971355d72db3bd52ac06f5f43e537ac9bcd45307d3b4d9cfa59b82" protocol=ttrpc version=3 Sep 9 23:43:39.550103 systemd[1]: Started cri-containerd-fd00c75d068dbcc5dc7990479a016aa0e8e7e9e2bcf27160edd53dd2c66b4ff0.scope - libcontainer container fd00c75d068dbcc5dc7990479a016aa0e8e7e9e2bcf27160edd53dd2c66b4ff0. Sep 9 23:43:39.560227 containerd[1890]: time="2025-09-09T23:43:39.560175536Z" level=info msg="connecting to shim 0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d" address="unix:///run/containerd/s/0ee7322b15a333e152c76393d3d9398afe884f9d53f4fcd41e0e633835907a3c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:43:39.582132 systemd[1]: Started cri-containerd-0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d.scope - libcontainer container 0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d. 
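The PullImage entry above uses a fully qualified reference, quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a... Below is a small helper that splits such a reference into registry, repository, tag and digest; it is a simplified parser for the shapes seen in this log, not a general OCI reference implementation, and the function name is illustrative. Pulling by digest is presumably also why the later "Pulled image" entry reports an empty repo tag and only the repo digest.

    def split_image_ref(ref: str) -> dict:
        digest = None
        if "@" in ref:
            ref, digest = ref.split("@", 1)
        name, tag = ref, None
        if ":" in ref.rsplit("/", 1)[-1]:      # a ":" in the last path component is a tag
            name, tag = ref.rsplit(":", 1)
        registry, _, repository = name.partition("/")
        return {"registry": registry, "repository": repository, "tag": tag, "digest": digest}

    print(split_image_ref(
        "quay.io/cilium/cilium:v1.12.5"
        "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"))
    # {'registry': 'quay.io', 'repository': 'cilium/cilium', 'tag': 'v1.12.5', 'digest': 'sha256:06ce...'}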
Sep 9 23:43:39.591156 containerd[1890]: time="2025-09-09T23:43:39.591120533Z" level=info msg="StartContainer for \"fd00c75d068dbcc5dc7990479a016aa0e8e7e9e2bcf27160edd53dd2c66b4ff0\" returns successfully" Sep 9 23:43:39.623201 containerd[1890]: time="2025-09-09T23:43:39.623081847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-srmbb,Uid:a40f008b-cd9e-4d92-8bfc-a603441c3ef9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\"" Sep 9 23:43:40.229097 kubelet[3418]: I0909 23:43:40.229042 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zlqt7" podStartSLOduration=2.229029159 podStartE2EDuration="2.229029159s" podCreationTimestamp="2025-09-09 23:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:43:40.229001206 +0000 UTC m=+7.129437850" watchObservedRunningTime="2025-09-09 23:43:40.229029159 +0000 UTC m=+7.129465803" Sep 9 23:43:49.219358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount481243344.mount: Deactivated successfully. Sep 9 23:43:50.662486 containerd[1890]: time="2025-09-09T23:43:50.662425310Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:50.689407 containerd[1890]: time="2025-09-09T23:43:50.689323466Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 9 23:43:50.693908 containerd[1890]: time="2025-09-09T23:43:50.693865499Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:50.694808 containerd[1890]: time="2025-09-09T23:43:50.694778332Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.231291534s" Sep 9 23:43:50.694808 containerd[1890]: time="2025-09-09T23:43:50.694807477Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 23:43:50.697071 containerd[1890]: time="2025-09-09T23:43:50.696951564Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 23:43:50.705403 containerd[1890]: time="2025-09-09T23:43:50.705370600Z" level=info msg="CreateContainer within sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 23:43:50.724765 containerd[1890]: time="2025-09-09T23:43:50.724320894Z" level=info msg="Container 0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:50.739149 containerd[1890]: time="2025-09-09T23:43:50.739059754Z" level=info 
msg="CreateContainer within sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\"" Sep 9 23:43:50.739433 containerd[1890]: time="2025-09-09T23:43:50.739414953Z" level=info msg="StartContainer for \"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\"" Sep 9 23:43:50.740974 containerd[1890]: time="2025-09-09T23:43:50.740715451Z" level=info msg="connecting to shim 0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09" address="unix:///run/containerd/s/447497b6a3b58da8ffabed4a05ad6c2a8b6dae64e786ea2c84d9e6f4f344ec65" protocol=ttrpc version=3 Sep 9 23:43:50.760122 systemd[1]: Started cri-containerd-0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09.scope - libcontainer container 0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09. Sep 9 23:43:50.788580 containerd[1890]: time="2025-09-09T23:43:50.787978812Z" level=info msg="StartContainer for \"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\" returns successfully" Sep 9 23:43:50.794678 systemd[1]: cri-containerd-0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09.scope: Deactivated successfully. Sep 9 23:43:50.797264 containerd[1890]: time="2025-09-09T23:43:50.797216085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\" id:\"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\" pid:3832 exited_at:{seconds:1757461430 nanos:796701662}" Sep 9 23:43:50.798127 containerd[1890]: time="2025-09-09T23:43:50.798102676Z" level=info msg="received exit event container_id:\"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\" id:\"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\" pid:3832 exited_at:{seconds:1757461430 nanos:796701662}" Sep 9 23:43:50.813667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09-rootfs.mount: Deactivated successfully. Sep 9 23:43:53.250741 containerd[1890]: time="2025-09-09T23:43:53.250012470Z" level=info msg="CreateContainer within sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 23:43:53.276757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967802825.mount: Deactivated successfully. 
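The pod_startup_latency_tracker entry for kube-proxy-zlqt7 above reports podStartSLOduration=2.229029159s with zero-valued pull timestamps. That figure appears to be simply watchObservedRunningTime minus podCreationTimestamp when no image pull happened; the short calculation below reproduces it from the values in the entry (timestamps truncated to microseconds).

    from datetime import datetime, timezone

    created  = datetime(2025, 9, 9, 23, 43, 38, tzinfo=timezone.utc)          # podCreationTimestamp
    observed = datetime(2025, 9, 9, 23, 43, 40, 229029, tzinfo=timezone.utc)  # watchObservedRunningTime

    print((observed - created).total_seconds())  # 2.229029, matching the logged 2.229029159s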
Sep 9 23:43:53.278603 containerd[1890]: time="2025-09-09T23:43:53.277918696Z" level=info msg="Container 8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:53.293974 containerd[1890]: time="2025-09-09T23:43:53.293943660Z" level=info msg="CreateContainer within sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\"" Sep 9 23:43:53.294473 containerd[1890]: time="2025-09-09T23:43:53.294408433Z" level=info msg="StartContainer for \"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\"" Sep 9 23:43:53.295382 containerd[1890]: time="2025-09-09T23:43:53.295345514Z" level=info msg="connecting to shim 8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436" address="unix:///run/containerd/s/447497b6a3b58da8ffabed4a05ad6c2a8b6dae64e786ea2c84d9e6f4f344ec65" protocol=ttrpc version=3 Sep 9 23:43:53.311159 systemd[1]: Started cri-containerd-8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436.scope - libcontainer container 8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436. Sep 9 23:43:53.336646 containerd[1890]: time="2025-09-09T23:43:53.336594658Z" level=info msg="StartContainer for \"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\" returns successfully" Sep 9 23:43:53.346304 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 23:43:53.346475 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:43:53.348114 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:43:53.351220 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:43:53.352476 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 23:43:53.354352 systemd[1]: cri-containerd-8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436.scope: Deactivated successfully. Sep 9 23:43:53.356248 containerd[1890]: time="2025-09-09T23:43:53.356187364Z" level=info msg="received exit event container_id:\"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\" id:\"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\" pid:3878 exited_at:{seconds:1757461433 nanos:355071402}" Sep 9 23:43:53.356333 containerd[1890]: time="2025-09-09T23:43:53.356313313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\" id:\"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\" pid:3878 exited_at:{seconds:1757461433 nanos:355071402}" Sep 9 23:43:53.369574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:43:54.257320 containerd[1890]: time="2025-09-09T23:43:54.257281971Z" level=info msg="CreateContainer within sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 23:43:54.274440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436-rootfs.mount: Deactivated successfully. 
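The TaskExit and exit events above report exited_at as raw epoch seconds and nanoseconds; for the apply-sysctl-overwrites container that is seconds:1757461433 nanos:355071402. Converting it to UTC lines up with the surrounding journal timestamps, as this small conversion shows.

    from datetime import datetime, timezone

    secs, nanos = 1757461433, 355071402   # exited_at from the TaskExit event above
    exited = datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=nanos // 1000)
    print(exited.isoformat())             # 2025-09-09T23:43:53.355071+00:00
    # The "received exit event" line above is journal-stamped Sep 9 23:43:53.356..., about 1 ms later.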
Sep 9 23:43:54.297022 containerd[1890]: time="2025-09-09T23:43:54.296668844Z" level=info msg="Container 4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:54.322851 containerd[1890]: time="2025-09-09T23:43:54.322651731Z" level=info msg="CreateContainer within sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\"" Sep 9 23:43:54.324168 containerd[1890]: time="2025-09-09T23:43:54.324087250Z" level=info msg="StartContainer for \"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\"" Sep 9 23:43:54.325440 containerd[1890]: time="2025-09-09T23:43:54.325417156Z" level=info msg="connecting to shim 4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41" address="unix:///run/containerd/s/447497b6a3b58da8ffabed4a05ad6c2a8b6dae64e786ea2c84d9e6f4f344ec65" protocol=ttrpc version=3 Sep 9 23:43:54.346132 systemd[1]: Started cri-containerd-4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41.scope - libcontainer container 4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41. Sep 9 23:43:54.354177 containerd[1890]: time="2025-09-09T23:43:54.353676598Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:54.356924 containerd[1890]: time="2025-09-09T23:43:54.356890195Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 9 23:43:54.360453 containerd[1890]: time="2025-09-09T23:43:54.360420357Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:43:54.361599 containerd[1890]: time="2025-09-09T23:43:54.361505436Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.664428307s" Sep 9 23:43:54.361599 containerd[1890]: time="2025-09-09T23:43:54.361535094Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 9 23:43:54.371500 containerd[1890]: time="2025-09-09T23:43:54.371470176Z" level=info msg="CreateContainer within sandbox \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 23:43:54.379843 systemd[1]: cri-containerd-4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41.scope: Deactivated successfully. 
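Both image pulls in this section report the bytes read and the wall-clock pull time: 157,646,710 bytes in 11.231291534s for the cilium image earlier, and 17,135,306 bytes in 3.664428307s for operator-generic just above. A quick back-of-the-envelope throughput calculation from those figures; the byte counts and durations come from the log, the rates are derived.

    pulls = {
        "quay.io/cilium/cilium:v1.12.5":           (157_646_710, 11.231291534),
        "quay.io/cilium/operator-generic:v1.12.5": (17_135_306, 3.664428307),
    }
    for image, (bytes_read, seconds) in pulls.items():
        rate = bytes_read / seconds / (1024 * 1024)
        print(f"{image}: {bytes_read / 1e6:.1f} MB in {seconds:.2f}s, ~{rate:.1f} MiB/s")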
Sep 9 23:43:54.382377 containerd[1890]: time="2025-09-09T23:43:54.380966150Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\" id:\"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\" pid:3941 exited_at:{seconds:1757461434 nanos:380389917}" Sep 9 23:43:54.382448 containerd[1890]: time="2025-09-09T23:43:54.381861798Z" level=info msg="received exit event container_id:\"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\" id:\"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\" pid:3941 exited_at:{seconds:1757461434 nanos:380389917}" Sep 9 23:43:54.383310 containerd[1890]: time="2025-09-09T23:43:54.383272515Z" level=info msg="StartContainer for \"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\" returns successfully" Sep 9 23:43:54.394830 containerd[1890]: time="2025-09-09T23:43:54.394730648Z" level=info msg="Container f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:54.694379 containerd[1890]: time="2025-09-09T23:43:54.694277318Z" level=info msg="CreateContainer within sandbox \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\"" Sep 9 23:43:54.695206 containerd[1890]: time="2025-09-09T23:43:54.695160300Z" level=info msg="StartContainer for \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\"" Sep 9 23:43:54.695866 containerd[1890]: time="2025-09-09T23:43:54.695845282Z" level=info msg="connecting to shim f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc" address="unix:///run/containerd/s/0ee7322b15a333e152c76393d3d9398afe884f9d53f4fcd41e0e633835907a3c" protocol=ttrpc version=3 Sep 9 23:43:54.715107 systemd[1]: Started cri-containerd-f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc.scope - libcontainer container f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc. Sep 9 23:43:54.740109 containerd[1890]: time="2025-09-09T23:43:54.740079094Z" level=info msg="StartContainer for \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\" returns successfully" Sep 9 23:43:55.263604 containerd[1890]: time="2025-09-09T23:43:55.262378655Z" level=info msg="CreateContainer within sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 23:43:55.275666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41-rootfs.mount: Deactivated successfully. Sep 9 23:43:55.288266 containerd[1890]: time="2025-09-09T23:43:55.288229024Z" level=info msg="Container b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:55.293494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4108896310.mount: Deactivated successfully. 
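systemd escapes "/" as "-" and a literal "-" as "\x2d" in unit names, which is why the temporary containerd mounts above appear as units like var-lib-containerd-tmpmounts-containerd\x2dmount4108896310.mount. Below is a small unescaper, assuming plain ASCII paths, that recovers the mount point; systemd-escape(1) is the canonical tool for this.

    def mount_unit_to_path(unit: str) -> str:
        # Undo systemd path escaping: "-" encodes "/", and "\xNN" encodes the byte 0xNN
        # (so "\x2d" is a literal "-").
        name = unit.removesuffix(".mount")
        out, i = [], 0
        while i < len(name):
            if name.startswith("\\x", i) and i + 4 <= len(name):
                out.append(chr(int(name[i + 2:i + 4], 16)))
                i += 4
            elif name[i] == "-":
                out.append("/")
                i += 1
            else:
                out.append(name[i])
                i += 1
        return "/" + "".join(out)

    print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount4108896310.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount4108896310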
Sep 9 23:43:55.305589 containerd[1890]: time="2025-09-09T23:43:55.305473761Z" level=info msg="CreateContainer within sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\"" Sep 9 23:43:55.306708 containerd[1890]: time="2025-09-09T23:43:55.306685334Z" level=info msg="StartContainer for \"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\"" Sep 9 23:43:55.308152 containerd[1890]: time="2025-09-09T23:43:55.308128733Z" level=info msg="connecting to shim b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0" address="unix:///run/containerd/s/447497b6a3b58da8ffabed4a05ad6c2a8b6dae64e786ea2c84d9e6f4f344ec65" protocol=ttrpc version=3 Sep 9 23:43:55.334125 systemd[1]: Started cri-containerd-b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0.scope - libcontainer container b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0. Sep 9 23:43:55.385322 systemd[1]: cri-containerd-b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0.scope: Deactivated successfully. Sep 9 23:43:55.389262 containerd[1890]: time="2025-09-09T23:43:55.389190435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\" id:\"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\" pid:4014 exited_at:{seconds:1757461435 nanos:387283167}" Sep 9 23:43:55.390460 containerd[1890]: time="2025-09-09T23:43:55.390359678Z" level=info msg="received exit event container_id:\"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\" id:\"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\" pid:4014 exited_at:{seconds:1757461435 nanos:387283167}" Sep 9 23:43:55.402242 containerd[1890]: time="2025-09-09T23:43:55.402216100Z" level=info msg="StartContainer for \"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\" returns successfully" Sep 9 23:43:55.417909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0-rootfs.mount: Deactivated successfully. 
Sep 9 23:43:56.268306 containerd[1890]: time="2025-09-09T23:43:56.268264924Z" level=info msg="CreateContainer within sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 23:43:56.280169 kubelet[3418]: I0909 23:43:56.280070 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-srmbb" podStartSLOduration=2.542759942 podStartE2EDuration="17.280053103s" podCreationTimestamp="2025-09-09 23:43:39 +0000 UTC" firstStartedPulling="2025-09-09 23:43:39.625447321 +0000 UTC m=+6.525883957" lastFinishedPulling="2025-09-09 23:43:54.362740482 +0000 UTC m=+21.263177118" observedRunningTime="2025-09-09 23:43:55.365817085 +0000 UTC m=+22.266253721" watchObservedRunningTime="2025-09-09 23:43:56.280053103 +0000 UTC m=+23.180489787" Sep 9 23:43:56.290342 containerd[1890]: time="2025-09-09T23:43:56.290228668Z" level=info msg="Container 46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:43:56.311799 containerd[1890]: time="2025-09-09T23:43:56.311761537Z" level=info msg="CreateContainer within sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\"" Sep 9 23:43:56.313534 containerd[1890]: time="2025-09-09T23:43:56.312383444Z" level=info msg="StartContainer for \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\"" Sep 9 23:43:56.314579 containerd[1890]: time="2025-09-09T23:43:56.314523513Z" level=info msg="connecting to shim 46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de" address="unix:///run/containerd/s/447497b6a3b58da8ffabed4a05ad6c2a8b6dae64e786ea2c84d9e6f4f344ec65" protocol=ttrpc version=3 Sep 9 23:43:56.333158 systemd[1]: Started cri-containerd-46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de.scope - libcontainer container 46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de. Sep 9 23:43:56.363206 containerd[1890]: time="2025-09-09T23:43:56.363166622Z" level=info msg="StartContainer for \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" returns successfully" Sep 9 23:43:56.438734 containerd[1890]: time="2025-09-09T23:43:56.438697194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" id:\"177ab9e9d7e701724bcf01eb78df41a864a916432af915c10f45238bdb19ea8f\" pid:4086 exited_at:{seconds:1757461436 nanos:437359559}" Sep 9 23:43:56.501237 kubelet[3418]: I0909 23:43:56.501203 3418 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 23:43:56.554049 systemd[1]: Created slice kubepods-burstable-pod1567f98c_8a0f_4e29_8f36_6845571c66c8.slice - libcontainer container kubepods-burstable-pod1567f98c_8a0f_4e29_8f36_6845571c66c8.slice. Sep 9 23:43:56.562204 systemd[1]: Created slice kubepods-burstable-pod73be1b27_8d0b_4ebb_bd4f_072a1313e0ee.slice - libcontainer container kubepods-burstable-pod73be1b27_8d0b_4ebb_bd4f_072a1313e0ee.slice. 
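Unlike the static pods earlier, the cilium-operator entry above has real pull timestamps, and its podStartSLOduration (2.542759942s) is much smaller than its podStartE2EDuration (17.280053103s). The difference appears to be exactly the image pull window, firstStartedPulling to lastFinishedPulling, which the arithmetic below reproduces from the timestamps in the entry (truncated to microseconds; the t() helper is just shorthand).

    from datetime import datetime, timezone

    def t(s: str) -> datetime:
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    created   = datetime(2025, 9, 9, 23, 43, 39, tzinfo=timezone.utc)  # podCreationTimestamp
    pull_from = t("2025-09-09 23:43:39.625447")                        # firstStartedPulling
    pull_to   = t("2025-09-09 23:43:54.362740")                        # lastFinishedPulling
    observed  = t("2025-09-09 23:43:56.280053")                        # watchObservedRunningTime

    e2e = (observed - created).total_seconds()
    slo = e2e - (pull_to - pull_from).total_seconds()
    print(f"E2E ~{e2e:.6f}s, SLO ~{slo:.6f}s")  # ~17.280053s and ~2.542760s, matching the entry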
Sep 9 23:43:56.601876 kubelet[3418]: I0909 23:43:56.601801 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73be1b27-8d0b-4ebb-bd4f-072a1313e0ee-config-volume\") pod \"coredns-674b8bbfcf-ttnrp\" (UID: \"73be1b27-8d0b-4ebb-bd4f-072a1313e0ee\") " pod="kube-system/coredns-674b8bbfcf-ttnrp" Sep 9 23:43:56.601876 kubelet[3418]: I0909 23:43:56.601847 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b68ww\" (UniqueName: \"kubernetes.io/projected/73be1b27-8d0b-4ebb-bd4f-072a1313e0ee-kube-api-access-b68ww\") pod \"coredns-674b8bbfcf-ttnrp\" (UID: \"73be1b27-8d0b-4ebb-bd4f-072a1313e0ee\") " pod="kube-system/coredns-674b8bbfcf-ttnrp" Sep 9 23:43:56.602158 kubelet[3418]: I0909 23:43:56.601914 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1567f98c-8a0f-4e29-8f36-6845571c66c8-config-volume\") pod \"coredns-674b8bbfcf-xf7dk\" (UID: \"1567f98c-8a0f-4e29-8f36-6845571c66c8\") " pod="kube-system/coredns-674b8bbfcf-xf7dk" Sep 9 23:43:56.602158 kubelet[3418]: I0909 23:43:56.601943 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqtp8\" (UniqueName: \"kubernetes.io/projected/1567f98c-8a0f-4e29-8f36-6845571c66c8-kube-api-access-gqtp8\") pod \"coredns-674b8bbfcf-xf7dk\" (UID: \"1567f98c-8a0f-4e29-8f36-6845571c66c8\") " pod="kube-system/coredns-674b8bbfcf-xf7dk" Sep 9 23:43:56.859905 containerd[1890]: time="2025-09-09T23:43:56.859360218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xf7dk,Uid:1567f98c-8a0f-4e29-8f36-6845571c66c8,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:56.865030 containerd[1890]: time="2025-09-09T23:43:56.864966663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ttnrp,Uid:73be1b27-8d0b-4ebb-bd4f-072a1313e0ee,Namespace:kube-system,Attempt:0,}" Sep 9 23:43:57.284144 kubelet[3418]: I0909 23:43:57.284022 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jwhrz" podStartSLOduration=7.049912766 podStartE2EDuration="18.284006041s" podCreationTimestamp="2025-09-09 23:43:39 +0000 UTC" firstStartedPulling="2025-09-09 23:43:39.461695549 +0000 UTC m=+6.362132185" lastFinishedPulling="2025-09-09 23:43:50.695788824 +0000 UTC m=+17.596225460" observedRunningTime="2025-09-09 23:43:57.282868375 +0000 UTC m=+24.183305019" watchObservedRunningTime="2025-09-09 23:43:57.284006041 +0000 UTC m=+24.184442685" Sep 9 23:43:58.510264 systemd-networkd[1714]: cilium_host: Link UP Sep 9 23:43:58.510340 systemd-networkd[1714]: cilium_net: Link UP Sep 9 23:43:58.510414 systemd-networkd[1714]: cilium_net: Gained carrier Sep 9 23:43:58.512934 systemd-networkd[1714]: cilium_host: Gained carrier Sep 9 23:43:58.545146 systemd-networkd[1714]: cilium_host: Gained IPv6LL Sep 9 23:43:58.660786 systemd-networkd[1714]: cilium_vxlan: Link UP Sep 9 23:43:58.660954 systemd-networkd[1714]: cilium_vxlan: Gained carrier Sep 9 23:43:58.896007 kernel: NET: Registered PF_ALG protocol family Sep 9 23:43:59.185212 systemd-networkd[1714]: cilium_net: Gained IPv6LL Sep 9 23:43:59.454634 systemd-networkd[1714]: lxc_health: Link UP Sep 9 23:43:59.455832 systemd-networkd[1714]: lxc_health: Gained carrier Sep 9 23:43:59.893782 systemd-networkd[1714]: lxca34ebd2f094d: Link UP Sep 9 
23:43:59.900137 kernel: eth0: renamed from tmp7f38e Sep 9 23:43:59.899938 systemd-networkd[1714]: lxca34ebd2f094d: Gained carrier Sep 9 23:43:59.919323 systemd-networkd[1714]: lxc6dd7a0740204: Link UP Sep 9 23:43:59.934102 kernel: eth0: renamed from tmp6e15f Sep 9 23:43:59.934489 systemd-networkd[1714]: lxc6dd7a0740204: Gained carrier Sep 9 23:44:00.208182 systemd-networkd[1714]: cilium_vxlan: Gained IPv6LL Sep 9 23:44:00.784164 systemd-networkd[1714]: lxc_health: Gained IPv6LL Sep 9 23:44:01.488248 systemd-networkd[1714]: lxca34ebd2f094d: Gained IPv6LL Sep 9 23:44:01.488518 systemd-networkd[1714]: lxc6dd7a0740204: Gained IPv6LL Sep 9 23:44:02.479695 containerd[1890]: time="2025-09-09T23:44:02.479621362Z" level=info msg="connecting to shim 6e15f24b541732c26d89b405e395db6486e28fe22fce55dd457be3f7b021e30f" address="unix:///run/containerd/s/2cd177a5cd4dfcd856886e65ebe8d68050914d828f4663583c6b8e41adb8db52" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:44:02.488136 containerd[1890]: time="2025-09-09T23:44:02.488095301Z" level=info msg="connecting to shim 7f38e3d5f361c338680c0d4ce8356b331d65325fdb114d7636dc4e6f88446f64" address="unix:///run/containerd/s/41607482edb304e39e45607afff1fd446f8b36cccd6f591a017998a1bb848ee4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:44:02.505165 systemd[1]: Started cri-containerd-6e15f24b541732c26d89b405e395db6486e28fe22fce55dd457be3f7b021e30f.scope - libcontainer container 6e15f24b541732c26d89b405e395db6486e28fe22fce55dd457be3f7b021e30f. Sep 9 23:44:02.512004 systemd[1]: Started cri-containerd-7f38e3d5f361c338680c0d4ce8356b331d65325fdb114d7636dc4e6f88446f64.scope - libcontainer container 7f38e3d5f361c338680c0d4ce8356b331d65325fdb114d7636dc4e6f88446f64. Sep 9 23:44:02.546363 containerd[1890]: time="2025-09-09T23:44:02.546321044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ttnrp,Uid:73be1b27-8d0b-4ebb-bd4f-072a1313e0ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e15f24b541732c26d89b405e395db6486e28fe22fce55dd457be3f7b021e30f\"" Sep 9 23:44:02.557634 containerd[1890]: time="2025-09-09T23:44:02.557593207Z" level=info msg="CreateContainer within sandbox \"6e15f24b541732c26d89b405e395db6486e28fe22fce55dd457be3f7b021e30f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:44:02.558437 containerd[1890]: time="2025-09-09T23:44:02.558409619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xf7dk,Uid:1567f98c-8a0f-4e29-8f36-6845571c66c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f38e3d5f361c338680c0d4ce8356b331d65325fdb114d7636dc4e6f88446f64\"" Sep 9 23:44:02.574722 containerd[1890]: time="2025-09-09T23:44:02.574657216Z" level=info msg="CreateContainer within sandbox \"7f38e3d5f361c338680c0d4ce8356b331d65325fdb114d7636dc4e6f88446f64\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:44:02.582793 containerd[1890]: time="2025-09-09T23:44:02.582742341Z" level=info msg="Container cc0a9436f16f050517d572eb8c30e79a46b95adff8f388cfd9ddc4f627a8a5f9: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:44:02.614023 containerd[1890]: time="2025-09-09T23:44:02.613605214Z" level=info msg="Container 047ec9bda268553e5111ac369de71b978b9775a75df5c48749389d26c0655d48: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:44:02.615476 containerd[1890]: time="2025-09-09T23:44:02.615370208Z" level=info msg="CreateContainer within sandbox \"6e15f24b541732c26d89b405e395db6486e28fe22fce55dd457be3f7b021e30f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns 
container id \"cc0a9436f16f050517d572eb8c30e79a46b95adff8f388cfd9ddc4f627a8a5f9\"" Sep 9 23:44:02.616631 containerd[1890]: time="2025-09-09T23:44:02.615884855Z" level=info msg="StartContainer for \"cc0a9436f16f050517d572eb8c30e79a46b95adff8f388cfd9ddc4f627a8a5f9\"" Sep 9 23:44:02.616631 containerd[1890]: time="2025-09-09T23:44:02.616548465Z" level=info msg="connecting to shim cc0a9436f16f050517d572eb8c30e79a46b95adff8f388cfd9ddc4f627a8a5f9" address="unix:///run/containerd/s/2cd177a5cd4dfcd856886e65ebe8d68050914d828f4663583c6b8e41adb8db52" protocol=ttrpc version=3 Sep 9 23:44:02.629433 containerd[1890]: time="2025-09-09T23:44:02.629394747Z" level=info msg="CreateContainer within sandbox \"7f38e3d5f361c338680c0d4ce8356b331d65325fdb114d7636dc4e6f88446f64\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"047ec9bda268553e5111ac369de71b978b9775a75df5c48749389d26c0655d48\"" Sep 9 23:44:02.630428 containerd[1890]: time="2025-09-09T23:44:02.630409914Z" level=info msg="StartContainer for \"047ec9bda268553e5111ac369de71b978b9775a75df5c48749389d26c0655d48\"" Sep 9 23:44:02.631929 containerd[1890]: time="2025-09-09T23:44:02.631844359Z" level=info msg="connecting to shim 047ec9bda268553e5111ac369de71b978b9775a75df5c48749389d26c0655d48" address="unix:///run/containerd/s/41607482edb304e39e45607afff1fd446f8b36cccd6f591a017998a1bb848ee4" protocol=ttrpc version=3 Sep 9 23:44:02.634318 systemd[1]: Started cri-containerd-cc0a9436f16f050517d572eb8c30e79a46b95adff8f388cfd9ddc4f627a8a5f9.scope - libcontainer container cc0a9436f16f050517d572eb8c30e79a46b95adff8f388cfd9ddc4f627a8a5f9. Sep 9 23:44:02.651132 systemd[1]: Started cri-containerd-047ec9bda268553e5111ac369de71b978b9775a75df5c48749389d26c0655d48.scope - libcontainer container 047ec9bda268553e5111ac369de71b978b9775a75df5c48749389d26c0655d48. Sep 9 23:44:02.695733 containerd[1890]: time="2025-09-09T23:44:02.695628015Z" level=info msg="StartContainer for \"cc0a9436f16f050517d572eb8c30e79a46b95adff8f388cfd9ddc4f627a8a5f9\" returns successfully" Sep 9 23:44:02.704142 containerd[1890]: time="2025-09-09T23:44:02.703907327Z" level=info msg="StartContainer for \"047ec9bda268553e5111ac369de71b978b9775a75df5c48749389d26c0655d48\" returns successfully" Sep 9 23:44:03.292907 kubelet[3418]: I0909 23:44:03.292845 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ttnrp" podStartSLOduration=24.292830355 podStartE2EDuration="24.292830355s" podCreationTimestamp="2025-09-09 23:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:03.292604127 +0000 UTC m=+30.193040763" watchObservedRunningTime="2025-09-09 23:44:03.292830355 +0000 UTC m=+30.193266991" Sep 9 23:44:03.315951 kubelet[3418]: I0909 23:44:03.315410 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xf7dk" podStartSLOduration=24.315394099 podStartE2EDuration="24.315394099s" podCreationTimestamp="2025-09-09 23:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:03.314979341 +0000 UTC m=+30.215415977" watchObservedRunningTime="2025-09-09 23:44:03.315394099 +0000 UTC m=+30.215830735" Sep 9 23:45:38.557656 systemd[1]: Started sshd@7-10.200.20.13:22-10.200.16.10:55314.service - OpenSSH per-connection server daemon (10.200.16.10:55314). 
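The m=+... suffix on the kubelet entries above appears to be seconds since the kubelet process started (a monotonic offset). Subtracting it from the wall-clock part of two different entries gives the same instant, which is a quick consistency check that can be run on any pair of entries; the two pairs below are the coredns observedRunningTime just above and the kube-apiserver one from shortly after kubelet startup.

    from datetime import datetime, timedelta, timezone

    def start_from(wall: str, m_offset: float) -> datetime:
        t = datetime.strptime(wall, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)
        return t - timedelta(seconds=m_offset)

    print(start_from("2025-09-09 23:44:03.292604", 30.193040763))  # ~2025-09-09 23:43:33.0995
    print(start_from("2025-09-09 23:43:34.221170", 1.121607383))   # ~2025-09-09 23:43:33.0995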
Sep 9 23:45:39.056668 sshd[4744]: Accepted publickey for core from 10.200.16.10 port 55314 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:45:39.057637 sshd-session[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:39.061061 systemd-logind[1867]: New session 10 of user core. Sep 9 23:45:39.065292 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 23:45:39.454760 sshd[4747]: Connection closed by 10.200.16.10 port 55314 Sep 9 23:45:39.456163 sshd-session[4744]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:39.459336 systemd[1]: sshd@7-10.200.20.13:22-10.200.16.10:55314.service: Deactivated successfully. Sep 9 23:45:39.461803 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 23:45:39.462879 systemd-logind[1867]: Session 10 logged out. Waiting for processes to exit. Sep 9 23:45:39.464215 systemd-logind[1867]: Removed session 10. Sep 9 23:45:44.548289 systemd[1]: Started sshd@8-10.200.20.13:22-10.200.16.10:39722.service - OpenSSH per-connection server daemon (10.200.16.10:39722). Sep 9 23:45:44.999253 sshd[4762]: Accepted publickey for core from 10.200.16.10 port 39722 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:45:45.000419 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:45.004098 systemd-logind[1867]: New session 11 of user core. Sep 9 23:45:45.014128 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 23:45:45.383813 sshd[4765]: Connection closed by 10.200.16.10 port 39722 Sep 9 23:45:45.384348 sshd-session[4762]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:45.387484 systemd[1]: sshd@8-10.200.20.13:22-10.200.16.10:39722.service: Deactivated successfully. Sep 9 23:45:45.388970 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 23:45:45.389630 systemd-logind[1867]: Session 11 logged out. Waiting for processes to exit. Sep 9 23:45:45.390741 systemd-logind[1867]: Removed session 11. Sep 9 23:45:50.458591 systemd[1]: Started sshd@9-10.200.20.13:22-10.200.16.10:37478.service - OpenSSH per-connection server daemon (10.200.16.10:37478). Sep 9 23:45:50.868937 sshd[4777]: Accepted publickey for core from 10.200.16.10 port 37478 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:45:50.870002 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:50.873426 systemd-logind[1867]: New session 12 of user core. Sep 9 23:45:50.882122 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 23:45:51.225662 sshd[4780]: Connection closed by 10.200.16.10 port 37478 Sep 9 23:45:51.225572 sshd-session[4777]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:51.228750 systemd[1]: sshd@9-10.200.20.13:22-10.200.16.10:37478.service: Deactivated successfully. Sep 9 23:45:51.231186 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 23:45:51.231807 systemd-logind[1867]: Session 12 logged out. Waiting for processes to exit. Sep 9 23:45:51.232763 systemd-logind[1867]: Removed session 12. Sep 9 23:45:56.324177 systemd[1]: Started sshd@10-10.200.20.13:22-10.200.16.10:37482.service - OpenSSH per-connection server daemon (10.200.16.10:37482). 
Sep 9 23:45:56.816163 sshd[4792]: Accepted publickey for core from 10.200.16.10 port 37482 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:45:56.817255 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:56.820620 systemd-logind[1867]: New session 13 of user core. Sep 9 23:45:56.824114 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 23:45:57.201644 sshd[4795]: Connection closed by 10.200.16.10 port 37482 Sep 9 23:45:57.202173 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:57.205228 systemd-logind[1867]: Session 13 logged out. Waiting for processes to exit. Sep 9 23:45:57.205644 systemd[1]: sshd@10-10.200.20.13:22-10.200.16.10:37482.service: Deactivated successfully. Sep 9 23:45:57.208805 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 23:45:57.210807 systemd-logind[1867]: Removed session 13. Sep 9 23:45:57.290483 systemd[1]: Started sshd@11-10.200.20.13:22-10.200.16.10:37496.service - OpenSSH per-connection server daemon (10.200.16.10:37496). Sep 9 23:45:57.785489 sshd[4808]: Accepted publickey for core from 10.200.16.10 port 37496 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:45:57.786580 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:57.790037 systemd-logind[1867]: New session 14 of user core. Sep 9 23:45:57.798256 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 23:45:58.209246 sshd[4811]: Connection closed by 10.200.16.10 port 37496 Sep 9 23:45:58.208693 sshd-session[4808]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:58.211381 systemd-logind[1867]: Session 14 logged out. Waiting for processes to exit. Sep 9 23:45:58.212353 systemd[1]: sshd@11-10.200.20.13:22-10.200.16.10:37496.service: Deactivated successfully. Sep 9 23:45:58.214194 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 23:45:58.215483 systemd-logind[1867]: Removed session 14. Sep 9 23:45:58.289726 systemd[1]: Started sshd@12-10.200.20.13:22-10.200.16.10:37512.service - OpenSSH per-connection server daemon (10.200.16.10:37512). Sep 9 23:45:58.749113 sshd[4821]: Accepted publickey for core from 10.200.16.10 port 37512 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:45:58.750252 sshd-session[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:58.753849 systemd-logind[1867]: New session 15 of user core. Sep 9 23:45:58.765121 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 23:45:59.128710 sshd[4824]: Connection closed by 10.200.16.10 port 37512 Sep 9 23:45:59.129341 sshd-session[4821]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:59.132537 systemd[1]: sshd@12-10.200.20.13:22-10.200.16.10:37512.service: Deactivated successfully. Sep 9 23:45:59.134446 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 23:45:59.135082 systemd-logind[1867]: Session 15 logged out. Waiting for processes to exit. Sep 9 23:45:59.136493 systemd-logind[1867]: Removed session 15. Sep 9 23:46:04.204348 systemd[1]: Started sshd@13-10.200.20.13:22-10.200.16.10:43082.service - OpenSSH per-connection server daemon (10.200.16.10:43082). 
Sep 9 23:46:04.623719 sshd[4837]: Accepted publickey for core from 10.200.16.10 port 43082 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:04.624726 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:04.628089 systemd-logind[1867]: New session 16 of user core. Sep 9 23:46:04.639276 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 23:46:04.981696 sshd[4840]: Connection closed by 10.200.16.10 port 43082 Sep 9 23:46:04.982267 sshd-session[4837]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:04.985317 systemd[1]: sshd@13-10.200.20.13:22-10.200.16.10:43082.service: Deactivated successfully. Sep 9 23:46:04.986689 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 23:46:04.987760 systemd-logind[1867]: Session 16 logged out. Waiting for processes to exit. Sep 9 23:46:04.988762 systemd-logind[1867]: Removed session 16. Sep 9 23:46:10.079642 systemd[1]: Started sshd@14-10.200.20.13:22-10.200.16.10:58918.service - OpenSSH per-connection server daemon (10.200.16.10:58918). Sep 9 23:46:10.573421 sshd[4854]: Accepted publickey for core from 10.200.16.10 port 58918 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:10.574551 sshd-session[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:10.577943 systemd-logind[1867]: New session 17 of user core. Sep 9 23:46:10.585106 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 23:46:10.962000 sshd[4857]: Connection closed by 10.200.16.10 port 58918 Sep 9 23:46:10.962562 sshd-session[4854]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:10.965595 systemd[1]: sshd@14-10.200.20.13:22-10.200.16.10:58918.service: Deactivated successfully. Sep 9 23:46:10.967345 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 23:46:10.968480 systemd-logind[1867]: Session 17 logged out. Waiting for processes to exit. Sep 9 23:46:10.969546 systemd-logind[1867]: Removed session 17. Sep 9 23:46:11.049462 systemd[1]: Started sshd@15-10.200.20.13:22-10.200.16.10:58920.service - OpenSSH per-connection server daemon (10.200.16.10:58920). Sep 9 23:46:11.540378 sshd[4868]: Accepted publickey for core from 10.200.16.10 port 58920 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:11.541477 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:11.545051 systemd-logind[1867]: New session 18 of user core. Sep 9 23:46:11.551101 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 23:46:11.969026 sshd[4871]: Connection closed by 10.200.16.10 port 58920 Sep 9 23:46:11.969537 sshd-session[4868]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:11.972699 systemd-logind[1867]: Session 18 logged out. Waiting for processes to exit. Sep 9 23:46:11.973307 systemd[1]: sshd@15-10.200.20.13:22-10.200.16.10:58920.service: Deactivated successfully. Sep 9 23:46:11.975322 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 23:46:11.977331 systemd-logind[1867]: Removed session 18. Sep 9 23:46:12.060419 systemd[1]: Started sshd@16-10.200.20.13:22-10.200.16.10:58936.service - OpenSSH per-connection server daemon (10.200.16.10:58936). 
Sep 9 23:46:12.557008 sshd[4881]: Accepted publickey for core from 10.200.16.10 port 58936 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:12.558043 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:12.561727 systemd-logind[1867]: New session 19 of user core. Sep 9 23:46:12.575100 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 23:46:13.243316 sshd[4884]: Connection closed by 10.200.16.10 port 58936 Sep 9 23:46:13.244877 sshd-session[4881]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:13.248251 systemd[1]: sshd@16-10.200.20.13:22-10.200.16.10:58936.service: Deactivated successfully. Sep 9 23:46:13.251792 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 23:46:13.252924 systemd-logind[1867]: Session 19 logged out. Waiting for processes to exit. Sep 9 23:46:13.255405 systemd-logind[1867]: Removed session 19. Sep 9 23:46:13.344904 systemd[1]: Started sshd@17-10.200.20.13:22-10.200.16.10:58942.service - OpenSSH per-connection server daemon (10.200.16.10:58942). Sep 9 23:46:13.797866 sshd[4901]: Accepted publickey for core from 10.200.16.10 port 58942 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:13.798964 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:13.802597 systemd-logind[1867]: New session 20 of user core. Sep 9 23:46:13.809204 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 23:46:14.264605 sshd[4904]: Connection closed by 10.200.16.10 port 58942 Sep 9 23:46:14.264955 sshd-session[4901]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:14.268247 systemd-logind[1867]: Session 20 logged out. Waiting for processes to exit. Sep 9 23:46:14.268371 systemd[1]: sshd@17-10.200.20.13:22-10.200.16.10:58942.service: Deactivated successfully. Sep 9 23:46:14.271098 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 23:46:14.272944 systemd-logind[1867]: Removed session 20. Sep 9 23:46:14.349719 systemd[1]: Started sshd@18-10.200.20.13:22-10.200.16.10:58944.service - OpenSSH per-connection server daemon (10.200.16.10:58944). Sep 9 23:46:14.808668 sshd[4914]: Accepted publickey for core from 10.200.16.10 port 58944 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:14.809744 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:14.813277 systemd-logind[1867]: New session 21 of user core. Sep 9 23:46:14.822098 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 23:46:15.190917 sshd[4917]: Connection closed by 10.200.16.10 port 58944 Sep 9 23:46:15.191442 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:15.194457 systemd[1]: sshd@18-10.200.20.13:22-10.200.16.10:58944.service: Deactivated successfully. Sep 9 23:46:15.196118 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 23:46:15.196909 systemd-logind[1867]: Session 21 logged out. Waiting for processes to exit. Sep 9 23:46:15.198245 systemd-logind[1867]: Removed session 21. Sep 9 23:46:20.283503 systemd[1]: Started sshd@19-10.200.20.13:22-10.200.16.10:46288.service - OpenSSH per-connection server daemon (10.200.16.10:46288). 
Sep 9 23:46:20.774894 sshd[4931]: Accepted publickey for core from 10.200.16.10 port 46288 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:20.776015 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:20.779616 systemd-logind[1867]: New session 22 of user core. Sep 9 23:46:20.795098 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 23:46:21.166102 sshd[4934]: Connection closed by 10.200.16.10 port 46288 Sep 9 23:46:21.166897 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:21.169826 systemd[1]: sshd@19-10.200.20.13:22-10.200.16.10:46288.service: Deactivated successfully. Sep 9 23:46:21.170024 systemd-logind[1867]: Session 22 logged out. Waiting for processes to exit. Sep 9 23:46:21.171853 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 23:46:21.175352 systemd-logind[1867]: Removed session 22. Sep 9 23:46:26.248609 systemd[1]: Started sshd@20-10.200.20.13:22-10.200.16.10:46294.service - OpenSSH per-connection server daemon (10.200.16.10:46294). Sep 9 23:46:26.699946 sshd[4945]: Accepted publickey for core from 10.200.16.10 port 46294 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:26.701008 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:26.704408 systemd-logind[1867]: New session 23 of user core. Sep 9 23:46:26.711097 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 23:46:27.079582 sshd[4948]: Connection closed by 10.200.16.10 port 46294 Sep 9 23:46:27.080074 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:27.083638 systemd[1]: sshd@20-10.200.20.13:22-10.200.16.10:46294.service: Deactivated successfully. Sep 9 23:46:27.085291 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 23:46:27.086170 systemd-logind[1867]: Session 23 logged out. Waiting for processes to exit. Sep 9 23:46:27.087296 systemd-logind[1867]: Removed session 23. Sep 9 23:46:27.171188 systemd[1]: Started sshd@21-10.200.20.13:22-10.200.16.10:46304.service - OpenSSH per-connection server daemon (10.200.16.10:46304). Sep 9 23:46:27.628973 sshd[4959]: Accepted publickey for core from 10.200.16.10 port 46304 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:27.630206 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:27.633631 systemd-logind[1867]: New session 24 of user core. Sep 9 23:46:27.642095 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 9 23:46:29.188711 containerd[1890]: time="2025-09-09T23:46:29.188669378Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:46:29.194227 containerd[1890]: time="2025-09-09T23:46:29.194123984Z" level=info msg="StopContainer for \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\" with timeout 30 (s)" Sep 9 23:46:29.194478 containerd[1890]: time="2025-09-09T23:46:29.194457708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" id:\"7148b30fd21223e465f2ba805eb7404243a945cecb298e204e99c59e9fecbd9c\" pid:4981 exited_at:{seconds:1757461589 nanos:194174346}" Sep 9 23:46:29.195744 containerd[1890]: time="2025-09-09T23:46:29.195596971Z" level=info msg="Stop container \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\" with signal terminated" Sep 9 23:46:29.197404 containerd[1890]: time="2025-09-09T23:46:29.197386258Z" level=info msg="StopContainer for \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" with timeout 2 (s)" Sep 9 23:46:29.197688 containerd[1890]: time="2025-09-09T23:46:29.197674372Z" level=info msg="Stop container \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" with signal terminated" Sep 9 23:46:29.207304 systemd-networkd[1714]: lxc_health: Link DOWN Sep 9 23:46:29.207309 systemd-networkd[1714]: lxc_health: Lost carrier Sep 9 23:46:29.215095 systemd[1]: cri-containerd-f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc.scope: Deactivated successfully. Sep 9 23:46:29.216651 containerd[1890]: time="2025-09-09T23:46:29.216580294Z" level=info msg="received exit event container_id:\"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\" id:\"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\" pid:3980 exited_at:{seconds:1757461589 nanos:214916028}" Sep 9 23:46:29.217587 containerd[1890]: time="2025-09-09T23:46:29.217563193Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\" id:\"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\" pid:3980 exited_at:{seconds:1757461589 nanos:214916028}" Sep 9 23:46:29.226819 systemd[1]: cri-containerd-46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de.scope: Deactivated successfully. Sep 9 23:46:29.227399 systemd[1]: cri-containerd-46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de.scope: Consumed 4.413s CPU time, 121.3M memory peak, 128K read from disk, 12.9M written to disk. 
Sep 9 23:46:29.232296 containerd[1890]: time="2025-09-09T23:46:29.232258297Z" level=info msg="received exit event container_id:\"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" id:\"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" pid:4052 exited_at:{seconds:1757461589 nanos:231636523}" Sep 9 23:46:29.232487 containerd[1890]: time="2025-09-09T23:46:29.232466848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" id:\"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" pid:4052 exited_at:{seconds:1757461589 nanos:231636523}" Sep 9 23:46:29.250097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc-rootfs.mount: Deactivated successfully. Sep 9 23:46:29.256808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de-rootfs.mount: Deactivated successfully. Sep 9 23:46:29.318593 containerd[1890]: time="2025-09-09T23:46:29.318553159Z" level=info msg="StopContainer for \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\" returns successfully" Sep 9 23:46:29.319537 containerd[1890]: time="2025-09-09T23:46:29.319343490Z" level=info msg="StopPodSandbox for \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\"" Sep 9 23:46:29.319537 containerd[1890]: time="2025-09-09T23:46:29.319397892Z" level=info msg="Container to stop \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:46:29.323291 containerd[1890]: time="2025-09-09T23:46:29.323269699Z" level=info msg="StopContainer for \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" returns successfully" Sep 9 23:46:29.324320 containerd[1890]: time="2025-09-09T23:46:29.324288895Z" level=info msg="StopPodSandbox for \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\"" Sep 9 23:46:29.324470 containerd[1890]: time="2025-09-09T23:46:29.324416843Z" level=info msg="Container to stop \"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:46:29.324470 containerd[1890]: time="2025-09-09T23:46:29.324428556Z" level=info msg="Container to stop \"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:46:29.324470 containerd[1890]: time="2025-09-09T23:46:29.324435276Z" level=info msg="Container to stop \"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:46:29.324470 containerd[1890]: time="2025-09-09T23:46:29.324440884Z" level=info msg="Container to stop \"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:46:29.324572 containerd[1890]: time="2025-09-09T23:46:29.324446500Z" level=info msg="Container to stop \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 23:46:29.326957 systemd[1]: cri-containerd-0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d.scope: Deactivated successfully. 
Sep 9 23:46:29.329404 containerd[1890]: time="2025-09-09T23:46:29.329348391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\" id:\"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\" pid:3645 exit_status:137 exited_at:{seconds:1757461589 nanos:329076158}" Sep 9 23:46:29.333634 systemd[1]: cri-containerd-5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89.scope: Deactivated successfully. Sep 9 23:46:29.353294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89-rootfs.mount: Deactivated successfully. Sep 9 23:46:29.357921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d-rootfs.mount: Deactivated successfully. Sep 9 23:46:29.368567 containerd[1890]: time="2025-09-09T23:46:29.368433449Z" level=info msg="shim disconnected" id=5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89 namespace=k8s.io Sep 9 23:46:29.368567 containerd[1890]: time="2025-09-09T23:46:29.368463218Z" level=warning msg="cleaning up after shim disconnected" id=5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89 namespace=k8s.io Sep 9 23:46:29.368567 containerd[1890]: time="2025-09-09T23:46:29.368487403Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 23:46:29.370254 containerd[1890]: time="2025-09-09T23:46:29.370131980Z" level=info msg="shim disconnected" id=0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d namespace=k8s.io Sep 9 23:46:29.370254 containerd[1890]: time="2025-09-09T23:46:29.370150429Z" level=warning msg="cleaning up after shim disconnected" id=0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d namespace=k8s.io Sep 9 23:46:29.370254 containerd[1890]: time="2025-09-09T23:46:29.370166733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 23:46:29.378779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89-shm.mount: Deactivated successfully. 
Sep 9 23:46:29.379013 containerd[1890]: time="2025-09-09T23:46:29.377776582Z" level=info msg="received exit event sandbox_id:\"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" exit_status:137 exited_at:{seconds:1757461589 nanos:334936570}" Sep 9 23:46:29.380116 containerd[1890]: time="2025-09-09T23:46:29.379517011Z" level=info msg="TearDown network for sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" successfully" Sep 9 23:46:29.380116 containerd[1890]: time="2025-09-09T23:46:29.379537652Z" level=info msg="StopPodSandbox for \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" returns successfully" Sep 9 23:46:29.383676 containerd[1890]: time="2025-09-09T23:46:29.383599561Z" level=info msg="received exit event sandbox_id:\"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\" exit_status:137 exited_at:{seconds:1757461589 nanos:329076158}" Sep 9 23:46:29.384059 containerd[1890]: time="2025-09-09T23:46:29.383975718Z" level=info msg="TearDown network for sandbox \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\" successfully" Sep 9 23:46:29.384059 containerd[1890]: time="2025-09-09T23:46:29.384012663Z" level=info msg="StopPodSandbox for \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\" returns successfully" Sep 9 23:46:29.384059 containerd[1890]: time="2025-09-09T23:46:29.384026592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" id:\"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" pid:3546 exit_status:137 exited_at:{seconds:1757461589 nanos:334936570}" Sep 9 23:46:29.519934 kubelet[3418]: I0909 23:46:29.519817 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cilium-config-path\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.519934 kubelet[3418]: I0909 23:46:29.519848 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-host-proc-sys-net\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.519934 kubelet[3418]: I0909 23:46:29.519863 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-host-proc-sys-kernel\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.519934 kubelet[3418]: I0909 23:46:29.519885 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-etc-cni-netd\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.519934 kubelet[3418]: I0909 23:46:29.519900 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-lib-modules\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.519934 kubelet[3418]: I0909 23:46:29.519913 3418 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a45986f-4b27-4c11-923c-2e57bf55fc1d-clustermesh-secrets\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.520449 kubelet[3418]: I0909 23:46:29.519931 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a45986f-4b27-4c11-923c-2e57bf55fc1d-hubble-tls\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.520449 kubelet[3418]: I0909 23:46:29.519942 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-xtables-lock\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.520449 kubelet[3418]: I0909 23:46:29.519956 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5wx2\" (UniqueName: \"kubernetes.io/projected/6a45986f-4b27-4c11-923c-2e57bf55fc1d-kube-api-access-l5wx2\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.520449 kubelet[3418]: I0909 23:46:29.519964 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cilium-run\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.520449 kubelet[3418]: I0909 23:46:29.519973 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cilium-cgroup\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.521083 kubelet[3418]: I0909 23:46:29.521059 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:46:29.521291 kubelet[3418]: I0909 23:46:29.521267 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:46:29.521514 kubelet[3418]: I0909 23:46:29.521307 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:46:29.522319 kubelet[3418]: I0909 23:46:29.522011 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:46:29.522319 kubelet[3418]: I0909 23:46:29.522038 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:46:29.522319 kubelet[3418]: I0909 23:46:29.522052 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-bpf-maps\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.522319 kubelet[3418]: I0909 23:46:29.522066 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-hostproc\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.522319 kubelet[3418]: I0909 23:46:29.522081 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a40f008b-cd9e-4d92-8bfc-a603441c3ef9-cilium-config-path\") pod \"a40f008b-cd9e-4d92-8bfc-a603441c3ef9\" (UID: \"a40f008b-cd9e-4d92-8bfc-a603441c3ef9\") " Sep 9 23:46:29.522319 kubelet[3418]: I0909 23:46:29.522095 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt47n\" (UniqueName: \"kubernetes.io/projected/a40f008b-cd9e-4d92-8bfc-a603441c3ef9-kube-api-access-tt47n\") pod \"a40f008b-cd9e-4d92-8bfc-a603441c3ef9\" (UID: \"a40f008b-cd9e-4d92-8bfc-a603441c3ef9\") " Sep 9 23:46:29.522454 kubelet[3418]: I0909 23:46:29.522122 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cni-path\") pod \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\" (UID: \"6a45986f-4b27-4c11-923c-2e57bf55fc1d\") " Sep 9 23:46:29.522454 kubelet[3418]: I0909 23:46:29.522155 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cni-path" (OuterVolumeSpecName: "cni-path") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:46:29.522693 kubelet[3418]: I0909 23:46:29.522675 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:46:29.522783 kubelet[3418]: I0909 23:46:29.522768 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:46:29.522839 kubelet[3418]: I0909 23:46:29.522830 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:46:29.525310 kubelet[3418]: I0909 23:46:29.523976 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-hostproc" (OuterVolumeSpecName: "hostproc") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 23:46:29.525995 kubelet[3418]: I0909 23:46:29.525129 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a40f008b-cd9e-4d92-8bfc-a603441c3ef9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a40f008b-cd9e-4d92-8bfc-a603441c3ef9" (UID: "a40f008b-cd9e-4d92-8bfc-a603441c3ef9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 23:46:29.526108 kubelet[3418]: I0909 23:46:29.525978 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 23:46:29.526217 kubelet[3418]: I0909 23:46:29.526204 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a45986f-4b27-4c11-923c-2e57bf55fc1d-kube-api-access-l5wx2" (OuterVolumeSpecName: "kube-api-access-l5wx2") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "kube-api-access-l5wx2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:46:29.527121 kubelet[3418]: I0909 23:46:29.527079 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a45986f-4b27-4c11-923c-2e57bf55fc1d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:46:29.527866 kubelet[3418]: I0909 23:46:29.527842 3418 scope.go:117] "RemoveContainer" containerID="f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc" Sep 9 23:46:29.529547 kubelet[3418]: I0909 23:46:29.529523 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a45986f-4b27-4c11-923c-2e57bf55fc1d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6a45986f-4b27-4c11-923c-2e57bf55fc1d" (UID: "6a45986f-4b27-4c11-923c-2e57bf55fc1d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 23:46:29.530160 containerd[1890]: time="2025-09-09T23:46:29.529633137Z" level=info msg="RemoveContainer for \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\"" Sep 9 23:46:29.534446 kubelet[3418]: I0909 23:46:29.534416 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40f008b-cd9e-4d92-8bfc-a603441c3ef9-kube-api-access-tt47n" (OuterVolumeSpecName: "kube-api-access-tt47n") pod "a40f008b-cd9e-4d92-8bfc-a603441c3ef9" (UID: "a40f008b-cd9e-4d92-8bfc-a603441c3ef9"). InnerVolumeSpecName "kube-api-access-tt47n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:46:29.539612 systemd[1]: Removed slice kubepods-burstable-pod6a45986f_4b27_4c11_923c_2e57bf55fc1d.slice - libcontainer container kubepods-burstable-pod6a45986f_4b27_4c11_923c_2e57bf55fc1d.slice. Sep 9 23:46:29.539821 systemd[1]: kubepods-burstable-pod6a45986f_4b27_4c11_923c_2e57bf55fc1d.slice: Consumed 4.475s CPU time, 121.7M memory peak, 128K read from disk, 12.9M written to disk. Sep 9 23:46:29.543831 containerd[1890]: time="2025-09-09T23:46:29.543779614Z" level=info msg="RemoveContainer for \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\" returns successfully" Sep 9 23:46:29.544258 kubelet[3418]: I0909 23:46:29.544141 3418 scope.go:117] "RemoveContainer" containerID="f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc" Sep 9 23:46:29.544564 containerd[1890]: time="2025-09-09T23:46:29.544481958Z" level=error msg="ContainerStatus for \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\": not found" Sep 9 23:46:29.544788 kubelet[3418]: E0909 23:46:29.544678 3418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\": not found" containerID="f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc" Sep 9 23:46:29.544788 kubelet[3418]: I0909 23:46:29.544708 3418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc"} err="failed to get container status \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"f64355ee340083509cf3c050a928032510ecfd7f76343439d8474518ce4798dc\": not found" Sep 9 23:46:29.544788 kubelet[3418]: I0909 23:46:29.544772 3418 scope.go:117] "RemoveContainer" containerID="46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de" Sep 9 23:46:29.547165 containerd[1890]: 
time="2025-09-09T23:46:29.547141539Z" level=info msg="RemoveContainer for \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\"" Sep 9 23:46:29.555267 containerd[1890]: time="2025-09-09T23:46:29.555233565Z" level=info msg="RemoveContainer for \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" returns successfully" Sep 9 23:46:29.556512 kubelet[3418]: I0909 23:46:29.556488 3418 scope.go:117] "RemoveContainer" containerID="b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0" Sep 9 23:46:29.557894 containerd[1890]: time="2025-09-09T23:46:29.557878001Z" level=info msg="RemoveContainer for \"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\"" Sep 9 23:46:29.566824 containerd[1890]: time="2025-09-09T23:46:29.566799928Z" level=info msg="RemoveContainer for \"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\" returns successfully" Sep 9 23:46:29.567070 kubelet[3418]: I0909 23:46:29.567045 3418 scope.go:117] "RemoveContainer" containerID="4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41" Sep 9 23:46:29.568568 containerd[1890]: time="2025-09-09T23:46:29.568554469Z" level=info msg="RemoveContainer for \"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\"" Sep 9 23:46:29.576395 containerd[1890]: time="2025-09-09T23:46:29.576368773Z" level=info msg="RemoveContainer for \"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\" returns successfully" Sep 9 23:46:29.576591 kubelet[3418]: I0909 23:46:29.576570 3418 scope.go:117] "RemoveContainer" containerID="8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436" Sep 9 23:46:29.577671 containerd[1890]: time="2025-09-09T23:46:29.577646442Z" level=info msg="RemoveContainer for \"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\"" Sep 9 23:46:29.585267 containerd[1890]: time="2025-09-09T23:46:29.585241554Z" level=info msg="RemoveContainer for \"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\" returns successfully" Sep 9 23:46:29.585413 kubelet[3418]: I0909 23:46:29.585394 3418 scope.go:117] "RemoveContainer" containerID="0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09" Sep 9 23:46:29.586459 containerd[1890]: time="2025-09-09T23:46:29.586442236Z" level=info msg="RemoveContainer for \"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\"" Sep 9 23:46:29.594192 containerd[1890]: time="2025-09-09T23:46:29.594158865Z" level=info msg="RemoveContainer for \"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\" returns successfully" Sep 9 23:46:29.594416 kubelet[3418]: I0909 23:46:29.594400 3418 scope.go:117] "RemoveContainer" containerID="46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de" Sep 9 23:46:29.594679 containerd[1890]: time="2025-09-09T23:46:29.594654546Z" level=error msg="ContainerStatus for \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\": not found" Sep 9 23:46:29.594831 kubelet[3418]: E0909 23:46:29.594808 3418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\": not found" containerID="46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de" Sep 9 23:46:29.594883 kubelet[3418]: I0909 
23:46:29.594833 3418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de"} err="failed to get container status \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\": rpc error: code = NotFound desc = an error occurred when try to find container \"46dad5a3a6bf37cfadeeb78fe99f2ad5f94b1fb6b346265445a6c0ec6922a3de\": not found" Sep 9 23:46:29.594883 kubelet[3418]: I0909 23:46:29.594849 3418 scope.go:117] "RemoveContainer" containerID="b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0" Sep 9 23:46:29.595106 containerd[1890]: time="2025-09-09T23:46:29.595030663Z" level=error msg="ContainerStatus for \"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\": not found" Sep 9 23:46:29.595139 kubelet[3418]: E0909 23:46:29.595102 3418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\": not found" containerID="b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0" Sep 9 23:46:29.595139 kubelet[3418]: I0909 23:46:29.595119 3418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0"} err="failed to get container status \"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6a618b8c5cc830e89ff2c236e55d0b5425b99a97a244c5e88cb1e90f59069f0\": not found" Sep 9 23:46:29.595139 kubelet[3418]: I0909 23:46:29.595129 3418 scope.go:117] "RemoveContainer" containerID="4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41" Sep 9 23:46:29.595329 containerd[1890]: time="2025-09-09T23:46:29.595307497Z" level=error msg="ContainerStatus for \"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\": not found" Sep 9 23:46:29.595445 kubelet[3418]: E0909 23:46:29.595431 3418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\": not found" containerID="4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41" Sep 9 23:46:29.595483 kubelet[3418]: I0909 23:46:29.595446 3418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41"} err="failed to get container status \"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\": rpc error: code = NotFound desc = an error occurred when try to find container \"4eedf34957ccebef8e9ace5f8bc0dd6507ba076853a86417f2f9b1efc862ce41\": not found" Sep 9 23:46:29.595483 kubelet[3418]: I0909 23:46:29.595456 3418 scope.go:117] "RemoveContainer" containerID="8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436" Sep 9 23:46:29.595656 containerd[1890]: time="2025-09-09T23:46:29.595635892Z" level=error msg="ContainerStatus for 
\"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\": not found" Sep 9 23:46:29.595809 kubelet[3418]: E0909 23:46:29.595791 3418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\": not found" containerID="8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436" Sep 9 23:46:29.595809 kubelet[3418]: I0909 23:46:29.595803 3418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436"} err="failed to get container status \"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a8f2f4409c4c674aca18e5ecd8efea327aa22f37a664bcd716124e6781a3436\": not found" Sep 9 23:46:29.595809 kubelet[3418]: I0909 23:46:29.595811 3418 scope.go:117] "RemoveContainer" containerID="0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09" Sep 9 23:46:29.596067 containerd[1890]: time="2025-09-09T23:46:29.595950055Z" level=error msg="ContainerStatus for \"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\": not found" Sep 9 23:46:29.596250 kubelet[3418]: E0909 23:46:29.596232 3418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\": not found" containerID="0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09" Sep 9 23:46:29.596323 kubelet[3418]: I0909 23:46:29.596252 3418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09"} err="failed to get container status \"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f5046dc79d3c8e3f771245ce8d1d698dc64d43304756ec7115c8ebbe165de09\": not found" Sep 9 23:46:29.623065 kubelet[3418]: I0909 23:46:29.623042 3418 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-host-proc-sys-net\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623065 kubelet[3418]: I0909 23:46:29.623065 3418 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-host-proc-sys-kernel\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623145 kubelet[3418]: I0909 23:46:29.623073 3418 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-etc-cni-netd\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623145 kubelet[3418]: I0909 23:46:29.623080 3418 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-lib-modules\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623145 kubelet[3418]: I0909 23:46:29.623086 3418 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a45986f-4b27-4c11-923c-2e57bf55fc1d-clustermesh-secrets\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623145 kubelet[3418]: I0909 23:46:29.623092 3418 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a45986f-4b27-4c11-923c-2e57bf55fc1d-hubble-tls\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623145 kubelet[3418]: I0909 23:46:29.623098 3418 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-xtables-lock\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623145 kubelet[3418]: I0909 23:46:29.623103 3418 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l5wx2\" (UniqueName: \"kubernetes.io/projected/6a45986f-4b27-4c11-923c-2e57bf55fc1d-kube-api-access-l5wx2\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623145 kubelet[3418]: I0909 23:46:29.623110 3418 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cilium-run\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623145 kubelet[3418]: I0909 23:46:29.623117 3418 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cilium-cgroup\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623271 kubelet[3418]: I0909 23:46:29.623122 3418 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-bpf-maps\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623271 kubelet[3418]: I0909 23:46:29.623127 3418 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-hostproc\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623271 kubelet[3418]: I0909 23:46:29.623132 3418 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a40f008b-cd9e-4d92-8bfc-a603441c3ef9-cilium-config-path\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623271 kubelet[3418]: I0909 23:46:29.623137 3418 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tt47n\" (UniqueName: \"kubernetes.io/projected/a40f008b-cd9e-4d92-8bfc-a603441c3ef9-kube-api-access-tt47n\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623271 kubelet[3418]: I0909 23:46:29.623143 3418 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cni-path\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 23:46:29.623271 kubelet[3418]: I0909 23:46:29.623149 3418 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a45986f-4b27-4c11-923c-2e57bf55fc1d-cilium-config-path\") on node \"ci-4426.0.0-n-044e8b6791\" DevicePath \"\"" Sep 9 
23:46:29.832772 systemd[1]: Removed slice kubepods-besteffort-poda40f008b_cd9e_4d92_8bfc_a603441c3ef9.slice - libcontainer container kubepods-besteffort-poda40f008b_cd9e_4d92_8bfc_a603441c3ef9.slice. Sep 9 23:46:30.249402 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d-shm.mount: Deactivated successfully. Sep 9 23:46:30.249802 systemd[1]: var-lib-kubelet-pods-a40f008b\x2dcd9e\x2d4d92\x2d8bfc\x2da603441c3ef9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtt47n.mount: Deactivated successfully. Sep 9 23:46:30.249938 systemd[1]: var-lib-kubelet-pods-6a45986f\x2d4b27\x2d4c11\x2d923c\x2d2e57bf55fc1d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl5wx2.mount: Deactivated successfully. Sep 9 23:46:30.250068 systemd[1]: var-lib-kubelet-pods-6a45986f\x2d4b27\x2d4c11\x2d923c\x2d2e57bf55fc1d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 23:46:30.250176 systemd[1]: var-lib-kubelet-pods-6a45986f\x2d4b27\x2d4c11\x2d923c\x2d2e57bf55fc1d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 23:46:31.176957 kubelet[3418]: I0909 23:46:31.176917 3418 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a45986f-4b27-4c11-923c-2e57bf55fc1d" path="/var/lib/kubelet/pods/6a45986f-4b27-4c11-923c-2e57bf55fc1d/volumes" Sep 9 23:46:31.177344 kubelet[3418]: I0909 23:46:31.177319 3418 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a40f008b-cd9e-4d92-8bfc-a603441c3ef9" path="/var/lib/kubelet/pods/a40f008b-cd9e-4d92-8bfc-a603441c3ef9/volumes" Sep 9 23:46:31.203172 sshd[4962]: Connection closed by 10.200.16.10 port 46304 Sep 9 23:46:31.203582 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:31.206650 systemd-logind[1867]: Session 24 logged out. Waiting for processes to exit. Sep 9 23:46:31.208215 systemd[1]: sshd@21-10.200.20.13:22-10.200.16.10:46304.service: Deactivated successfully. Sep 9 23:46:31.210035 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 23:46:31.211886 systemd-logind[1867]: Removed session 24. Sep 9 23:46:31.285474 systemd[1]: Started sshd@22-10.200.20.13:22-10.200.16.10:57042.service - OpenSSH per-connection server daemon (10.200.16.10:57042). Sep 9 23:46:31.740589 sshd[5117]: Accepted publickey for core from 10.200.16.10 port 57042 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:31.741655 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:31.745332 systemd-logind[1867]: New session 25 of user core. Sep 9 23:46:31.753097 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 23:46:32.371063 systemd[1]: Created slice kubepods-burstable-pod3a158f3f_80c3_4e11_83fd_8dd620f67e18.slice - libcontainer container kubepods-burstable-pod3a158f3f_80c3_4e11_83fd_8dd620f67e18.slice. Sep 9 23:46:32.414964 sshd[5120]: Connection closed by 10.200.16.10 port 57042 Sep 9 23:46:32.416802 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:32.421059 systemd[1]: sshd@22-10.200.20.13:22-10.200.16.10:57042.service: Deactivated successfully. Sep 9 23:46:32.424576 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 23:46:32.426076 systemd-logind[1867]: Session 25 logged out. Waiting for processes to exit. Sep 9 23:46:32.428603 systemd-logind[1867]: Removed session 25. 
Sep 9 23:46:32.437781 kubelet[3418]: I0909 23:46:32.437452 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pqvk\" (UniqueName: \"kubernetes.io/projected/3a158f3f-80c3-4e11-83fd-8dd620f67e18-kube-api-access-4pqvk\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.437781 kubelet[3418]: I0909 23:46:32.437485 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a158f3f-80c3-4e11-83fd-8dd620f67e18-xtables-lock\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.437781 kubelet[3418]: I0909 23:46:32.437500 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a158f3f-80c3-4e11-83fd-8dd620f67e18-clustermesh-secrets\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.437781 kubelet[3418]: I0909 23:46:32.437510 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a158f3f-80c3-4e11-83fd-8dd620f67e18-etc-cni-netd\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.437781 kubelet[3418]: I0909 23:46:32.437519 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a158f3f-80c3-4e11-83fd-8dd620f67e18-hubble-tls\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.437781 kubelet[3418]: I0909 23:46:32.437531 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a158f3f-80c3-4e11-83fd-8dd620f67e18-cilium-cgroup\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.438423 kubelet[3418]: I0909 23:46:32.437542 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3a158f3f-80c3-4e11-83fd-8dd620f67e18-cilium-ipsec-secrets\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.438423 kubelet[3418]: I0909 23:46:32.437550 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a158f3f-80c3-4e11-83fd-8dd620f67e18-host-proc-sys-kernel\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.438423 kubelet[3418]: I0909 23:46:32.437562 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a158f3f-80c3-4e11-83fd-8dd620f67e18-cilium-run\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.438423 kubelet[3418]: I0909 23:46:32.437572 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/3a158f3f-80c3-4e11-83fd-8dd620f67e18-hostproc\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.438423 kubelet[3418]: I0909 23:46:32.437582 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a158f3f-80c3-4e11-83fd-8dd620f67e18-bpf-maps\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.438423 kubelet[3418]: I0909 23:46:32.437590 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a158f3f-80c3-4e11-83fd-8dd620f67e18-lib-modules\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.438515 kubelet[3418]: I0909 23:46:32.437600 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a158f3f-80c3-4e11-83fd-8dd620f67e18-cni-path\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.438515 kubelet[3418]: I0909 23:46:32.437610 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a158f3f-80c3-4e11-83fd-8dd620f67e18-cilium-config-path\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.438515 kubelet[3418]: I0909 23:46:32.437620 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a158f3f-80c3-4e11-83fd-8dd620f67e18-host-proc-sys-net\") pod \"cilium-h2s2j\" (UID: \"3a158f3f-80c3-4e11-83fd-8dd620f67e18\") " pod="kube-system/cilium-h2s2j" Sep 9 23:46:32.502841 systemd[1]: Started sshd@23-10.200.20.13:22-10.200.16.10:57046.service - OpenSSH per-connection server daemon (10.200.16.10:57046). Sep 9 23:46:32.675168 containerd[1890]: time="2025-09-09T23:46:32.675117990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h2s2j,Uid:3a158f3f-80c3-4e11-83fd-8dd620f67e18,Namespace:kube-system,Attempt:0,}" Sep 9 23:46:32.721000 containerd[1890]: time="2025-09-09T23:46:32.720932986Z" level=info msg="connecting to shim 7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1" address="unix:///run/containerd/s/738d9015826d4cee288da8194c36de8bf04d3ba667ec2bb931c8772bd94f7d9b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:46:32.737108 systemd[1]: Started cri-containerd-7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1.scope - libcontainer container 7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1. 
Sep 9 23:46:32.759009 containerd[1890]: time="2025-09-09T23:46:32.757669258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h2s2j,Uid:3a158f3f-80c3-4e11-83fd-8dd620f67e18,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1\"" Sep 9 23:46:32.768780 containerd[1890]: time="2025-09-09T23:46:32.768755652Z" level=info msg="CreateContainer within sandbox \"7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 23:46:32.783508 containerd[1890]: time="2025-09-09T23:46:32.783476053Z" level=info msg="Container 18af74b1d20740aa6e80380a5b221a6e9bcbd712a97d1c5c78394419a3d20297: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:46:32.796367 containerd[1890]: time="2025-09-09T23:46:32.796330029Z" level=info msg="CreateContainer within sandbox \"7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"18af74b1d20740aa6e80380a5b221a6e9bcbd712a97d1c5c78394419a3d20297\"" Sep 9 23:46:32.797784 containerd[1890]: time="2025-09-09T23:46:32.797759319Z" level=info msg="StartContainer for \"18af74b1d20740aa6e80380a5b221a6e9bcbd712a97d1c5c78394419a3d20297\"" Sep 9 23:46:32.799155 containerd[1890]: time="2025-09-09T23:46:32.799127415Z" level=info msg="connecting to shim 18af74b1d20740aa6e80380a5b221a6e9bcbd712a97d1c5c78394419a3d20297" address="unix:///run/containerd/s/738d9015826d4cee288da8194c36de8bf04d3ba667ec2bb931c8772bd94f7d9b" protocol=ttrpc version=3 Sep 9 23:46:32.815095 systemd[1]: Started cri-containerd-18af74b1d20740aa6e80380a5b221a6e9bcbd712a97d1c5c78394419a3d20297.scope - libcontainer container 18af74b1d20740aa6e80380a5b221a6e9bcbd712a97d1c5c78394419a3d20297. Sep 9 23:46:32.838588 containerd[1890]: time="2025-09-09T23:46:32.838547052Z" level=info msg="StartContainer for \"18af74b1d20740aa6e80380a5b221a6e9bcbd712a97d1c5c78394419a3d20297\" returns successfully" Sep 9 23:46:32.842996 systemd[1]: cri-containerd-18af74b1d20740aa6e80380a5b221a6e9bcbd712a97d1c5c78394419a3d20297.scope: Deactivated successfully. Sep 9 23:46:32.845295 containerd[1890]: time="2025-09-09T23:46:32.845273998Z" level=info msg="received exit event container_id:\"18af74b1d20740aa6e80380a5b221a6e9bcbd712a97d1c5c78394419a3d20297\" id:\"18af74b1d20740aa6e80380a5b221a6e9bcbd712a97d1c5c78394419a3d20297\" pid:5196 exited_at:{seconds:1757461592 nanos:845091048}" Sep 9 23:46:32.845470 containerd[1890]: time="2025-09-09T23:46:32.845323304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18af74b1d20740aa6e80380a5b221a6e9bcbd712a97d1c5c78394419a3d20297\" id:\"18af74b1d20740aa6e80380a5b221a6e9bcbd712a97d1c5c78394419a3d20297\" pid:5196 exited_at:{seconds:1757461592 nanos:845091048}" Sep 9 23:46:32.990849 sshd[5130]: Accepted publickey for core from 10.200.16.10 port 57046 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:32.991941 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:32.997945 systemd-logind[1867]: New session 26 of user core. Sep 9 23:46:33.006210 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 9 23:46:33.179052 containerd[1890]: time="2025-09-09T23:46:33.178939071Z" level=info msg="StopPodSandbox for \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\"" Sep 9 23:46:33.179185 containerd[1890]: time="2025-09-09T23:46:33.179144374Z" level=info msg="TearDown network for sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" successfully" Sep 9 23:46:33.179185 containerd[1890]: time="2025-09-09T23:46:33.179154719Z" level=info msg="StopPodSandbox for \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" returns successfully" Sep 9 23:46:33.179546 containerd[1890]: time="2025-09-09T23:46:33.179525460Z" level=info msg="RemovePodSandbox for \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\"" Sep 9 23:46:33.179583 containerd[1890]: time="2025-09-09T23:46:33.179548556Z" level=info msg="Forcibly stopping sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\"" Sep 9 23:46:33.179606 containerd[1890]: time="2025-09-09T23:46:33.179595598Z" level=info msg="TearDown network for sandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" successfully" Sep 9 23:46:33.180488 containerd[1890]: time="2025-09-09T23:46:33.180464700Z" level=info msg="Ensure that sandbox 5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89 in task-service has been cleanup successfully" Sep 9 23:46:33.192784 containerd[1890]: time="2025-09-09T23:46:33.192749968Z" level=info msg="RemovePodSandbox \"5ee953a37a514d5674858af4f56f8a4c1815acd72468436acab5e5e5089aed89\" returns successfully" Sep 9 23:46:33.193190 containerd[1890]: time="2025-09-09T23:46:33.193170711Z" level=info msg="StopPodSandbox for \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\"" Sep 9 23:46:33.193271 containerd[1890]: time="2025-09-09T23:46:33.193252634Z" level=info msg="TearDown network for sandbox \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\" successfully" Sep 9 23:46:33.193271 containerd[1890]: time="2025-09-09T23:46:33.193266538Z" level=info msg="StopPodSandbox for \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\" returns successfully" Sep 9 23:46:33.193543 containerd[1890]: time="2025-09-09T23:46:33.193523395Z" level=info msg="RemovePodSandbox for \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\"" Sep 9 23:46:33.193582 containerd[1890]: time="2025-09-09T23:46:33.193545028Z" level=info msg="Forcibly stopping sandbox \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\"" Sep 9 23:46:33.193610 containerd[1890]: time="2025-09-09T23:46:33.193596070Z" level=info msg="TearDown network for sandbox \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\" successfully" Sep 9 23:46:33.194368 containerd[1890]: time="2025-09-09T23:46:33.194344528Z" level=info msg="Ensure that sandbox 0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d in task-service has been cleanup successfully" Sep 9 23:46:33.204829 containerd[1890]: time="2025-09-09T23:46:33.204802948Z" level=info msg="RemovePodSandbox \"0983616d403143a6a8e34eab0c8f0a2fd7f9a1a801c31d091e49396e5c05078d\" returns successfully" Sep 9 23:46:33.266940 kubelet[3418]: E0909 23:46:33.266748 3418 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 23:46:33.335005 sshd[5230]: Connection closed by 10.200.16.10 port 57046 Sep 9 23:46:33.334338 
sshd-session[5130]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:33.337087 systemd-logind[1867]: Session 26 logged out. Waiting for processes to exit. Sep 9 23:46:33.338388 systemd[1]: sshd@23-10.200.20.13:22-10.200.16.10:57046.service: Deactivated successfully. Sep 9 23:46:33.340310 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 23:46:33.342260 systemd-logind[1867]: Removed session 26. Sep 9 23:46:33.425195 systemd[1]: Started sshd@24-10.200.20.13:22-10.200.16.10:57050.service - OpenSSH per-connection server daemon (10.200.16.10:57050). Sep 9 23:46:33.556426 containerd[1890]: time="2025-09-09T23:46:33.556324435Z" level=info msg="CreateContainer within sandbox \"7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 23:46:33.577282 containerd[1890]: time="2025-09-09T23:46:33.576788156Z" level=info msg="Container 578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:46:33.591615 containerd[1890]: time="2025-09-09T23:46:33.591576144Z" level=info msg="CreateContainer within sandbox \"7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e\"" Sep 9 23:46:33.592226 containerd[1890]: time="2025-09-09T23:46:33.592147075Z" level=info msg="StartContainer for \"578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e\"" Sep 9 23:46:33.593653 containerd[1890]: time="2025-09-09T23:46:33.593624871Z" level=info msg="connecting to shim 578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e" address="unix:///run/containerd/s/738d9015826d4cee288da8194c36de8bf04d3ba667ec2bb931c8772bd94f7d9b" protocol=ttrpc version=3 Sep 9 23:46:33.615110 systemd[1]: Started cri-containerd-578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e.scope - libcontainer container 578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e. Sep 9 23:46:33.639705 containerd[1890]: time="2025-09-09T23:46:33.639633034Z" level=info msg="StartContainer for \"578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e\" returns successfully" Sep 9 23:46:33.640445 systemd[1]: cri-containerd-578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e.scope: Deactivated successfully. Sep 9 23:46:33.643536 containerd[1890]: time="2025-09-09T23:46:33.641510603Z" level=info msg="received exit event container_id:\"578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e\" id:\"578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e\" pid:5254 exited_at:{seconds:1757461593 nanos:641306164}" Sep 9 23:46:33.643536 containerd[1890]: time="2025-09-09T23:46:33.641790821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e\" id:\"578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e\" pid:5254 exited_at:{seconds:1757461593 nanos:641306164}" Sep 9 23:46:33.922757 sshd[5239]: Accepted publickey for core from 10.200.16.10 port 57050 ssh2: RSA SHA256:KyX5lBKi2eDd1vr6ifAfO0y3trFgfVvc0oH4+isjbRs Sep 9 23:46:33.923954 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:33.927893 systemd-logind[1867]: New session 27 of user core. 
Sep 9 23:46:33.936085 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 9 23:46:34.542330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-578773c67b641ee67782ed554f6bcb2e54adeb2a46bc5a137c5a72034f0f970e-rootfs.mount: Deactivated successfully. Sep 9 23:46:34.563082 containerd[1890]: time="2025-09-09T23:46:34.562974954Z" level=info msg="CreateContainer within sandbox \"7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 23:46:34.586184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987955308.mount: Deactivated successfully. Sep 9 23:46:34.586739 containerd[1890]: time="2025-09-09T23:46:34.586220430Z" level=info msg="Container 0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:46:34.605558 containerd[1890]: time="2025-09-09T23:46:34.605516285Z" level=info msg="CreateContainer within sandbox \"7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29\"" Sep 9 23:46:34.607466 containerd[1890]: time="2025-09-09T23:46:34.607440253Z" level=info msg="StartContainer for \"0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29\"" Sep 9 23:46:34.608402 containerd[1890]: time="2025-09-09T23:46:34.608380117Z" level=info msg="connecting to shim 0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29" address="unix:///run/containerd/s/738d9015826d4cee288da8194c36de8bf04d3ba667ec2bb931c8772bd94f7d9b" protocol=ttrpc version=3 Sep 9 23:46:34.623175 systemd[1]: Started cri-containerd-0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29.scope - libcontainer container 0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29. Sep 9 23:46:34.648187 systemd[1]: cri-containerd-0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29.scope: Deactivated successfully. Sep 9 23:46:34.649927 containerd[1890]: time="2025-09-09T23:46:34.649896733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29\" id:\"0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29\" pid:5308 exited_at:{seconds:1757461594 nanos:649290049}" Sep 9 23:46:34.651014 containerd[1890]: time="2025-09-09T23:46:34.650295259Z" level=info msg="received exit event container_id:\"0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29\" id:\"0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29\" pid:5308 exited_at:{seconds:1757461594 nanos:649290049}" Sep 9 23:46:34.653001 containerd[1890]: time="2025-09-09T23:46:34.652970124Z" level=info msg="StartContainer for \"0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29\" returns successfully" Sep 9 23:46:35.543211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bf47ab52d2ca030e9fada4c8dd99edbb53f1c34d178e7d6fb5565a1ac313f29-rootfs.mount: Deactivated successfully. 
Sep 9 23:46:35.565445 containerd[1890]: time="2025-09-09T23:46:35.565013014Z" level=info msg="CreateContainer within sandbox \"7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 23:46:35.587751 containerd[1890]: time="2025-09-09T23:46:35.587714031Z" level=info msg="Container 3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:46:35.590196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount528915674.mount: Deactivated successfully. Sep 9 23:46:35.603869 containerd[1890]: time="2025-09-09T23:46:35.603838420Z" level=info msg="CreateContainer within sandbox \"7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2\"" Sep 9 23:46:35.604500 containerd[1890]: time="2025-09-09T23:46:35.604477770Z" level=info msg="StartContainer for \"3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2\"" Sep 9 23:46:35.606097 containerd[1890]: time="2025-09-09T23:46:35.606074967Z" level=info msg="connecting to shim 3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2" address="unix:///run/containerd/s/738d9015826d4cee288da8194c36de8bf04d3ba667ec2bb931c8772bd94f7d9b" protocol=ttrpc version=3 Sep 9 23:46:35.624090 systemd[1]: Started cri-containerd-3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2.scope - libcontainer container 3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2. Sep 9 23:46:35.642769 systemd[1]: cri-containerd-3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2.scope: Deactivated successfully. Sep 9 23:46:35.644001 containerd[1890]: time="2025-09-09T23:46:35.643921388Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2\" id:\"3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2\" pid:5349 exited_at:{seconds:1757461595 nanos:643584801}" Sep 9 23:46:35.649047 containerd[1890]: time="2025-09-09T23:46:35.648948629Z" level=info msg="received exit event container_id:\"3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2\" id:\"3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2\" pid:5349 exited_at:{seconds:1757461595 nanos:643584801}" Sep 9 23:46:35.651026 containerd[1890]: time="2025-09-09T23:46:35.650933815Z" level=info msg="StartContainer for \"3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2\" returns successfully" Sep 9 23:46:36.250153 kubelet[3418]: I0909 23:46:36.250110 3418 setters.go:618] "Node became not ready" node="ci-4426.0.0-n-044e8b6791" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T23:46:36Z","lastTransitionTime":"2025-09-09T23:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 23:46:36.543311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f8e6e4494dc9224a38ae69c3650d75de7825fe7dfa049bc77397e2159a61ed2-rootfs.mount: Deactivated successfully. 
Sep 9 23:46:36.570652 containerd[1890]: time="2025-09-09T23:46:36.570551696Z" level=info msg="CreateContainer within sandbox \"7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 23:46:36.590354 containerd[1890]: time="2025-09-09T23:46:36.590137969Z" level=info msg="Container 08ec4b08e7384be89a853f65c8fecb29518fd10b0c1f2a2671592829d6e0cbd3: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:46:36.592320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3044647211.mount: Deactivated successfully. Sep 9 23:46:36.605600 containerd[1890]: time="2025-09-09T23:46:36.605569614Z" level=info msg="CreateContainer within sandbox \"7eeac0ab0b39b579fdf9a5a21e27004969104637bc378073f3de257d40481be1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"08ec4b08e7384be89a853f65c8fecb29518fd10b0c1f2a2671592829d6e0cbd3\"" Sep 9 23:46:36.606653 containerd[1890]: time="2025-09-09T23:46:36.605999428Z" level=info msg="StartContainer for \"08ec4b08e7384be89a853f65c8fecb29518fd10b0c1f2a2671592829d6e0cbd3\"" Sep 9 23:46:36.606653 containerd[1890]: time="2025-09-09T23:46:36.606581624Z" level=info msg="connecting to shim 08ec4b08e7384be89a853f65c8fecb29518fd10b0c1f2a2671592829d6e0cbd3" address="unix:///run/containerd/s/738d9015826d4cee288da8194c36de8bf04d3ba667ec2bb931c8772bd94f7d9b" protocol=ttrpc version=3 Sep 9 23:46:36.626090 systemd[1]: Started cri-containerd-08ec4b08e7384be89a853f65c8fecb29518fd10b0c1f2a2671592829d6e0cbd3.scope - libcontainer container 08ec4b08e7384be89a853f65c8fecb29518fd10b0c1f2a2671592829d6e0cbd3. Sep 9 23:46:36.652482 containerd[1890]: time="2025-09-09T23:46:36.652249996Z" level=info msg="StartContainer for \"08ec4b08e7384be89a853f65c8fecb29518fd10b0c1f2a2671592829d6e0cbd3\" returns successfully" Sep 9 23:46:36.704008 containerd[1890]: time="2025-09-09T23:46:36.703433280Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08ec4b08e7384be89a853f65c8fecb29518fd10b0c1f2a2671592829d6e0cbd3\" id:\"46fa52d8cde74e60a283e638cef2647f3825f764cfa6803be61fc23ce264033f\" pid:5419 exited_at:{seconds:1757461596 nanos:703160127}" Sep 9 23:46:37.008001 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 9 23:46:38.449698 containerd[1890]: time="2025-09-09T23:46:38.449575325Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08ec4b08e7384be89a853f65c8fecb29518fd10b0c1f2a2671592829d6e0cbd3\" id:\"92d30b479fc4a55d7aa68ac0d7d68a103f2ea2b3deca8b0c0a55817f561db9dd\" pid:5562 exit_status:1 exited_at:{seconds:1757461598 nanos:449213921}" Sep 9 23:46:39.451025 systemd-networkd[1714]: lxc_health: Link UP Sep 9 23:46:39.455140 systemd-networkd[1714]: lxc_health: Gained carrier Sep 9 23:46:40.554979 containerd[1890]: time="2025-09-09T23:46:40.554934777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08ec4b08e7384be89a853f65c8fecb29518fd10b0c1f2a2671592829d6e0cbd3\" id:\"091f63cb37f7874646056138b4ea3562a592a5e5fbded50b350f4782cbf4b4a8\" pid:5955 exited_at:{seconds:1757461600 nanos:554085501}" Sep 9 23:46:40.593103 systemd-networkd[1714]: lxc_health: Gained IPv6LL Sep 9 23:46:40.694785 kubelet[3418]: I0909 23:46:40.694710 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h2s2j" podStartSLOduration=8.694690344 podStartE2EDuration="8.694690344s" podCreationTimestamp="2025-09-09 23:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:46:37.583209968 +0000 UTC m=+184.483646612" watchObservedRunningTime="2025-09-09 23:46:40.694690344 +0000 UTC m=+187.595126980" Sep 9 23:46:42.646933 containerd[1890]: time="2025-09-09T23:46:42.646825556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08ec4b08e7384be89a853f65c8fecb29518fd10b0c1f2a2671592829d6e0cbd3\" id:\"14cae7915fd14138c23a3a196ab4cc8e266aa9073a5e1827874daed9231258a6\" pid:5987 exited_at:{seconds:1757461602 nanos:646456176}" Sep 9 23:46:44.731183 containerd[1890]: time="2025-09-09T23:46:44.731147253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08ec4b08e7384be89a853f65c8fecb29518fd10b0c1f2a2671592829d6e0cbd3\" id:\"2bf8fb0766250687b2026ee4481248471ea764e3198a74c1a9fff0cb22dccf11\" pid:6009 exited_at:{seconds:1757461604 nanos:730593372}" Sep 9 23:46:44.805708 sshd[5288]: Connection closed by 10.200.16.10 port 57050 Sep 9 23:46:44.806363 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:44.809556 systemd-logind[1867]: Session 27 logged out. Waiting for processes to exit. Sep 9 23:46:44.810153 systemd[1]: sshd@24-10.200.20.13:22-10.200.16.10:57050.service: Deactivated successfully. Sep 9 23:46:44.812039 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 23:46:44.813856 systemd-logind[1867]: Removed session 27.