Sep 3 23:23:29.000722 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Sep 3 23:23:29.000739 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 3 22:04:24 -00 2025 Sep 3 23:23:29.000745 kernel: KASLR enabled Sep 3 23:23:29.000749 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 3 23:23:29.000754 kernel: printk: legacy bootconsole [pl11] enabled Sep 3 23:23:29.000758 kernel: efi: EFI v2.7 by EDK II Sep 3 23:23:29.000763 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Sep 3 23:23:29.000767 kernel: random: crng init done Sep 3 23:23:29.000770 kernel: secureboot: Secure boot disabled Sep 3 23:23:29.000774 kernel: ACPI: Early table checksum verification disabled Sep 3 23:23:29.000778 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 3 23:23:29.000782 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:23:29.000786 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:23:29.000790 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 3 23:23:29.000795 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:23:29.000800 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:23:29.000804 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:23:29.000809 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:23:29.000813 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:23:29.000817 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:23:29.000821 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 3 23:23:29.000825 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 3 23:23:29.000829 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 3 23:23:29.000833 kernel: ACPI: Use ACPI SPCR as default console: No Sep 3 23:23:29.000838 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 3 23:23:29.000842 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Sep 3 23:23:29.000846 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Sep 3 23:23:29.000850 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 3 23:23:29.000854 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 3 23:23:29.000859 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 3 23:23:29.000863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 3 23:23:29.000867 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 3 23:23:29.000871 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 3 23:23:29.000875 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 3 23:23:29.000879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 3 23:23:29.000883 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] 
hotplug Sep 3 23:23:29.000888 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Sep 3 23:23:29.000892 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff] Sep 3 23:23:29.000896 kernel: Zone ranges: Sep 3 23:23:29.000900 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 3 23:23:29.000907 kernel: DMA32 empty Sep 3 23:23:29.000911 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 3 23:23:29.000924 kernel: Device empty Sep 3 23:23:29.000928 kernel: Movable zone start for each node Sep 3 23:23:29.000933 kernel: Early memory node ranges Sep 3 23:23:29.000938 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 3 23:23:29.000943 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Sep 3 23:23:29.000947 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Sep 3 23:23:29.000951 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Sep 3 23:23:29.000955 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 3 23:23:29.000960 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 3 23:23:29.000964 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 3 23:23:29.000968 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 3 23:23:29.000972 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 3 23:23:29.000977 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 3 23:23:29.000981 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 3 23:23:29.000985 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1 Sep 3 23:23:29.000990 kernel: psci: probing for conduit method from ACPI. Sep 3 23:23:29.000994 kernel: psci: PSCIv1.1 detected in firmware. Sep 3 23:23:29.000999 kernel: psci: Using standard PSCI v0.2 function IDs Sep 3 23:23:29.001003 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Sep 3 23:23:29.001007 kernel: psci: SMC Calling Convention v1.4 Sep 3 23:23:29.001012 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 3 23:23:29.001016 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 3 23:23:29.001020 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 3 23:23:29.001025 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 3 23:23:29.001029 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 3 23:23:29.001033 kernel: Detected PIPT I-cache on CPU0 Sep 3 23:23:29.001039 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Sep 3 23:23:29.001043 kernel: CPU features: detected: GIC system register CPU interface Sep 3 23:23:29.001047 kernel: CPU features: detected: Spectre-v4 Sep 3 23:23:29.001052 kernel: CPU features: detected: Spectre-BHB Sep 3 23:23:29.001056 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 3 23:23:29.001060 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 3 23:23:29.001065 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Sep 3 23:23:29.001069 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 3 23:23:29.001073 kernel: alternatives: applying boot alternatives Sep 3 23:23:29.001079 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e Sep 3 23:23:29.001083 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 3 23:23:29.001089 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 3 23:23:29.001093 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 3 23:23:29.001097 kernel: Fallback order for Node 0: 0 Sep 3 23:23:29.001102 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Sep 3 23:23:29.001106 kernel: Policy zone: Normal Sep 3 23:23:29.001110 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 3 23:23:29.001114 kernel: software IO TLB: area num 2. Sep 3 23:23:29.001119 kernel: software IO TLB: mapped [mem 0x0000000036280000-0x000000003a280000] (64MB) Sep 3 23:23:29.001123 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 3 23:23:29.001127 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 3 23:23:29.001132 kernel: rcu: RCU event tracing is enabled. Sep 3 23:23:29.001138 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 3 23:23:29.001142 kernel: Trampoline variant of Tasks RCU enabled. Sep 3 23:23:29.001146 kernel: Tracing variant of Tasks RCU enabled. Sep 3 23:23:29.001151 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 3 23:23:29.001155 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 3 23:23:29.001160 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 3 23:23:29.001164 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 3 23:23:29.001168 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 3 23:23:29.001173 kernel: GICv3: 960 SPIs implemented Sep 3 23:23:29.001177 kernel: GICv3: 0 Extended SPIs implemented Sep 3 23:23:29.001181 kernel: Root IRQ handler: gic_handle_irq Sep 3 23:23:29.001185 kernel: GICv3: GICv3 features: 16 PPIs, RSS Sep 3 23:23:29.001190 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Sep 3 23:23:29.001195 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 3 23:23:29.001199 kernel: ITS: No ITS available, not enabling LPIs Sep 3 23:23:29.001204 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 3 23:23:29.001208 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Sep 3 23:23:29.001213 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 3 23:23:29.001217 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Sep 3 23:23:29.001221 kernel: Console: colour dummy device 80x25 Sep 3 23:23:29.001226 kernel: printk: legacy console [tty1] enabled Sep 3 23:23:29.001230 kernel: ACPI: Core revision 20240827 Sep 3 23:23:29.001235 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Sep 3 23:23:29.001240 kernel: pid_max: default: 32768 minimum: 301 Sep 3 23:23:29.001245 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 3 23:23:29.001249 kernel: landlock: Up and running. Sep 3 23:23:29.001254 kernel: SELinux: Initializing. Sep 3 23:23:29.001258 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 3 23:23:29.001266 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 3 23:23:29.001271 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Sep 3 23:23:29.001276 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Sep 3 23:23:29.001281 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 3 23:23:29.001286 kernel: rcu: Hierarchical SRCU implementation. Sep 3 23:23:29.001290 kernel: rcu: Max phase no-delay instances is 400. Sep 3 23:23:29.001296 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 3 23:23:29.001301 kernel: Remapping and enabling EFI services. Sep 3 23:23:29.001305 kernel: smp: Bringing up secondary CPUs ... Sep 3 23:23:29.001310 kernel: Detected PIPT I-cache on CPU1 Sep 3 23:23:29.001314 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 3 23:23:29.001320 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Sep 3 23:23:29.001325 kernel: smp: Brought up 1 node, 2 CPUs Sep 3 23:23:29.001329 kernel: SMP: Total of 2 processors activated. 
Sep 3 23:23:29.001334 kernel: CPU: All CPU(s) started at EL1 Sep 3 23:23:29.001339 kernel: CPU features: detected: 32-bit EL0 Support Sep 3 23:23:29.001343 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 3 23:23:29.001348 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 3 23:23:29.001353 kernel: CPU features: detected: Common not Private translations Sep 3 23:23:29.001357 kernel: CPU features: detected: CRC32 instructions Sep 3 23:23:29.001363 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Sep 3 23:23:29.001368 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 3 23:23:29.001372 kernel: CPU features: detected: LSE atomic instructions Sep 3 23:23:29.001377 kernel: CPU features: detected: Privileged Access Never Sep 3 23:23:29.001382 kernel: CPU features: detected: Speculation barrier (SB) Sep 3 23:23:29.001386 kernel: CPU features: detected: TLB range maintenance instructions Sep 3 23:23:29.001391 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 3 23:23:29.001396 kernel: CPU features: detected: Scalable Vector Extension Sep 3 23:23:29.001401 kernel: alternatives: applying system-wide alternatives Sep 3 23:23:29.001406 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Sep 3 23:23:29.001411 kernel: SVE: maximum available vector length 16 bytes per vector Sep 3 23:23:29.001415 kernel: SVE: default vector length 16 bytes per vector Sep 3 23:23:29.001420 kernel: Memory: 3959604K/4194160K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 213368K reserved, 16384K cma-reserved) Sep 3 23:23:29.001425 kernel: devtmpfs: initialized Sep 3 23:23:29.001430 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 3 23:23:29.001435 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 3 23:23:29.001439 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 3 23:23:29.001444 kernel: 0 pages in range for non-PLT usage Sep 3 23:23:29.001449 kernel: 508560 pages in range for PLT usage Sep 3 23:23:29.001454 kernel: pinctrl core: initialized pinctrl subsystem Sep 3 23:23:29.001459 kernel: SMBIOS 3.1.0 present. Sep 3 23:23:29.001464 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 3 23:23:29.001468 kernel: DMI: Memory slots populated: 2/2 Sep 3 23:23:29.001473 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 3 23:23:29.001478 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 3 23:23:29.001482 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 3 23:23:29.001487 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 3 23:23:29.001493 kernel: audit: initializing netlink subsys (disabled) Sep 3 23:23:29.001497 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Sep 3 23:23:29.001502 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 3 23:23:29.001507 kernel: cpuidle: using governor menu Sep 3 23:23:29.001511 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 3 23:23:29.001516 kernel: ASID allocator initialised with 32768 entries Sep 3 23:23:29.001521 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 3 23:23:29.001525 kernel: Serial: AMBA PL011 UART driver Sep 3 23:23:29.001530 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 3 23:23:29.001535 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 3 23:23:29.001540 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 3 23:23:29.001545 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 3 23:23:29.001550 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 3 23:23:29.001554 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 3 23:23:29.001559 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 3 23:23:29.001564 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 3 23:23:29.001568 kernel: ACPI: Added _OSI(Module Device) Sep 3 23:23:29.001573 kernel: ACPI: Added _OSI(Processor Device) Sep 3 23:23:29.001578 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 3 23:23:29.001583 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 3 23:23:29.001588 kernel: ACPI: Interpreter enabled Sep 3 23:23:29.001592 kernel: ACPI: Using GIC for interrupt routing Sep 3 23:23:29.001597 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 3 23:23:29.001602 kernel: printk: legacy console [ttyAMA0] enabled Sep 3 23:23:29.001606 kernel: printk: legacy bootconsole [pl11] disabled Sep 3 23:23:29.001611 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 3 23:23:29.001616 kernel: ACPI: CPU0 has been hot-added Sep 3 23:23:29.001621 kernel: ACPI: CPU1 has been hot-added Sep 3 23:23:29.001626 kernel: iommu: Default domain type: Translated Sep 3 23:23:29.001631 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 3 23:23:29.001635 kernel: efivars: Registered efivars operations Sep 3 23:23:29.001640 kernel: vgaarb: loaded Sep 3 23:23:29.001645 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 3 23:23:29.001649 kernel: VFS: Disk quotas dquot_6.6.0 Sep 3 23:23:29.001654 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 3 23:23:29.001658 kernel: pnp: PnP ACPI init Sep 3 23:23:29.001664 kernel: pnp: PnP ACPI: found 0 devices Sep 3 23:23:29.001668 kernel: NET: Registered PF_INET protocol family Sep 3 23:23:29.001673 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 3 23:23:29.001678 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 3 23:23:29.001683 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 3 23:23:29.001688 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 3 23:23:29.001692 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 3 23:23:29.001697 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 3 23:23:29.001702 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 3 23:23:29.001707 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 3 23:23:29.001712 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 3 23:23:29.001716 kernel: PCI: CLS 0 bytes, default 64 Sep 3 23:23:29.001721 kernel: kvm [1]: HYP mode not available Sep 3 23:23:29.001726 kernel: Initialise system 
trusted keyrings Sep 3 23:23:29.001730 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 3 23:23:29.001735 kernel: Key type asymmetric registered Sep 3 23:23:29.001739 kernel: Asymmetric key parser 'x509' registered Sep 3 23:23:29.001744 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 3 23:23:29.001750 kernel: io scheduler mq-deadline registered Sep 3 23:23:29.001755 kernel: io scheduler kyber registered Sep 3 23:23:29.001759 kernel: io scheduler bfq registered Sep 3 23:23:29.001764 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 3 23:23:29.001769 kernel: thunder_xcv, ver 1.0 Sep 3 23:23:29.001773 kernel: thunder_bgx, ver 1.0 Sep 3 23:23:29.001778 kernel: nicpf, ver 1.0 Sep 3 23:23:29.001782 kernel: nicvf, ver 1.0 Sep 3 23:23:29.001880 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 3 23:23:29.001937 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-03T23:23:28 UTC (1756941808) Sep 3 23:23:29.001944 kernel: efifb: probing for efifb Sep 3 23:23:29.001949 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 3 23:23:29.001954 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 3 23:23:29.001958 kernel: efifb: scrolling: redraw Sep 3 23:23:29.001963 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 3 23:23:29.001968 kernel: Console: switching to colour frame buffer device 128x48 Sep 3 23:23:29.001973 kernel: fb0: EFI VGA frame buffer device Sep 3 23:23:29.001979 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 3 23:23:29.001983 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 3 23:23:29.001988 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 3 23:23:29.001993 kernel: watchdog: NMI not fully supported Sep 3 23:23:29.001997 kernel: NET: Registered PF_INET6 protocol family Sep 3 23:23:29.002002 kernel: watchdog: Hard watchdog permanently disabled Sep 3 23:23:29.002007 kernel: Segment Routing with IPv6 Sep 3 23:23:29.002011 kernel: In-situ OAM (IOAM) with IPv6 Sep 3 23:23:29.002016 kernel: NET: Registered PF_PACKET protocol family Sep 3 23:23:29.002021 kernel: Key type dns_resolver registered Sep 3 23:23:29.002026 kernel: registered taskstats version 1 Sep 3 23:23:29.002031 kernel: Loading compiled-in X.509 certificates Sep 3 23:23:29.002036 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 08fc774dab168e64ce30c382a4517d40e72c4744' Sep 3 23:23:29.002040 kernel: Demotion targets for Node 0: null Sep 3 23:23:29.002045 kernel: Key type .fscrypt registered Sep 3 23:23:29.002050 kernel: Key type fscrypt-provisioning registered Sep 3 23:23:29.002054 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 3 23:23:29.002059 kernel: ima: Allocated hash algorithm: sha1 Sep 3 23:23:29.002064 kernel: ima: No architecture policies found Sep 3 23:23:29.002069 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 3 23:23:29.002074 kernel: clk: Disabling unused clocks Sep 3 23:23:29.002078 kernel: PM: genpd: Disabling unused power domains Sep 3 23:23:29.002083 kernel: Warning: unable to open an initial console. 
Sep 3 23:23:29.002088 kernel: Freeing unused kernel memory: 38976K Sep 3 23:23:29.002092 kernel: Run /init as init process Sep 3 23:23:29.002097 kernel: with arguments: Sep 3 23:23:29.002102 kernel: /init Sep 3 23:23:29.002107 kernel: with environment: Sep 3 23:23:29.002112 kernel: HOME=/ Sep 3 23:23:29.002116 kernel: TERM=linux Sep 3 23:23:29.002121 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 3 23:23:29.002127 systemd[1]: Successfully made /usr/ read-only. Sep 3 23:23:29.002133 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 3 23:23:29.002139 systemd[1]: Detected virtualization microsoft. Sep 3 23:23:29.002145 systemd[1]: Detected architecture arm64. Sep 3 23:23:29.002149 systemd[1]: Running in initrd. Sep 3 23:23:29.002154 systemd[1]: No hostname configured, using default hostname. Sep 3 23:23:29.002160 systemd[1]: Hostname set to . Sep 3 23:23:29.002165 systemd[1]: Initializing machine ID from random generator. Sep 3 23:23:29.002170 systemd[1]: Queued start job for default target initrd.target. Sep 3 23:23:29.002175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 3 23:23:29.002180 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 3 23:23:29.002185 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 3 23:23:29.002191 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 3 23:23:29.002197 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 3 23:23:29.002202 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 3 23:23:29.002208 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 3 23:23:29.002213 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 3 23:23:29.002218 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 3 23:23:29.002224 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 3 23:23:29.002229 systemd[1]: Reached target paths.target - Path Units. Sep 3 23:23:29.002234 systemd[1]: Reached target slices.target - Slice Units. Sep 3 23:23:29.002239 systemd[1]: Reached target swap.target - Swaps. Sep 3 23:23:29.002244 systemd[1]: Reached target timers.target - Timer Units. Sep 3 23:23:29.002249 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 3 23:23:29.002254 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 3 23:23:29.002260 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 3 23:23:29.002265 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 3 23:23:29.002271 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 3 23:23:29.002276 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 3 23:23:29.002281 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 3 23:23:29.002286 systemd[1]: Reached target sockets.target - Socket Units. Sep 3 23:23:29.002291 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 3 23:23:29.002296 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 3 23:23:29.002301 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 3 23:23:29.002306 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 3 23:23:29.002312 systemd[1]: Starting systemd-fsck-usr.service... Sep 3 23:23:29.002317 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 3 23:23:29.002323 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 3 23:23:29.002337 systemd-journald[224]: Collecting audit messages is disabled. Sep 3 23:23:29.002352 systemd-journald[224]: Journal started Sep 3 23:23:29.002365 systemd-journald[224]: Runtime Journal (/run/log/journal/1bc2093ae8aa4f0484e351ca0467728e) is 8M, max 78.5M, 70.5M free. Sep 3 23:23:29.009947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:23:29.014617 systemd-modules-load[226]: Inserted module 'overlay' Sep 3 23:23:29.038925 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 3 23:23:29.038954 systemd[1]: Started systemd-journald.service - Journal Service. Sep 3 23:23:29.045658 kernel: Bridge firewalling registered Sep 3 23:23:29.045723 systemd-modules-load[226]: Inserted module 'br_netfilter' Sep 3 23:23:29.049906 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 3 23:23:29.061410 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 3 23:23:29.067762 systemd[1]: Finished systemd-fsck-usr.service. Sep 3 23:23:29.079363 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 3 23:23:29.083828 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:23:29.090597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 3 23:23:29.098634 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:23:29.113341 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 3 23:23:29.135757 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 3 23:23:29.144190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:23:29.149875 systemd-tmpfiles[253]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 3 23:23:29.156547 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 3 23:23:29.164573 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 3 23:23:29.174571 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 3 23:23:29.190717 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 3 23:23:29.210660 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 3 23:23:29.219903 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 3 23:23:29.236392 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e Sep 3 23:23:29.264944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 3 23:23:29.272650 systemd-resolved[263]: Positive Trust Anchors: Sep 3 23:23:29.272658 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 3 23:23:29.272677 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 3 23:23:29.274855 systemd-resolved[263]: Defaulting to hostname 'linux'. Sep 3 23:23:29.276249 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 3 23:23:29.280477 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 3 23:23:29.359928 kernel: SCSI subsystem initialized Sep 3 23:23:29.364935 kernel: Loading iSCSI transport class v2.0-870. Sep 3 23:23:29.371943 kernel: iscsi: registered transport (tcp) Sep 3 23:23:29.384428 kernel: iscsi: registered transport (qla4xxx) Sep 3 23:23:29.384460 kernel: QLogic iSCSI HBA Driver Sep 3 23:23:29.396643 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 3 23:23:29.410773 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 3 23:23:29.422273 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 3 23:23:29.463324 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 3 23:23:29.468456 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 3 23:23:29.529934 kernel: raid6: neonx8 gen() 18540 MB/s Sep 3 23:23:29.548925 kernel: raid6: neonx4 gen() 18571 MB/s Sep 3 23:23:29.567922 kernel: raid6: neonx2 gen() 17083 MB/s Sep 3 23:23:29.587004 kernel: raid6: neonx1 gen() 15014 MB/s Sep 3 23:23:29.606002 kernel: raid6: int64x8 gen() 10533 MB/s Sep 3 23:23:29.624998 kernel: raid6: int64x4 gen() 10620 MB/s Sep 3 23:23:29.644929 kernel: raid6: int64x2 gen() 8980 MB/s Sep 3 23:23:29.665639 kernel: raid6: int64x1 gen() 7006 MB/s Sep 3 23:23:29.665648 kernel: raid6: using algorithm neonx4 gen() 18571 MB/s Sep 3 23:23:29.686964 kernel: raid6: .... 
xor() 15149 MB/s, rmw enabled Sep 3 23:23:29.687009 kernel: raid6: using neon recovery algorithm Sep 3 23:23:29.694362 kernel: xor: measuring software checksum speed Sep 3 23:23:29.694397 kernel: 8regs : 28669 MB/sec Sep 3 23:23:29.696634 kernel: 32regs : 28818 MB/sec Sep 3 23:23:29.699146 kernel: arm64_neon : 37635 MB/sec Sep 3 23:23:29.702948 kernel: xor: using function: arm64_neon (37635 MB/sec) Sep 3 23:23:29.739935 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 3 23:23:29.745099 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 3 23:23:29.754029 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 3 23:23:29.776410 systemd-udevd[474]: Using default interface naming scheme 'v255'. Sep 3 23:23:29.779942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 3 23:23:29.788581 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 3 23:23:29.824580 dracut-pre-trigger[490]: rd.md=0: removing MD RAID activation Sep 3 23:23:29.841793 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 3 23:23:29.847250 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 3 23:23:29.886933 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 3 23:23:29.898267 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 3 23:23:29.952448 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 3 23:23:29.960314 kernel: hv_vmbus: Vmbus version:5.3 Sep 3 23:23:29.960336 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 3 23:23:29.956072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:23:29.973633 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:23:29.996892 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 3 23:23:29.996908 kernel: hv_vmbus: registering driver hid_hyperv Sep 3 23:23:29.996922 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 3 23:23:29.996929 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Sep 3 23:23:29.996935 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Sep 3 23:23:29.980265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 3 23:23:30.020339 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 3 23:23:30.020452 kernel: PTP clock support registered Sep 3 23:23:30.020460 kernel: hv_vmbus: registering driver hv_netvsc Sep 3 23:23:30.029590 kernel: hv_utils: Registering HyperV Utility Driver Sep 3 23:23:30.035172 kernel: hv_vmbus: registering driver hv_utils Sep 3 23:23:30.035205 kernel: hv_vmbus: registering driver hv_storvsc Sep 3 23:23:30.047130 kernel: scsi host0: storvsc_host_t Sep 3 23:23:30.047302 kernel: hv_utils: Heartbeat IC version 3.0 Sep 3 23:23:30.047311 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 3 23:23:30.047327 kernel: hv_utils: Shutdown IC version 3.2 Sep 3 23:23:30.047333 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 3 23:23:29.915087 kernel: hv_utils: TimeSync IC version 4.0 Sep 3 23:23:29.925474 kernel: scsi host1: storvsc_host_t Sep 3 23:23:29.925598 systemd-journald[224]: Time jumped backwards, rotating. Sep 3 23:23:29.913116 systemd-resolved[263]: Clock change detected. Flushing caches. Sep 3 23:23:29.919998 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:23:29.944672 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 3 23:23:29.944844 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 3 23:23:29.944921 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 3 23:23:29.949703 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 3 23:23:29.949810 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 3 23:23:29.952459 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 3 23:23:29.956824 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 3 23:23:29.956960 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 3 23:23:29.957215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#69 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 3 23:23:29.969213 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#76 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 3 23:23:29.979844 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 3 23:23:29.979872 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 3 23:23:29.986969 kernel: hv_netvsc 000d3af6-aee3-000d-3af6-aee3000d3af6 eth0: VF slot 1 added Sep 3 23:23:29.997267 kernel: hv_vmbus: registering driver hv_pci Sep 3 23:23:29.997296 kernel: hv_pci f6ba9473-af56-4f70-be02-9400621642f4: PCI VMBus probing: Using version 0x10004 Sep 3 23:23:30.016992 kernel: hv_pci f6ba9473-af56-4f70-be02-9400621642f4: PCI host bridge to bus af56:00 Sep 3 23:23:30.017121 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#30 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 3 23:23:30.017185 kernel: pci_bus af56:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 3 23:23:30.021812 kernel: pci_bus af56:00: No busn resource found for root bus, will use [bus 00-ff] Sep 3 23:23:30.028494 kernel: pci af56:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Sep 3 23:23:30.034240 kernel: pci af56:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 3 23:23:30.034270 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 3 23:23:30.042079 kernel: pci af56:00:02.0: enabling Extended Tags Sep 3 23:23:30.057209 kernel: pci af56:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at af56:00:02.0 (capable of 252.048 Gb/s with 16.0 
GT/s PCIe x16 link) Sep 3 23:23:30.067557 kernel: pci_bus af56:00: busn_res: [bus 00-ff] end is updated to 00 Sep 3 23:23:30.067693 kernel: pci af56:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Sep 3 23:23:30.124923 kernel: mlx5_core af56:00:02.0: enabling device (0000 -> 0002) Sep 3 23:23:30.132529 kernel: mlx5_core af56:00:02.0: PTM is not supported by PCIe Sep 3 23:23:30.132680 kernel: mlx5_core af56:00:02.0: firmware version: 16.30.5006 Sep 3 23:23:30.299277 kernel: hv_netvsc 000d3af6-aee3-000d-3af6-aee3000d3af6 eth0: VF registering: eth1 Sep 3 23:23:30.306719 kernel: mlx5_core af56:00:02.0 eth1: joined to eth0 Sep 3 23:23:30.306880 kernel: mlx5_core af56:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 3 23:23:30.316255 kernel: mlx5_core af56:00:02.0 enP44886s1: renamed from eth1 Sep 3 23:23:30.616911 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 3 23:23:30.643818 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 3 23:23:30.659297 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 3 23:23:30.669560 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 3 23:23:30.674411 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 3 23:23:30.685560 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 3 23:23:30.699844 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 3 23:23:30.704598 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 3 23:23:30.714226 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 3 23:23:30.727346 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 3 23:23:30.734317 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 3 23:23:30.760342 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 3 23:23:30.775236 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#120 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 3 23:23:30.785246 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 3 23:23:31.798144 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#53 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 3 23:23:33.098230 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 3 23:23:33.098431 disk-uuid[669]: The operation has completed successfully. Sep 3 23:23:33.170673 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 3 23:23:33.170762 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 3 23:23:33.195792 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 3 23:23:33.216351 sh[827]: Success Sep 3 23:23:33.249044 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 3 23:23:33.249097 kernel: device-mapper: uevent: version 1.0.3 Sep 3 23:23:33.253818 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 3 23:23:33.262219 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 3 23:23:33.604017 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 3 23:23:33.612792 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 3 23:23:33.626779 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 3 23:23:33.652244 kernel: BTRFS: device fsid e8b97e78-d30f-4a41-b431-d82f3afef949 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (845) Sep 3 23:23:33.652271 kernel: BTRFS info (device dm-0): first mount of filesystem e8b97e78-d30f-4a41-b431-d82f3afef949 Sep 3 23:23:33.656069 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 3 23:23:34.156375 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 3 23:23:34.156440 kernel: BTRFS info (device dm-0): enabling free space tree Sep 3 23:23:34.191960 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 3 23:23:34.195894 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 3 23:23:34.202741 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 3 23:23:34.203430 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 3 23:23:34.227875 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 3 23:23:34.255229 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (868) Sep 3 23:23:34.264612 kernel: BTRFS info (device sda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:23:34.264645 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 3 23:23:34.313652 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 3 23:23:34.325324 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 3 23:23:34.343620 kernel: BTRFS info (device sda6): turning on async discard Sep 3 23:23:34.343635 kernel: BTRFS info (device sda6): enabling free space tree Sep 3 23:23:34.343641 kernel: BTRFS info (device sda6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:23:34.344753 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 3 23:23:34.353018 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 3 23:23:34.370312 systemd-networkd[1010]: lo: Link UP Sep 3 23:23:34.370317 systemd-networkd[1010]: lo: Gained carrier Sep 3 23:23:34.371008 systemd-networkd[1010]: Enumeration completed Sep 3 23:23:34.372591 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 3 23:23:34.375467 systemd-networkd[1010]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:23:34.375470 systemd-networkd[1010]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 3 23:23:34.379761 systemd[1]: Reached target network.target - Network. Sep 3 23:23:34.445212 kernel: mlx5_core af56:00:02.0 enP44886s1: Link up Sep 3 23:23:34.480229 kernel: hv_netvsc 000d3af6-aee3-000d-3af6-aee3000d3af6 eth0: Data path switched to VF: enP44886s1 Sep 3 23:23:34.480111 systemd-networkd[1010]: enP44886s1: Link UP Sep 3 23:23:34.480168 systemd-networkd[1010]: eth0: Link UP Sep 3 23:23:34.480254 systemd-networkd[1010]: eth0: Gained carrier Sep 3 23:23:34.480265 systemd-networkd[1010]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 3 23:23:34.488330 systemd-networkd[1010]: enP44886s1: Gained carrier Sep 3 23:23:34.515229 systemd-networkd[1010]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 3 23:23:35.577972 ignition[1015]: Ignition 2.21.0 Sep 3 23:23:35.577988 ignition[1015]: Stage: fetch-offline Sep 3 23:23:35.581680 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 3 23:23:35.578065 ignition[1015]: no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:35.589391 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 3 23:23:35.578071 ignition[1015]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:23:35.578158 ignition[1015]: parsed url from cmdline: "" Sep 3 23:23:35.578160 ignition[1015]: no config URL provided Sep 3 23:23:35.578164 ignition[1015]: reading system config file "/usr/lib/ignition/user.ign" Sep 3 23:23:35.578169 ignition[1015]: no config at "/usr/lib/ignition/user.ign" Sep 3 23:23:35.578173 ignition[1015]: failed to fetch config: resource requires networking Sep 3 23:23:35.578421 ignition[1015]: Ignition finished successfully Sep 3 23:23:35.624878 ignition[1025]: Ignition 2.21.0 Sep 3 23:23:35.624893 ignition[1025]: Stage: fetch Sep 3 23:23:35.625071 ignition[1025]: no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:35.627281 systemd-networkd[1010]: eth0: Gained IPv6LL Sep 3 23:23:35.625079 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:23:35.625171 ignition[1025]: parsed url from cmdline: "" Sep 3 23:23:35.625175 ignition[1025]: no config URL provided Sep 3 23:23:35.625179 ignition[1025]: reading system config file "/usr/lib/ignition/user.ign" Sep 3 23:23:35.625185 ignition[1025]: no config at "/usr/lib/ignition/user.ign" Sep 3 23:23:35.625215 ignition[1025]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 3 23:23:35.710471 ignition[1025]: GET result: OK Sep 3 23:23:35.710536 ignition[1025]: config has been read from IMDS userdata Sep 3 23:23:35.710559 ignition[1025]: parsing config with SHA512: 60bc6547472b29e45229aa5ef731bb46089e62f2e976328efc34f886fb45bf0226a666512a20099cf9fc87f0a70071f189da67d2f6b263dc449c825cbd079da1 Sep 3 23:23:35.715183 unknown[1025]: fetched base config from "system" Sep 3 23:23:35.715452 ignition[1025]: fetch: fetch complete Sep 3 23:23:35.715188 unknown[1025]: fetched base config from "system" Sep 3 23:23:35.715455 ignition[1025]: fetch: fetch passed Sep 3 23:23:35.715192 unknown[1025]: fetched user config from "azure" Sep 3 23:23:35.715486 ignition[1025]: Ignition finished successfully Sep 3 23:23:35.718844 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 3 23:23:35.725698 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 3 23:23:35.756017 ignition[1031]: Ignition 2.21.0 Sep 3 23:23:35.756032 ignition[1031]: Stage: kargs Sep 3 23:23:35.756163 ignition[1031]: no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:35.762249 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 3 23:23:35.756170 ignition[1031]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:23:35.770011 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 3 23:23:35.758528 ignition[1031]: kargs: kargs passed Sep 3 23:23:35.758765 ignition[1031]: Ignition finished successfully Sep 3 23:23:35.795750 ignition[1038]: Ignition 2.21.0 Sep 3 23:23:35.795760 ignition[1038]: Stage: disks Sep 3 23:23:35.800545 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 3 23:23:35.795939 ignition[1038]: no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:35.804612 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 3 23:23:35.795953 ignition[1038]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:23:35.812372 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 3 23:23:35.797119 ignition[1038]: disks: disks passed Sep 3 23:23:35.820089 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 3 23:23:35.797158 ignition[1038]: Ignition finished successfully Sep 3 23:23:35.827646 systemd[1]: Reached target sysinit.target - System Initialization. Sep 3 23:23:35.834926 systemd[1]: Reached target basic.target - Basic System. Sep 3 23:23:35.843422 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 3 23:23:35.914618 systemd-fsck[1046]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Sep 3 23:23:35.921348 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 3 23:23:35.926637 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 3 23:23:38.066216 kernel: EXT4-fs (sda9): mounted filesystem d953e3b7-a0cb-45f7-b3a7-216a9a578dda r/w with ordered data mode. Quota mode: none. Sep 3 23:23:38.066512 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 3 23:23:38.069990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 3 23:23:38.104420 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 3 23:23:38.128292 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 3 23:23:38.148381 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1060) Sep 3 23:23:38.148397 kernel: BTRFS info (device sda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:23:38.148404 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 3 23:23:38.144445 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 3 23:23:38.155826 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 3 23:23:38.171454 kernel: BTRFS info (device sda6): turning on async discard Sep 3 23:23:38.171470 kernel: BTRFS info (device sda6): enabling free space tree Sep 3 23:23:38.155852 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 3 23:23:38.171680 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 3 23:23:38.178927 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 3 23:23:38.185840 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 3 23:23:38.781712 coreos-metadata[1075]: Sep 03 23:23:38.781 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 3 23:23:38.787443 coreos-metadata[1075]: Sep 03 23:23:38.787 INFO Fetch successful Sep 3 23:23:38.787443 coreos-metadata[1075]: Sep 03 23:23:38.787 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 3 23:23:38.798617 coreos-metadata[1075]: Sep 03 23:23:38.798 INFO Fetch successful Sep 3 23:23:38.802450 coreos-metadata[1075]: Sep 03 23:23:38.799 INFO wrote hostname ci-4372.1.0-n-46801d0988 to /sysroot/etc/hostname Sep 3 23:23:38.803249 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 3 23:23:39.182883 initrd-setup-root[1091]: cut: /sysroot/etc/passwd: No such file or directory Sep 3 23:23:39.234589 initrd-setup-root[1098]: cut: /sysroot/etc/group: No such file or directory Sep 3 23:23:39.239624 initrd-setup-root[1105]: cut: /sysroot/etc/shadow: No such file or directory Sep 3 23:23:39.258661 initrd-setup-root[1112]: cut: /sysroot/etc/gshadow: No such file or directory Sep 3 23:23:40.351109 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 3 23:23:40.356591 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 3 23:23:40.376686 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 3 23:23:40.382113 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 3 23:23:40.395209 kernel: BTRFS info (device sda6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:23:40.411745 ignition[1179]: INFO : Ignition 2.21.0 Sep 3 23:23:40.411745 ignition[1179]: INFO : Stage: mount Sep 3 23:23:40.411745 ignition[1179]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:40.411745 ignition[1179]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:23:40.411745 ignition[1179]: INFO : mount: mount passed Sep 3 23:23:40.411745 ignition[1179]: INFO : Ignition finished successfully Sep 3 23:23:40.415410 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 3 23:23:40.421845 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 3 23:23:40.445282 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 3 23:23:40.458296 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 3 23:23:40.484225 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1191) Sep 3 23:23:40.492869 kernel: BTRFS info (device sda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:23:40.492887 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 3 23:23:40.500989 kernel: BTRFS info (device sda6): turning on async discard Sep 3 23:23:40.501017 kernel: BTRFS info (device sda6): enabling free space tree Sep 3 23:23:40.502310 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 3 23:23:40.524711 ignition[1209]: INFO : Ignition 2.21.0 Sep 3 23:23:40.524711 ignition[1209]: INFO : Stage: files Sep 3 23:23:40.524711 ignition[1209]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:40.524711 ignition[1209]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:23:40.540073 ignition[1209]: DEBUG : files: compiled without relabeling support, skipping Sep 3 23:23:40.568272 ignition[1209]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 3 23:23:40.573438 ignition[1209]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 3 23:23:40.637476 ignition[1209]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 3 23:23:40.642542 ignition[1209]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 3 23:23:40.642542 ignition[1209]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 3 23:23:40.638535 unknown[1209]: wrote ssh authorized keys file for user: core Sep 3 23:23:40.698718 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 3 23:23:40.705641 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 3 23:23:40.744540 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 3 23:23:40.829110 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 3 23:23:40.836739 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 3 23:23:40.836739 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 3 23:23:41.012069 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 3 23:23:41.078017 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 3 23:23:41.084566 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 3 23:23:41.084566 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 3 23:23:41.084566 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 3 23:23:41.084566 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 3 23:23:41.084566 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 3 23:23:41.084566 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 3 23:23:41.084566 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 3 23:23:41.084566 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 3 23:23:41.136671 
ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 3 23:23:41.136671 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 3 23:23:41.136671 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 3 23:23:41.136671 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 3 23:23:41.136671 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 3 23:23:41.136671 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 3 23:23:41.582699 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 3 23:23:41.819692 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 3 23:23:41.819692 ignition[1209]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 3 23:23:41.862796 ignition[1209]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 3 23:23:41.882764 ignition[1209]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 3 23:23:41.882764 ignition[1209]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 3 23:23:41.905246 ignition[1209]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 3 23:23:41.905246 ignition[1209]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 3 23:23:41.905246 ignition[1209]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 3 23:23:41.905246 ignition[1209]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 3 23:23:41.905246 ignition[1209]: INFO : files: files passed Sep 3 23:23:41.905246 ignition[1209]: INFO : Ignition finished successfully Sep 3 23:23:41.891432 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 3 23:23:41.897387 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 3 23:23:41.916590 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 3 23:23:41.934320 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 3 23:23:41.934395 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
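The Ignition files stage above downloads the helm and cilium-cli archives, writes SSH keys for the core user, links the kubernetes sysext into /etc/extensions, and enables prepare-helm.service. The node's actual Ignition config is not part of the log; the fragment below is only a sketch, expressed as a Python dict in the Ignition v3 style, of a config that would drive operations of that shape, with URLs and paths copied from the log entries:

    # Illustrative only: an Ignition-style config fragment matching the
    # file/link/unit operations logged above (not this node's real config).
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"}},
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"},
            ],
        },
        "systemd": {
            "units": [{"name": "prepare-helm.service", "enabled": True}],
        },
    }

    print(json.dumps(config, indent=2))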
Sep 3 23:23:41.965694 initrd-setup-root-after-ignition[1238]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 3 23:23:41.965694 initrd-setup-root-after-ignition[1238]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 3 23:23:41.977186 initrd-setup-root-after-ignition[1242]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 3 23:23:41.971541 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 3 23:23:41.981881 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 3 23:23:41.986447 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 3 23:23:42.025416 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 3 23:23:42.025517 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 3 23:23:42.033431 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 3 23:23:42.041285 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 3 23:23:42.048320 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 3 23:23:42.048978 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 3 23:23:42.077922 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 3 23:23:42.083579 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 3 23:23:42.101931 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 3 23:23:42.106257 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 3 23:23:42.114304 systemd[1]: Stopped target timers.target - Timer Units. Sep 3 23:23:42.121547 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 3 23:23:42.121621 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 3 23:23:42.132069 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 3 23:23:42.135970 systemd[1]: Stopped target basic.target - Basic System. Sep 3 23:23:42.143237 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 3 23:23:42.150835 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 3 23:23:42.157958 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 3 23:23:42.165852 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 3 23:23:42.173794 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 3 23:23:42.181482 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 3 23:23:42.189763 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 3 23:23:42.197355 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 3 23:23:42.205148 systemd[1]: Stopped target swap.target - Swaps. Sep 3 23:23:42.211715 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 3 23:23:42.211809 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 3 23:23:42.221667 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 3 23:23:42.225726 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 3 23:23:42.233534 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 3 23:23:42.236982 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 3 23:23:42.241722 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 3 23:23:42.241797 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 3 23:23:42.253313 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 3 23:23:42.253390 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 3 23:23:42.258234 systemd[1]: ignition-files.service: Deactivated successfully. Sep 3 23:23:42.258303 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 3 23:23:42.265191 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 3 23:23:42.317376 ignition[1262]: INFO : Ignition 2.21.0 Sep 3 23:23:42.317376 ignition[1262]: INFO : Stage: umount Sep 3 23:23:42.317376 ignition[1262]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:42.317376 ignition[1262]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 3 23:23:42.317376 ignition[1262]: INFO : umount: umount passed Sep 3 23:23:42.317376 ignition[1262]: INFO : Ignition finished successfully Sep 3 23:23:42.265266 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 3 23:23:42.275303 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 3 23:23:42.297793 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 3 23:23:42.308027 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 3 23:23:42.308137 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 3 23:23:42.316323 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 3 23:23:42.316399 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 3 23:23:42.326380 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 3 23:23:42.326442 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 3 23:23:42.332140 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 3 23:23:42.332249 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 3 23:23:42.340969 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 3 23:23:42.341006 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 3 23:23:42.349814 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 3 23:23:42.349848 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 3 23:23:42.356150 systemd[1]: Stopped target network.target - Network. Sep 3 23:23:42.362345 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 3 23:23:42.362380 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 3 23:23:42.371368 systemd[1]: Stopped target paths.target - Path Units. Sep 3 23:23:42.378314 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 3 23:23:42.385468 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 3 23:23:42.390183 systemd[1]: Stopped target slices.target - Slice Units. Sep 3 23:23:42.397591 systemd[1]: Stopped target sockets.target - Socket Units. Sep 3 23:23:42.404254 systemd[1]: iscsid.socket: Deactivated successfully. Sep 3 23:23:42.404294 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 3 23:23:42.412072 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Sep 3 23:23:42.412098 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 3 23:23:42.419014 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 3 23:23:42.419055 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 3 23:23:42.425905 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 3 23:23:42.425928 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 3 23:23:42.433128 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 3 23:23:42.439948 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 3 23:23:42.448019 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 3 23:23:42.448634 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 3 23:23:42.448706 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 3 23:23:42.456671 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 3 23:23:42.456755 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 3 23:23:42.479499 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 3 23:23:42.479754 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 3 23:23:42.479853 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 3 23:23:42.490048 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 3 23:23:42.490229 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 3 23:23:42.490315 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 3 23:23:42.501375 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 3 23:23:42.509754 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 3 23:23:42.662621 kernel: hv_netvsc 000d3af6-aee3-000d-3af6-aee3000d3af6 eth0: Data path switched from VF: enP44886s1 Sep 3 23:23:42.509788 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 3 23:23:42.516779 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 3 23:23:42.516819 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 3 23:23:42.528290 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 3 23:23:42.539395 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 3 23:23:42.539449 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 3 23:23:42.547564 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 3 23:23:42.547606 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:23:42.555266 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 3 23:23:42.555302 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 3 23:23:42.559716 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 3 23:23:42.559744 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 3 23:23:42.571807 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 3 23:23:42.580747 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 3 23:23:42.580797 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 3 23:23:42.591998 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 3 23:23:42.597067 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 3 23:23:42.608121 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 3 23:23:42.608149 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 3 23:23:42.616561 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 3 23:23:42.616588 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 3 23:23:42.624068 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 3 23:23:42.624106 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 3 23:23:42.636105 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 3 23:23:42.636143 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 3 23:23:42.652096 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 3 23:23:42.652141 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 3 23:23:42.666967 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 3 23:23:42.672999 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 3 23:23:42.673048 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 3 23:23:42.684927 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 3 23:23:42.684965 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 3 23:23:42.693142 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 3 23:23:42.693184 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 3 23:23:42.702414 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 3 23:23:42.702454 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 3 23:23:42.707358 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 3 23:23:42.707399 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:23:42.720174 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 3 23:23:42.720234 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 3 23:23:42.720260 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 3 23:23:42.720283 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 3 23:23:42.720510 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 3 23:23:42.720584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 3 23:23:42.767678 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 3 23:23:42.767961 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 3 23:23:42.775428 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 3 23:23:42.783391 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 3 23:23:42.824364 systemd[1]: Switching root. Sep 3 23:23:42.925065 systemd-journald[224]: Journal stopped Sep 3 23:23:52.754857 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). 
Sep 3 23:23:52.754876 kernel: SELinux: policy capability network_peer_controls=1 Sep 3 23:23:52.754885 kernel: SELinux: policy capability open_perms=1 Sep 3 23:23:52.754892 kernel: SELinux: policy capability extended_socket_class=1 Sep 3 23:23:52.754897 kernel: SELinux: policy capability always_check_network=0 Sep 3 23:23:52.754902 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 3 23:23:52.754908 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 3 23:23:52.754914 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 3 23:23:52.754919 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 3 23:23:52.754924 kernel: SELinux: policy capability userspace_initial_context=0 Sep 3 23:23:52.754931 systemd[1]: Successfully loaded SELinux policy in 139.458ms. Sep 3 23:23:52.754937 kernel: audit: type=1403 audit(1756941826.565:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 3 23:23:52.754943 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.042ms. Sep 3 23:23:52.754949 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 3 23:23:52.754956 systemd[1]: Detected virtualization microsoft. Sep 3 23:23:52.754963 systemd[1]: Detected architecture arm64. Sep 3 23:23:52.754968 systemd[1]: Detected first boot. Sep 3 23:23:52.754975 systemd[1]: Hostname set to . Sep 3 23:23:52.754980 systemd[1]: Initializing machine ID from random generator. Sep 3 23:23:52.754986 zram_generator::config[1305]: No configuration found. Sep 3 23:23:52.754993 kernel: NET: Registered PF_VSOCK protocol family Sep 3 23:23:52.754998 systemd[1]: Populated /etc with preset unit settings. Sep 3 23:23:52.755005 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 3 23:23:52.755012 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 3 23:23:52.755017 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 3 23:23:52.755023 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 3 23:23:52.755029 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 3 23:23:52.755035 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 3 23:23:52.755041 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 3 23:23:52.755048 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 3 23:23:52.755054 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 3 23:23:52.755060 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 3 23:23:52.755066 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 3 23:23:52.755072 systemd[1]: Created slice user.slice - User and Session Slice. Sep 3 23:23:52.755078 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 3 23:23:52.755084 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 3 23:23:52.755090 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Sep 3 23:23:52.755097 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 3 23:23:52.755103 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 3 23:23:52.755110 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 3 23:23:52.755117 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 3 23:23:52.755123 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 3 23:23:52.755129 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 3 23:23:52.755136 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 3 23:23:52.755142 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 3 23:23:52.755149 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 3 23:23:52.755155 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 3 23:23:52.755162 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 3 23:23:52.755168 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 3 23:23:52.755174 systemd[1]: Reached target slices.target - Slice Units. Sep 3 23:23:52.755180 systemd[1]: Reached target swap.target - Swaps. Sep 3 23:23:52.755186 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 3 23:23:52.755192 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 3 23:23:52.755959 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 3 23:23:52.755978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 3 23:23:52.755985 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 3 23:23:52.755994 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 3 23:23:52.756000 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 3 23:23:52.756010 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 3 23:23:52.756016 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 3 23:23:52.756022 systemd[1]: Mounting media.mount - External Media Directory... Sep 3 23:23:52.756029 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 3 23:23:52.756035 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 3 23:23:52.756041 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 3 23:23:52.756048 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 3 23:23:52.756054 systemd[1]: Reached target machines.target - Containers. Sep 3 23:23:52.756062 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 3 23:23:52.756068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 3 23:23:52.756074 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 3 23:23:52.756080 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 3 23:23:52.756086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 3 23:23:52.756092 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 3 23:23:52.756099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 3 23:23:52.756105 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 3 23:23:52.756112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 3 23:23:52.756118 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 3 23:23:52.756124 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 3 23:23:52.756131 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 3 23:23:52.756137 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 3 23:23:52.756143 systemd[1]: Stopped systemd-fsck-usr.service. Sep 3 23:23:52.756150 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 3 23:23:52.756156 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 3 23:23:52.756163 kernel: fuse: init (API version 7.41) Sep 3 23:23:52.756169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 3 23:23:52.756176 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 3 23:23:52.756181 kernel: loop: module loaded Sep 3 23:23:52.756187 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 3 23:23:52.756194 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 3 23:23:52.756216 kernel: ACPI: bus type drm_connector registered Sep 3 23:23:52.756222 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 3 23:23:52.756229 systemd[1]: verity-setup.service: Deactivated successfully. Sep 3 23:23:52.756235 systemd[1]: Stopped verity-setup.service. Sep 3 23:23:52.756263 systemd-journald[1385]: Collecting audit messages is disabled. Sep 3 23:23:52.756276 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 3 23:23:52.756284 systemd-journald[1385]: Journal started Sep 3 23:23:52.756298 systemd-journald[1385]: Runtime Journal (/run/log/journal/28301f97aafe41eba0158bce0a909171) is 8M, max 78.5M, 70.5M free. Sep 3 23:23:52.021385 systemd[1]: Queued start job for default target multi-user.target. Sep 3 23:23:52.028605 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 3 23:23:52.028946 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 3 23:23:52.029222 systemd[1]: systemd-journald.service: Consumed 2.147s CPU time. Sep 3 23:23:52.772486 systemd[1]: Started systemd-journald.service - Journal Service. Sep 3 23:23:52.773048 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 3 23:23:52.777215 systemd[1]: Mounted media.mount - External Media Directory. Sep 3 23:23:52.781189 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 3 23:23:52.785364 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 3 23:23:52.790333 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 3 23:23:52.794071 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Sep 3 23:23:52.798640 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 3 23:23:52.803781 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 3 23:23:52.805239 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 3 23:23:52.809924 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 3 23:23:52.810046 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 3 23:23:52.814385 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 3 23:23:52.814499 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 3 23:23:52.818745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 3 23:23:52.818854 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 3 23:23:52.823550 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 3 23:23:52.823669 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 3 23:23:52.827995 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 3 23:23:52.828112 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 3 23:23:52.832348 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 3 23:23:52.836948 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 3 23:23:52.843286 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 3 23:23:52.848237 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 3 23:23:52.858139 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 3 23:23:52.866806 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 3 23:23:52.872192 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 3 23:23:52.879634 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 3 23:23:52.886307 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 3 23:23:52.886333 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 3 23:23:52.891035 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 3 23:23:52.897097 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 3 23:23:52.901113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 3 23:23:52.916771 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 3 23:23:52.926675 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 3 23:23:52.931245 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 3 23:23:52.931904 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 3 23:23:52.936098 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 3 23:23:52.937359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:23:52.944923 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Sep 3 23:23:52.949983 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 3 23:23:52.957652 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 3 23:23:52.962575 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 3 23:23:52.969605 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 3 23:23:52.974507 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 3 23:23:52.980073 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 3 23:23:53.006218 kernel: loop0: detected capacity change from 0 to 211168 Sep 3 23:23:53.016761 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 3 23:23:53.018174 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 3 23:23:53.032339 systemd-journald[1385]: Time spent on flushing to /var/log/journal/28301f97aafe41eba0158bce0a909171 is 35.314ms for 948 entries. Sep 3 23:23:53.032339 systemd-journald[1385]: System Journal (/var/log/journal/28301f97aafe41eba0158bce0a909171) is 11.8M, max 2.6G, 2.6G free. Sep 3 23:23:53.110079 systemd-journald[1385]: Received client request to flush runtime journal. Sep 3 23:23:53.110122 systemd-journald[1385]: /var/log/journal/28301f97aafe41eba0158bce0a909171/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Sep 3 23:23:53.110139 systemd-journald[1385]: Rotating system journal. Sep 3 23:23:53.110154 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 3 23:23:53.061399 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:23:53.111397 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 3 23:23:53.130219 kernel: loop1: detected capacity change from 0 to 107312 Sep 3 23:23:53.194519 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. Sep 3 23:23:53.194535 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. Sep 3 23:23:53.213238 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 3 23:23:53.220399 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 3 23:23:53.646218 kernel: loop2: detected capacity change from 0 to 138376 Sep 3 23:23:53.902665 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 3 23:23:53.907802 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 3 23:23:53.924650 systemd-tmpfiles[1467]: ACLs are not supported, ignoring. Sep 3 23:23:53.924663 systemd-tmpfiles[1467]: ACLs are not supported, ignoring. Sep 3 23:23:53.927420 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 3 23:23:54.272221 kernel: loop3: detected capacity change from 0 to 28936 Sep 3 23:23:54.763948 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 3 23:23:54.769650 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 3 23:23:54.795722 systemd-udevd[1473]: Using default interface naming scheme 'v255'. 
Sep 3 23:23:54.843240 kernel: loop4: detected capacity change from 0 to 211168 Sep 3 23:23:54.857324 kernel: loop5: detected capacity change from 0 to 107312 Sep 3 23:23:54.867228 kernel: loop6: detected capacity change from 0 to 138376 Sep 3 23:23:54.881222 kernel: loop7: detected capacity change from 0 to 28936 Sep 3 23:23:54.888544 (sd-merge)[1474]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 3 23:23:54.888902 (sd-merge)[1474]: Merged extensions into '/usr'. Sep 3 23:23:54.892431 systemd[1]: Reload requested from client PID 1444 ('systemd-sysext') (unit systemd-sysext.service)... Sep 3 23:23:54.892528 systemd[1]: Reloading... Sep 3 23:23:54.943225 zram_generator::config[1499]: No configuration found. Sep 3 23:23:55.027940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:23:55.107165 systemd[1]: Reloading finished in 214 ms. Sep 3 23:23:55.139133 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 3 23:23:55.149232 systemd[1]: Starting ensure-sysext.service... Sep 3 23:23:55.154319 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 3 23:23:55.181793 systemd-tmpfiles[1556]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 3 23:23:55.181812 systemd-tmpfiles[1556]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 3 23:23:55.181986 systemd-tmpfiles[1556]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 3 23:23:55.182119 systemd-tmpfiles[1556]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 3 23:23:55.182543 systemd-tmpfiles[1556]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 3 23:23:55.182680 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Sep 3 23:23:55.182716 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Sep 3 23:23:55.193455 systemd[1]: Reload requested from client PID 1555 ('systemctl') (unit ensure-sysext.service)... Sep 3 23:23:55.193466 systemd[1]: Reloading... Sep 3 23:23:55.240227 zram_generator::config[1580]: No configuration found. Sep 3 23:23:55.248323 systemd-tmpfiles[1556]: Detected autofs mount point /boot during canonicalization of boot. Sep 3 23:23:55.248331 systemd-tmpfiles[1556]: Skipping /boot Sep 3 23:23:55.255977 systemd-tmpfiles[1556]: Detected autofs mount point /boot during canonicalization of boot. Sep 3 23:23:55.255987 systemd-tmpfiles[1556]: Skipping /boot Sep 3 23:23:55.313539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:23:55.375605 systemd[1]: Reloading finished in 181 ms. Sep 3 23:23:55.403633 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 3 23:23:55.412708 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 3 23:23:55.458753 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 3 23:23:55.469844 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
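The (sd-merge) entries above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-azure extension images onto /usr, followed by a daemon reload. A small sketch of how the merged set could be inspected afterwards, calling the systemd-sysext CLI (its output format varies across systemd versions):

    # Sketch: report which system extension images are currently merged,
    # corresponding to the sd-merge step logged above.
    import subprocess

    def sysext_status() -> str:
        return subprocess.run(
            ["systemd-sysext", "status"],
            check=True, capture_output=True, text=True,
        ).stdout

    if __name__ == "__main__":
        print(sysext_status())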
Sep 3 23:23:55.476736 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 3 23:23:55.481660 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 3 23:23:55.491605 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Sep 3 23:23:55.496133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 3 23:23:55.497220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 3 23:23:55.507623 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 3 23:23:55.513975 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 3 23:23:55.524374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 3 23:23:55.528426 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 3 23:23:55.528516 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 3 23:23:55.528625 systemd[1]: Reached target time-set.target - System Time Set. Sep 3 23:23:55.533587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 3 23:23:55.534029 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 3 23:23:55.538608 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 3 23:23:55.539263 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 3 23:23:55.544875 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 3 23:23:55.545008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 3 23:23:55.552083 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 3 23:23:55.552550 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 3 23:23:55.560867 systemd[1]: Finished ensure-sysext.service. Sep 3 23:23:55.567378 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 3 23:23:55.567483 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 3 23:23:55.568766 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 3 23:23:55.577317 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 3 23:23:55.616010 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 3 23:23:55.714952 systemd-resolved[1645]: Positive Trust Anchors: Sep 3 23:23:55.714966 systemd-resolved[1645]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 3 23:23:55.714986 systemd-resolved[1645]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 3 23:23:55.777659 systemd-resolved[1645]: Using system hostname 'ci-4372.1.0-n-46801d0988'. Sep 3 23:23:55.778800 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 3 23:23:55.783468 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 3 23:23:55.790907 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 3 23:23:55.802183 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 3 23:23:55.828669 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 3 23:23:55.842632 augenrules[1713]: No rules Sep 3 23:23:55.843972 systemd[1]: audit-rules.service: Deactivated successfully. Sep 3 23:23:55.844986 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 3 23:23:55.941464 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 3 23:23:55.968212 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#74 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 3 23:23:56.026220 kernel: mousedev: PS/2 mouse device common for all mice Sep 3 23:23:56.026281 kernel: hv_vmbus: registering driver hv_balloon Sep 3 23:23:56.027618 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Sep 3 23:23:56.050286 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:23:56.055217 kernel: hv_vmbus: registering driver hyperv_fb Sep 3 23:23:56.064485 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 3 23:23:56.064545 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 3 23:23:56.072565 systemd-networkd[1706]: lo: Link UP Sep 3 23:23:56.072785 systemd-networkd[1706]: lo: Gained carrier Sep 3 23:23:56.075092 systemd-networkd[1706]: Enumeration completed Sep 3 23:23:56.077276 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 3 23:23:56.081986 systemd-networkd[1706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:23:56.082816 systemd-networkd[1706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 3 23:23:56.087986 systemd[1]: Reached target network.target - Network. Sep 3 23:23:56.096687 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 3 23:23:56.099926 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 3 23:23:56.099984 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 3 23:23:56.109221 kernel: Console: switching to colour dummy device 80x25 Sep 3 23:23:56.112356 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Sep 3 23:23:56.114852 kernel: Console: switching to colour frame buffer device 128x48 Sep 3 23:23:56.145238 kernel: mlx5_core af56:00:02.0 enP44886s1: Link up Sep 3 23:23:56.147697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 3 23:23:56.148482 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:23:56.159430 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 3 23:23:56.167239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:23:56.173212 kernel: hv_netvsc 000d3af6-aee3-000d-3af6-aee3000d3af6 eth0: Data path switched to VF: enP44886s1 Sep 3 23:23:56.173864 systemd-networkd[1706]: enP44886s1: Link UP Sep 3 23:23:56.174057 systemd-networkd[1706]: eth0: Link UP Sep 3 23:23:56.174134 systemd-networkd[1706]: eth0: Gained carrier Sep 3 23:23:56.174252 systemd-networkd[1706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:23:56.177655 systemd-networkd[1706]: enP44886s1: Gained carrier Sep 3 23:23:56.183332 systemd-networkd[1706]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 3 23:23:56.211488 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 3 23:23:56.219881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 3 23:23:56.225106 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 3 23:23:56.272236 kernel: MACsec IEEE 802.1AE Sep 3 23:23:56.319537 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 3 23:23:57.321315 systemd-networkd[1706]: eth0: Gained IPv6LL Sep 3 23:23:57.323594 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 3 23:23:57.328771 systemd[1]: Reached target network-online.target - Network is Online. Sep 3 23:23:57.531454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:23:58.029071 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 3 23:23:58.033911 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 3 23:24:03.043011 ldconfig[1439]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 3 23:24:03.056282 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 3 23:24:03.063425 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 3 23:24:03.098424 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 3 23:24:03.103538 systemd[1]: Reached target sysinit.target - System Initialization. Sep 3 23:24:03.108007 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 3 23:24:03.112982 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 3 23:24:03.117927 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 3 23:24:03.122530 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
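The mlx5_core and hv_netvsc entries above show Azure accelerated networking coming up: the synthetic eth0 and the Mellanox VF enP44886s1 carry the same MAC, the data path switches to the VF once it links up, and systemd-networkd acquires 10.200.20.11/24 with gateway 10.200.20.1 from 168.63.129.16 on eth0. A small sketch for spotting such synthetic/VF interface pairs on a running Linux node, assuming only the standard /sys/class/net layout:

    # Sketch: group network interfaces by MAC address; with Azure accelerated
    # networking the hv_netvsc device and its VF appear under the same MAC.
    from collections import defaultdict
    from pathlib import Path

    def interfaces_by_mac() -> dict[str, list[str]]:
        pairs = defaultdict(list)
        for dev in sorted(Path("/sys/class/net").iterdir()):
            address = dev / "address"
            if address.exists():
                pairs[address.read_text().strip()].append(dev.name)
        return dict(pairs)

    if __name__ == "__main__":
        for mac, names in interfaces_by_mac().items():
            print(mac, names)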
Sep 3 23:24:03.127212 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 3 23:24:03.132247 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 3 23:24:03.132277 systemd[1]: Reached target paths.target - Path Units. Sep 3 23:24:03.136303 systemd[1]: Reached target timers.target - Timer Units. Sep 3 23:24:03.170503 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 3 23:24:03.175570 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 3 23:24:03.181044 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 3 23:24:03.186224 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 3 23:24:03.190895 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 3 23:24:03.196437 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 3 23:24:03.201768 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 3 23:24:03.206842 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 3 23:24:03.210969 systemd[1]: Reached target sockets.target - Socket Units. Sep 3 23:24:03.214541 systemd[1]: Reached target basic.target - Basic System. Sep 3 23:24:03.218608 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 3 23:24:03.218627 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 3 23:24:03.251729 systemd[1]: Starting chronyd.service - NTP client/server... Sep 3 23:24:03.262288 systemd[1]: Starting containerd.service - containerd container runtime... Sep 3 23:24:03.270314 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 3 23:24:03.279309 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 3 23:24:03.283871 (chronyd)[1835]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 3 23:24:03.285750 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 3 23:24:03.292295 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 3 23:24:03.307341 chronyd[1845]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 3 23:24:03.307455 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 3 23:24:03.311722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 3 23:24:03.312485 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 3 23:24:03.316385 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 3 23:24:03.318099 KVP[1847]: KVP starting; pid is:1847 Sep 3 23:24:03.319038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:03.324648 KVP[1847]: KVP LIC Version: 3.1 Sep 3 23:24:03.325223 kernel: hv_utils: KVP IC version 4.0 Sep 3 23:24:03.325992 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 3 23:24:03.331940 jq[1843]: false Sep 3 23:24:03.332385 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 3 23:24:03.343450 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 3 23:24:03.349861 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 3 23:24:03.356321 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 3 23:24:03.362291 chronyd[1845]: Timezone right/UTC failed leap second check, ignoring Sep 3 23:24:03.362423 chronyd[1845]: Loaded seccomp filter (level 2) Sep 3 23:24:03.363556 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 3 23:24:03.368254 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 3 23:24:03.368549 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 3 23:24:03.371012 systemd[1]: Starting update-engine.service - Update Engine... Sep 3 23:24:03.379938 extend-filesystems[1846]: Found /dev/sda6 Sep 3 23:24:03.385905 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 3 23:24:03.393522 jq[1864]: true Sep 3 23:24:03.395027 systemd[1]: Started chronyd.service - NTP client/server. Sep 3 23:24:03.401443 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 3 23:24:03.408493 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 3 23:24:03.410560 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 3 23:24:03.412239 systemd[1]: motdgen.service: Deactivated successfully. Sep 3 23:24:03.412387 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 3 23:24:03.417741 extend-filesystems[1846]: Found /dev/sda9 Sep 3 23:24:03.420752 extend-filesystems[1846]: Checking size of /dev/sda9 Sep 3 23:24:03.424848 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 3 23:24:03.424995 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 3 23:24:03.438014 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 3 23:24:03.453944 systemd-logind[1859]: New seat seat0. Sep 3 23:24:03.454650 (ntainerd)[1881]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 3 23:24:03.459710 update_engine[1861]: I20250903 23:24:03.458398 1861 main.cc:92] Flatcar Update Engine starting Sep 3 23:24:03.459675 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 3 23:24:03.459968 extend-filesystems[1846]: Old size kept for /dev/sda9 Sep 3 23:24:03.459830 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 3 23:24:03.468684 jq[1880]: true Sep 3 23:24:03.464275 systemd-logind[1859]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 3 23:24:03.468610 systemd[1]: Started systemd-logind.service - User Login Management. Sep 3 23:24:03.557997 tar[1876]: linux-arm64/LICENSE Sep 3 23:24:03.557997 tar[1876]: linux-arm64/helm Sep 3 23:24:03.615340 bash[1924]: Updated "/home/core/.ssh/authorized_keys" Sep 3 23:24:03.619543 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Sep 3 23:24:03.644228 sshd_keygen[1885]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 3 23:24:03.643315 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 3 23:24:03.660447 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 3 23:24:03.667507 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 3 23:24:03.673234 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 3 23:24:03.696611 systemd[1]: issuegen.service: Deactivated successfully. Sep 3 23:24:03.696980 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 3 23:24:03.707777 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 3 23:24:03.717439 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 3 23:24:03.764963 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 3 23:24:03.773995 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 3 23:24:03.783469 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 3 23:24:03.790535 systemd[1]: Reached target getty.target - Login Prompts. Sep 3 23:24:03.951220 dbus-daemon[1838]: [system] SELinux support is enabled Sep 3 23:24:03.951640 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 3 23:24:03.960193 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 3 23:24:03.961086 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 3 23:24:03.966502 update_engine[1861]: I20250903 23:24:03.966457 1861 update_check_scheduler.cc:74] Next update check in 6m49s Sep 3 23:24:03.968224 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 3 23:24:03.968242 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 3 23:24:03.976295 systemd[1]: Started update-engine.service - Update Engine. Sep 3 23:24:03.980975 dbus-daemon[1838]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 3 23:24:03.984698 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 3 23:24:04.020594 tar[1876]: linux-arm64/README.md Sep 3 23:24:04.030627 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
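The update_engine entry above schedules the next update check, and locksmithd has been started as the cluster reboot manager; on Flatcar both are typically governed by the /etc/flatcar/update.conf file that the Ignition files stage wrote earlier, whose contents are not shown in the log. The snippet below only sketches parsing such a key=value file; the GROUP and REBOOT_STRATEGY example values are assumptions, not values taken from this node:

    # Sketch: parse a Flatcar update.conf-style key=value file.
    # The example content below is hypothetical.
    def parse_update_conf(text: str) -> dict[str, str]:
        settings = {}
        for raw in text.splitlines():
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, value = line.split("=", 1)
            settings[key.strip()] = value.strip()
        return settings

    example = "GROUP=stable\nREBOOT_STRATEGY=reboot\n"  # hypothetical values
    print(parse_update_conf(example))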
Sep 3 23:24:04.057122 coreos-metadata[1837]: Sep 03 23:24:04.056 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 3 23:24:04.059725 coreos-metadata[1837]: Sep 03 23:24:04.059 INFO Fetch successful Sep 3 23:24:04.059877 coreos-metadata[1837]: Sep 03 23:24:04.059 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 3 23:24:04.063413 coreos-metadata[1837]: Sep 03 23:24:04.063 INFO Fetch successful Sep 3 23:24:04.063679 coreos-metadata[1837]: Sep 03 23:24:04.063 INFO Fetching http://168.63.129.16/machine/0d8d592c-d648-470c-a9ab-fdb34ffce42d/239bfccb%2D97e8%2D4d2e%2Dbbc7%2D2d4204c70c98.%5Fci%2D4372.1.0%2Dn%2D46801d0988?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 3 23:24:04.064742 coreos-metadata[1837]: Sep 03 23:24:04.064 INFO Fetch successful Sep 3 23:24:04.064968 coreos-metadata[1837]: Sep 03 23:24:04.064 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 3 23:24:04.072378 coreos-metadata[1837]: Sep 03 23:24:04.072 INFO Fetch successful Sep 3 23:24:04.193489 locksmithd[2003]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 3 23:24:04.235247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:04.361804 containerd[1881]: time="2025-09-03T23:24:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 3 23:24:04.362339 containerd[1881]: time="2025-09-03T23:24:04.362310584Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 3 23:24:04.367411 containerd[1881]: time="2025-09-03T23:24:04.367378560Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.304µs" Sep 3 23:24:04.367411 containerd[1881]: time="2025-09-03T23:24:04.367405608Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 3 23:24:04.367485 containerd[1881]: time="2025-09-03T23:24:04.367419568Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 3 23:24:04.367573 containerd[1881]: time="2025-09-03T23:24:04.367554536Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 3 23:24:04.367591 containerd[1881]: time="2025-09-03T23:24:04.367572800Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 3 23:24:04.367603 containerd[1881]: time="2025-09-03T23:24:04.367591608Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 3 23:24:04.367648 containerd[1881]: time="2025-09-03T23:24:04.367635008Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 3 23:24:04.367648 containerd[1881]: time="2025-09-03T23:24:04.367646256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 3 23:24:04.367824 containerd[1881]: time="2025-09-03T23:24:04.367807144Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 3 23:24:04.367839 containerd[1881]: time="2025-09-03T23:24:04.367823520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 3 23:24:04.367839 containerd[1881]: time="2025-09-03T23:24:04.367831800Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 3 23:24:04.367839 containerd[1881]: time="2025-09-03T23:24:04.367837816Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 3 23:24:04.367909 containerd[1881]: time="2025-09-03T23:24:04.367897672Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 3 23:24:04.368082 containerd[1881]: time="2025-09-03T23:24:04.368066824Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 3 23:24:04.368105 containerd[1881]: time="2025-09-03T23:24:04.368093880Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 3 23:24:04.368120 containerd[1881]: time="2025-09-03T23:24:04.368104080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 3 23:24:04.368134 containerd[1881]: time="2025-09-03T23:24:04.368127120Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 3 23:24:04.368502 containerd[1881]: time="2025-09-03T23:24:04.368452160Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 3 23:24:04.368548 containerd[1881]: time="2025-09-03T23:24:04.368521280Z" level=info msg="metadata content store policy set" policy=shared Sep 3 23:24:04.385914 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:24:04.395705 containerd[1881]: time="2025-09-03T23:24:04.395677296Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 3 23:24:04.395758 containerd[1881]: time="2025-09-03T23:24:04.395719304Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 3 23:24:04.395758 containerd[1881]: time="2025-09-03T23:24:04.395730272Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 3 23:24:04.395758 containerd[1881]: time="2025-09-03T23:24:04.395739256Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 3 23:24:04.395758 containerd[1881]: time="2025-09-03T23:24:04.395748272Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 3 23:24:04.395758 containerd[1881]: time="2025-09-03T23:24:04.395757984Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 3 23:24:04.395862 containerd[1881]: time="2025-09-03T23:24:04.395768520Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 3 23:24:04.395862 containerd[1881]: time="2025-09-03T23:24:04.395776616Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 3 23:24:04.395862 containerd[1881]: time="2025-09-03T23:24:04.395784672Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 3 23:24:04.395862 containerd[1881]: time="2025-09-03T23:24:04.395791912Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 3 23:24:04.395862 containerd[1881]: time="2025-09-03T23:24:04.395797872Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 3 23:24:04.395862 containerd[1881]: time="2025-09-03T23:24:04.395806816Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 3 23:24:04.395931 containerd[1881]: time="2025-09-03T23:24:04.395908176Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 3 23:24:04.395931 containerd[1881]: time="2025-09-03T23:24:04.395924240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 3 23:24:04.395956 containerd[1881]: time="2025-09-03T23:24:04.395938936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 3 23:24:04.395956 containerd[1881]: time="2025-09-03T23:24:04.395946984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 3 23:24:04.395956 containerd[1881]: time="2025-09-03T23:24:04.395953912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 3 23:24:04.395989 containerd[1881]: time="2025-09-03T23:24:04.395961680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 3 23:24:04.395989 containerd[1881]: time="2025-09-03T23:24:04.395969680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 3 23:24:04.395989 containerd[1881]: time="2025-09-03T23:24:04.395977488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 3 23:24:04.395989 containerd[1881]: time="2025-09-03T23:24:04.395985232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 3 23:24:04.396039 containerd[1881]: time="2025-09-03T23:24:04.395992336Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 3 23:24:04.396039 containerd[1881]: time="2025-09-03T23:24:04.396000024Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 3 23:24:04.396062 containerd[1881]: time="2025-09-03T23:24:04.396050760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 3 23:24:04.396075 containerd[1881]: time="2025-09-03T23:24:04.396066720Z" level=info msg="Start snapshots syncer" Sep 3 23:24:04.396087 containerd[1881]: time="2025-09-03T23:24:04.396083496Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 3 23:24:04.396278 containerd[1881]: time="2025-09-03T23:24:04.396252464Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 3 23:24:04.396375 containerd[1881]: time="2025-09-03T23:24:04.396289448Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 3 23:24:04.396375 containerd[1881]: time="2025-09-03T23:24:04.396347712Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 3 23:24:04.396455 containerd[1881]: time="2025-09-03T23:24:04.396439816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 3 23:24:04.396477 containerd[1881]: time="2025-09-03T23:24:04.396459320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 3 23:24:04.396477 containerd[1881]: time="2025-09-03T23:24:04.396466552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 3 23:24:04.396477 containerd[1881]: time="2025-09-03T23:24:04.396473864Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 3 23:24:04.396517 containerd[1881]: time="2025-09-03T23:24:04.396482176Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 3 23:24:04.396517 containerd[1881]: time="2025-09-03T23:24:04.396494768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 3 23:24:04.396517 containerd[1881]: time="2025-09-03T23:24:04.396502200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 3 23:24:04.396555 containerd[1881]: time="2025-09-03T23:24:04.396523136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 3 23:24:04.396555 containerd[1881]: 
time="2025-09-03T23:24:04.396531672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 3 23:24:04.396555 containerd[1881]: time="2025-09-03T23:24:04.396538992Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 3 23:24:04.396591 containerd[1881]: time="2025-09-03T23:24:04.396567632Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 3 23:24:04.396591 containerd[1881]: time="2025-09-03T23:24:04.396577568Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 3 23:24:04.396591 containerd[1881]: time="2025-09-03T23:24:04.396583304Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 3 23:24:04.396591 containerd[1881]: time="2025-09-03T23:24:04.396589576Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 3 23:24:04.396641 containerd[1881]: time="2025-09-03T23:24:04.396594936Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 3 23:24:04.396641 containerd[1881]: time="2025-09-03T23:24:04.396605544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 3 23:24:04.396641 containerd[1881]: time="2025-09-03T23:24:04.396612960Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 3 23:24:04.396641 containerd[1881]: time="2025-09-03T23:24:04.396624448Z" level=info msg="runtime interface created" Sep 3 23:24:04.396641 containerd[1881]: time="2025-09-03T23:24:04.396628080Z" level=info msg="created NRI interface" Sep 3 23:24:04.396641 containerd[1881]: time="2025-09-03T23:24:04.396634632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 3 23:24:04.396709 containerd[1881]: time="2025-09-03T23:24:04.396642808Z" level=info msg="Connect containerd service" Sep 3 23:24:04.396709 containerd[1881]: time="2025-09-03T23:24:04.396661928Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 3 23:24:04.398290 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 3 23:24:04.398626 containerd[1881]: time="2025-09-03T23:24:04.398554504Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 3 23:24:04.403624 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 3 23:24:04.711207 kubelet[2022]: E0903 23:24:04.711144 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:24:04.713348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:24:04.713457 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 3 23:24:04.715242 systemd[1]: kubelet.service: Consumed 539ms CPU time, 258M memory peak. Sep 3 23:24:05.250642 containerd[1881]: time="2025-09-03T23:24:05.250490616Z" level=info msg="Start subscribing containerd event" Sep 3 23:24:05.250642 containerd[1881]: time="2025-09-03T23:24:05.250554352Z" level=info msg="Start recovering state" Sep 3 23:24:05.250793 containerd[1881]: time="2025-09-03T23:24:05.250663016Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 3 23:24:05.250852 containerd[1881]: time="2025-09-03T23:24:05.250836624Z" level=info msg="Start event monitor" Sep 3 23:24:05.250912 containerd[1881]: time="2025-09-03T23:24:05.250899976Z" level=info msg="Start cni network conf syncer for default" Sep 3 23:24:05.250952 containerd[1881]: time="2025-09-03T23:24:05.250940832Z" level=info msg="Start streaming server" Sep 3 23:24:05.251004 containerd[1881]: time="2025-09-03T23:24:05.250852760Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 3 23:24:05.251025 containerd[1881]: time="2025-09-03T23:24:05.250989904Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 3 23:24:05.251025 containerd[1881]: time="2025-09-03T23:24:05.251012144Z" level=info msg="runtime interface starting up..." Sep 3 23:24:05.251025 containerd[1881]: time="2025-09-03T23:24:05.251016536Z" level=info msg="starting plugins..." Sep 3 23:24:05.251068 containerd[1881]: time="2025-09-03T23:24:05.251040248Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 3 23:24:05.255556 containerd[1881]: time="2025-09-03T23:24:05.251143240Z" level=info msg="containerd successfully booted in 0.890534s" Sep 3 23:24:05.251269 systemd[1]: Started containerd.service - containerd container runtime. Sep 3 23:24:05.258047 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 3 23:24:05.263599 systemd[1]: Startup finished in 1.580s (kernel) + 17.954s (initrd) + 18.837s (userspace) = 38.373s. 
Sep 3 23:24:05.832912 waagent[1996]: 2025-09-03T23:24:05.831926Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 3 23:24:05.836133 waagent[1996]: 2025-09-03T23:24:05.836092Z INFO Daemon Daemon OS: flatcar 4372.1.0 Sep 3 23:24:05.839345 waagent[1996]: 2025-09-03T23:24:05.839315Z INFO Daemon Daemon Python: 3.11.12 Sep 3 23:24:05.842343 waagent[1996]: 2025-09-03T23:24:05.842298Z INFO Daemon Daemon Run daemon Sep 3 23:24:05.847057 waagent[1996]: 2025-09-03T23:24:05.845034Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.1.0' Sep 3 23:24:05.851269 waagent[1996]: 2025-09-03T23:24:05.851227Z INFO Daemon Daemon Using waagent for provisioning Sep 3 23:24:05.855138 waagent[1996]: 2025-09-03T23:24:05.855104Z INFO Daemon Daemon Activate resource disk Sep 3 23:24:05.858222 waagent[1996]: 2025-09-03T23:24:05.858188Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 3 23:24:05.865568 waagent[1996]: 2025-09-03T23:24:05.865534Z INFO Daemon Daemon Found device: None Sep 3 23:24:05.868473 waagent[1996]: 2025-09-03T23:24:05.868445Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 3 23:24:05.874026 waagent[1996]: 2025-09-03T23:24:05.873999Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 3 23:24:05.881673 waagent[1996]: 2025-09-03T23:24:05.881636Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 3 23:24:05.885475 waagent[1996]: 2025-09-03T23:24:05.885448Z INFO Daemon Daemon Running default provisioning handler Sep 3 23:24:05.891757 login[1999]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:05.898843 waagent[1996]: 2025-09-03T23:24:05.897271Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 3 23:24:05.898057 login[2000]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:05.907836 waagent[1996]: 2025-09-03T23:24:05.907797Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 3 23:24:05.909407 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 3 23:24:05.913271 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 3 23:24:05.915485 waagent[1996]: 2025-09-03T23:24:05.914409Z INFO Daemon Daemon cloud-init is enabled: False Sep 3 23:24:05.918276 waagent[1996]: 2025-09-03T23:24:05.918231Z INFO Daemon Daemon Copying ovf-env.xml Sep 3 23:24:05.921156 systemd-logind[1859]: New session 1 of user core. Sep 3 23:24:05.924164 systemd-logind[1859]: New session 2 of user core. Sep 3 23:24:05.960436 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 3 23:24:05.963403 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 3 23:24:05.987755 (systemd)[2059]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 3 23:24:05.989780 systemd-logind[1859]: New session c1 of user core. Sep 3 23:24:06.026367 waagent[1996]: 2025-09-03T23:24:06.026321Z INFO Daemon Daemon Successfully mounted dvd Sep 3 23:24:06.066827 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Sep 3 23:24:06.070977 waagent[1996]: 2025-09-03T23:24:06.069864Z INFO Daemon Daemon Detect protocol endpoint Sep 3 23:24:06.073379 waagent[1996]: 2025-09-03T23:24:06.073342Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 3 23:24:06.077271 waagent[1996]: 2025-09-03T23:24:06.077240Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 3 23:24:06.081562 waagent[1996]: 2025-09-03T23:24:06.081537Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 3 23:24:06.085013 waagent[1996]: 2025-09-03T23:24:06.084948Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 3 23:24:06.088215 waagent[1996]: 2025-09-03T23:24:06.088183Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 3 23:24:06.131630 waagent[1996]: 2025-09-03T23:24:06.131590Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 3 23:24:06.136626 waagent[1996]: 2025-09-03T23:24:06.136606Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 3 23:24:06.140705 waagent[1996]: 2025-09-03T23:24:06.140679Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 3 23:24:06.247207 waagent[1996]: 2025-09-03T23:24:06.247070Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 3 23:24:06.251416 waagent[1996]: 2025-09-03T23:24:06.251384Z INFO Daemon Daemon Forcing an update of the goal state. Sep 3 23:24:06.257645 waagent[1996]: 2025-09-03T23:24:06.257612Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 3 23:24:06.304508 systemd[2059]: Queued start job for default target default.target. Sep 3 23:24:06.307707 waagent[1996]: 2025-09-03T23:24:06.307671Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 3 23:24:06.311695 waagent[1996]: 2025-09-03T23:24:06.311663Z INFO Daemon Sep 3 23:24:06.312887 systemd[2059]: Created slice app.slice - User Application Slice. Sep 3 23:24:06.312908 systemd[2059]: Reached target paths.target - Paths. Sep 3 23:24:06.313012 systemd[2059]: Reached target timers.target - Timers. Sep 3 23:24:06.313921 waagent[1996]: 2025-09-03T23:24:06.313880Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0f9b1663-cee0-4cbc-be8d-2286a78fb8dd eTag: 7938807094077918884 source: Fabric] Sep 3 23:24:06.314242 systemd[2059]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 3 23:24:06.321394 waagent[1996]: 2025-09-03T23:24:06.321360Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 3 23:24:06.322604 systemd[2059]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 3 23:24:06.322732 systemd[2059]: Reached target sockets.target - Sockets. Sep 3 23:24:06.322825 systemd[2059]: Reached target basic.target - Basic System. Sep 3 23:24:06.322890 systemd[2059]: Reached target default.target - Main User Target. Sep 3 23:24:06.322910 systemd[2059]: Startup finished in 328ms. Sep 3 23:24:06.323113 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 3 23:24:06.324462 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 3 23:24:06.324999 systemd[1]: Started session-2.scope - Session 2 of User core. 
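Both well-known Azure endpoints appear in this log: the wireserver at 168.63.129.16 that waagent probes above, and the instance-metadata service at 169.254.169.254 that coreos-metadata queried earlier. As a sketch of the kind of request involved, using only a URL already shown in this log (the Metadata: true header is required by the 169.254.169.254 endpoint; the example VM size is an assumption):

    import urllib.request

    # Instance metadata URL fetched earlier by coreos-metadata (see above).
    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # plain-text VM size, e.g. Standard_D2ps_v5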
Sep 3 23:24:06.327971 waagent[1996]: 2025-09-03T23:24:06.327015Z INFO Daemon Sep 3 23:24:06.329391 waagent[1996]: 2025-09-03T23:24:06.329349Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 3 23:24:06.339864 waagent[1996]: 2025-09-03T23:24:06.339794Z INFO Daemon Daemon Downloading artifacts profile blob Sep 3 23:24:06.681612 waagent[1996]: 2025-09-03T23:24:06.681561Z INFO Daemon Downloaded certificate {'thumbprint': '469DC3E609D870F695B6AA1CDA91E82A1B56451E', 'hasPrivateKey': True} Sep 3 23:24:06.687970 waagent[1996]: 2025-09-03T23:24:06.687937Z INFO Daemon Fetch goal state completed Sep 3 23:24:06.695682 waagent[1996]: 2025-09-03T23:24:06.695651Z INFO Daemon Daemon Starting provisioning Sep 3 23:24:06.699016 waagent[1996]: 2025-09-03T23:24:06.698987Z INFO Daemon Daemon Handle ovf-env.xml. Sep 3 23:24:06.702327 waagent[1996]: 2025-09-03T23:24:06.702303Z INFO Daemon Daemon Set hostname [ci-4372.1.0-n-46801d0988] Sep 3 23:24:06.736046 waagent[1996]: 2025-09-03T23:24:06.736009Z INFO Daemon Daemon Publish hostname [ci-4372.1.0-n-46801d0988] Sep 3 23:24:06.740353 waagent[1996]: 2025-09-03T23:24:06.740319Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 3 23:24:06.744407 waagent[1996]: 2025-09-03T23:24:06.744377Z INFO Daemon Daemon Primary interface is [eth0] Sep 3 23:24:06.753105 systemd-networkd[1706]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:24:06.753116 systemd-networkd[1706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 3 23:24:06.753141 systemd-networkd[1706]: eth0: DHCP lease lost Sep 3 23:24:06.753895 waagent[1996]: 2025-09-03T23:24:06.753855Z INFO Daemon Daemon Create user account if not exists Sep 3 23:24:06.757659 waagent[1996]: 2025-09-03T23:24:06.757627Z INFO Daemon Daemon User core already exists, skip useradd Sep 3 23:24:06.761361 waagent[1996]: 2025-09-03T23:24:06.761333Z INFO Daemon Daemon Configure sudoer Sep 3 23:24:06.768328 waagent[1996]: 2025-09-03T23:24:06.768287Z INFO Daemon Daemon Configure sshd Sep 3 23:24:06.774894 waagent[1996]: 2025-09-03T23:24:06.774851Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 3 23:24:06.782946 waagent[1996]: 2025-09-03T23:24:06.782915Z INFO Daemon Daemon Deploy ssh public key. Sep 3 23:24:06.787728 systemd-networkd[1706]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 3 23:24:07.922220 waagent[1996]: 2025-09-03T23:24:07.920628Z INFO Daemon Daemon Provisioning complete Sep 3 23:24:07.932155 waagent[1996]: 2025-09-03T23:24:07.932120Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 3 23:24:07.936472 waagent[1996]: 2025-09-03T23:24:07.936439Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Sep 3 23:24:07.942891 waagent[1996]: 2025-09-03T23:24:07.942865Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 3 23:24:08.038554 waagent[2103]: 2025-09-03T23:24:08.038498Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 3 23:24:08.038764 waagent[2103]: 2025-09-03T23:24:08.038596Z INFO ExtHandler ExtHandler OS: flatcar 4372.1.0 Sep 3 23:24:08.038764 waagent[2103]: 2025-09-03T23:24:08.038633Z INFO ExtHandler ExtHandler Python: 3.11.12 Sep 3 23:24:08.038764 waagent[2103]: 2025-09-03T23:24:08.038665Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 3 23:24:08.154283 waagent[2103]: 2025-09-03T23:24:08.154232Z INFO ExtHandler ExtHandler Distro: flatcar-4372.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 3 23:24:08.154413 waagent[2103]: 2025-09-03T23:24:08.154386Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 3 23:24:08.154446 waagent[2103]: 2025-09-03T23:24:08.154435Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 3 23:24:08.159378 waagent[2103]: 2025-09-03T23:24:08.159332Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 3 23:24:08.163329 waagent[2103]: 2025-09-03T23:24:08.163299Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 3 23:24:08.163670 waagent[2103]: 2025-09-03T23:24:08.163638Z INFO ExtHandler Sep 3 23:24:08.163719 waagent[2103]: 2025-09-03T23:24:08.163703Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 85dad22b-3e88-49dc-be4d-6d8b6b93d58a eTag: 7938807094077918884 source: Fabric] Sep 3 23:24:08.163932 waagent[2103]: 2025-09-03T23:24:08.163907Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 3 23:24:08.164341 waagent[2103]: 2025-09-03T23:24:08.164310Z INFO ExtHandler Sep 3 23:24:08.164379 waagent[2103]: 2025-09-03T23:24:08.164363Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 3 23:24:08.167110 waagent[2103]: 2025-09-03T23:24:08.167083Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 3 23:24:08.217618 waagent[2103]: 2025-09-03T23:24:08.217525Z INFO ExtHandler Downloaded certificate {'thumbprint': '469DC3E609D870F695B6AA1CDA91E82A1B56451E', 'hasPrivateKey': True} Sep 3 23:24:08.217918 waagent[2103]: 2025-09-03T23:24:08.217882Z INFO ExtHandler Fetch goal state completed Sep 3 23:24:08.227948 waagent[2103]: 2025-09-03T23:24:08.227903Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Sep 3 23:24:08.231163 waagent[2103]: 2025-09-03T23:24:08.231118Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2103 Sep 3 23:24:08.231286 waagent[2103]: 2025-09-03T23:24:08.231258Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 3 23:24:08.231524 waagent[2103]: 2025-09-03T23:24:08.231495Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 3 23:24:08.232583 waagent[2103]: 2025-09-03T23:24:08.232548Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 3 23:24:08.232897 waagent[2103]: 2025-09-03T23:24:08.232868Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 3 23:24:08.233007 waagent[2103]: 2025-09-03T23:24:08.232984Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 3 23:24:08.233437 waagent[2103]: 2025-09-03T23:24:08.233406Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 3 23:24:08.309622 waagent[2103]: 2025-09-03T23:24:08.309590Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 3 23:24:08.309760 waagent[2103]: 2025-09-03T23:24:08.309733Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 3 23:24:08.313897 waagent[2103]: 2025-09-03T23:24:08.313870Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 3 23:24:08.318636 systemd[1]: Reload requested from client PID 2118 ('systemctl') (unit waagent.service)... Sep 3 23:24:08.318843 systemd[1]: Reloading... Sep 3 23:24:08.389227 zram_generator::config[2152]: No configuration found. Sep 3 23:24:08.463901 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:24:08.545701 systemd[1]: Reloading finished in 226 ms. Sep 3 23:24:08.566485 waagent[2103]: 2025-09-03T23:24:08.566415Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 3 23:24:08.566564 waagent[2103]: 2025-09-03T23:24:08.566544Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 3 23:24:09.511414 waagent[2103]: 2025-09-03T23:24:09.511344Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Sep 3 23:24:09.511715 waagent[2103]: 2025-09-03T23:24:09.511636Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 3 23:24:09.512313 waagent[2103]: 2025-09-03T23:24:09.512254Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 3 23:24:09.512537 waagent[2103]: 2025-09-03T23:24:09.512362Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 3 23:24:09.512716 waagent[2103]: 2025-09-03T23:24:09.512677Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 3 23:24:09.512768 waagent[2103]: 2025-09-03T23:24:09.512734Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 3 23:24:09.513058 waagent[2103]: 2025-09-03T23:24:09.513027Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 3 23:24:09.513176 waagent[2103]: 2025-09-03T23:24:09.513101Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 3 23:24:09.513176 waagent[2103]: 2025-09-03T23:24:09.513135Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 3 23:24:09.513409 waagent[2103]: 2025-09-03T23:24:09.513384Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 3 23:24:09.513452 waagent[2103]: 2025-09-03T23:24:09.513301Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 3 23:24:09.513785 waagent[2103]: 2025-09-03T23:24:09.513753Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 3 23:24:09.513785 waagent[2103]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 3 23:24:09.513785 waagent[2103]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 3 23:24:09.513785 waagent[2103]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 3 23:24:09.513785 waagent[2103]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 3 23:24:09.513785 waagent[2103]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 3 23:24:09.513785 waagent[2103]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 3 23:24:09.514079 waagent[2103]: 2025-09-03T23:24:09.514049Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 3 23:24:09.514150 waagent[2103]: 2025-09-03T23:24:09.514120Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 3 23:24:09.514282 waagent[2103]: 2025-09-03T23:24:09.514242Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
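The routing table dump above is the raw /proc/net/route format, where addresses and masks are little-endian hexadecimal. A small decoding sketch (not waagent code) shows the entries correspond to the local 10.200.20.0/24 subnet, its gateway, and the two Azure service hosts:

    import socket
    import struct

    def decode(hex_addr: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian hexadecimal.
        return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

    for field in ("0014C80A", "0114C80A", "10813FA8", "FEA9FEA9"):
        print(field, "->", decode(field))
    # 0014C80A -> 10.200.20.0     (local subnet)
    # 0114C80A -> 10.200.20.1     (default gateway)
    # 10813FA8 -> 168.63.129.16   (Azure wireserver)
    # FEA9FEA9 -> 169.254.169.254 (instance metadata)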
Sep 3 23:24:09.514458 waagent[2103]: 2025-09-03T23:24:09.514336Z INFO EnvHandler ExtHandler Configure routes Sep 3 23:24:09.514521 waagent[2103]: 2025-09-03T23:24:09.514498Z INFO EnvHandler ExtHandler Gateway:None Sep 3 23:24:09.514908 waagent[2103]: 2025-09-03T23:24:09.514879Z INFO EnvHandler ExtHandler Routes:None Sep 3 23:24:09.519530 waagent[2103]: 2025-09-03T23:24:09.519487Z INFO ExtHandler ExtHandler Sep 3 23:24:09.519843 waagent[2103]: 2025-09-03T23:24:09.519814Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bafa57c3-8902-44e1-b0f9-68dd5dab0cad correlation e2d04fc5-0bd5-4e7d-9946-755afc1db807 created: 2025-09-03T23:22:42.515137Z] Sep 3 23:24:09.520273 waagent[2103]: 2025-09-03T23:24:09.520241Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 3 23:24:09.520730 waagent[2103]: 2025-09-03T23:24:09.520701Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 3 23:24:09.548480 waagent[2103]: 2025-09-03T23:24:09.548436Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 3 23:24:09.548480 waagent[2103]: Try `iptables -h' or 'iptables --help' for more information.) Sep 3 23:24:09.548767 waagent[2103]: 2025-09-03T23:24:09.548730Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 7FCE6AA2-81E0-4BA3-8998-C58D67FDF843;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 3 23:24:09.622572 waagent[2103]: 2025-09-03T23:24:09.622525Z INFO MonitorHandler ExtHandler Network interfaces: Sep 3 23:24:09.622572 waagent[2103]: Executing ['ip', '-a', '-o', 'link']: Sep 3 23:24:09.622572 waagent[2103]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 3 23:24:09.622572 waagent[2103]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:ae:e3 brd ff:ff:ff:ff:ff:ff Sep 3 23:24:09.622572 waagent[2103]: 3: enP44886s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:ae:e3 brd ff:ff:ff:ff:ff:ff\ altname enP44886p0s2 Sep 3 23:24:09.622572 waagent[2103]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 3 23:24:09.622572 waagent[2103]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 3 23:24:09.622572 waagent[2103]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 3 23:24:09.622572 waagent[2103]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 3 23:24:09.622572 waagent[2103]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 3 23:24:09.622572 waagent[2103]: 2: eth0 inet6 fe80::20d:3aff:fef6:aee3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 3 23:24:09.664244 waagent[2103]: 2025-09-03T23:24:09.664192Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 3 23:24:09.664244 waagent[2103]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 3 23:24:09.664244 waagent[2103]: pkts bytes target prot opt in out source destination Sep 3 23:24:09.664244 waagent[2103]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 3 
23:24:09.664244 waagent[2103]: pkts bytes target prot opt in out source destination Sep 3 23:24:09.664244 waagent[2103]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 3 23:24:09.664244 waagent[2103]: pkts bytes target prot opt in out source destination Sep 3 23:24:09.664244 waagent[2103]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 3 23:24:09.664244 waagent[2103]: 9 816 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 3 23:24:09.664244 waagent[2103]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 3 23:24:09.666688 waagent[2103]: 2025-09-03T23:24:09.666650Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 3 23:24:09.666688 waagent[2103]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 3 23:24:09.666688 waagent[2103]: pkts bytes target prot opt in out source destination Sep 3 23:24:09.666688 waagent[2103]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 3 23:24:09.666688 waagent[2103]: pkts bytes target prot opt in out source destination Sep 3 23:24:09.666688 waagent[2103]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 3 23:24:09.666688 waagent[2103]: pkts bytes target prot opt in out source destination Sep 3 23:24:09.666688 waagent[2103]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 3 23:24:09.666688 waagent[2103]: 14 1463 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 3 23:24:09.666688 waagent[2103]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 3 23:24:09.666863 waagent[2103]: 2025-09-03T23:24:09.666839Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 3 23:24:14.859647 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 3 23:24:14.860974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:14.956178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:14.959001 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:24:15.098394 kubelet[2251]: E0903 23:24:15.098324 2251 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:24:15.100924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:24:15.101033 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:24:15.101484 systemd[1]: kubelet.service: Consumed 106ms CPU time, 105.5M memory peak. Sep 3 23:24:25.109666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 3 23:24:25.111010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:25.202054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
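The two rule dumps above show the OUTPUT chain of the iptables security table just after creation and again a moment later (the packet counters on the UID-0 rule have already advanced): DNS to port 53 and root-owned connections to the wireserver at 168.63.129.16 are accepted, and any other new connection to that address is dropped. A hedged sketch of commands that would create equivalent rules, not necessarily the exact invocations waagent uses, wrapped in Python for consistency with the other examples:

    import subprocess

    WIRESERVER = "168.63.129.16"
    rules = [
        ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(["iptables", "-w", "-t", "security", "-A", "OUTPUT", *rule], check=True)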
Sep 3 23:24:25.204384 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:24:25.293615 kubelet[2266]: E0903 23:24:25.293573 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:24:25.295941 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:24:25.296134 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:24:25.296615 systemd[1]: kubelet.service: Consumed 103ms CPU time, 107.1M memory peak. Sep 3 23:24:27.165143 chronyd[1845]: Selected source PHC0 Sep 3 23:24:29.495237 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 3 23:24:29.496834 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:51916.service - OpenSSH per-connection server daemon (10.200.16.10:51916). Sep 3 23:24:30.168050 sshd[2273]: Accepted publickey for core from 10.200.16.10 port 51916 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:24:30.169087 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:30.173247 systemd-logind[1859]: New session 3 of user core. Sep 3 23:24:30.177293 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 3 23:24:30.606501 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:37218.service - OpenSSH per-connection server daemon (10.200.16.10:37218). Sep 3 23:24:31.057157 sshd[2278]: Accepted publickey for core from 10.200.16.10 port 37218 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:24:31.058181 sshd-session[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:31.061925 systemd-logind[1859]: New session 4 of user core. Sep 3 23:24:31.071312 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 3 23:24:31.397041 sshd[2280]: Connection closed by 10.200.16.10 port 37218 Sep 3 23:24:31.397491 sshd-session[2278]: pam_unix(sshd:session): session closed for user core Sep 3 23:24:31.400342 systemd-logind[1859]: Session 4 logged out. Waiting for processes to exit. Sep 3 23:24:31.400669 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:37218.service: Deactivated successfully. Sep 3 23:24:31.401873 systemd[1]: session-4.scope: Deactivated successfully. Sep 3 23:24:31.403692 systemd-logind[1859]: Removed session 4. Sep 3 23:24:31.478370 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:37224.service - OpenSSH per-connection server daemon (10.200.16.10:37224). Sep 3 23:24:31.930838 sshd[2286]: Accepted publickey for core from 10.200.16.10 port 37224 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:24:31.931807 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:31.935221 systemd-logind[1859]: New session 5 of user core. Sep 3 23:24:31.943312 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 3 23:24:32.269492 sshd[2288]: Connection closed by 10.200.16.10 port 37224 Sep 3 23:24:32.269171 sshd-session[2286]: pam_unix(sshd:session): session closed for user core Sep 3 23:24:32.272346 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:37224.service: Deactivated successfully. 
Sep 3 23:24:32.273560 systemd[1]: session-5.scope: Deactivated successfully. Sep 3 23:24:32.274138 systemd-logind[1859]: Session 5 logged out. Waiting for processes to exit. Sep 3 23:24:32.275104 systemd-logind[1859]: Removed session 5. Sep 3 23:24:32.360346 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:37234.service - OpenSSH per-connection server daemon (10.200.16.10:37234). Sep 3 23:24:32.852903 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 37234 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:24:32.853927 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:32.857267 systemd-logind[1859]: New session 6 of user core. Sep 3 23:24:32.864300 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 3 23:24:33.216168 sshd[2296]: Connection closed by 10.200.16.10 port 37234 Sep 3 23:24:33.216632 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Sep 3 23:24:33.219107 systemd-logind[1859]: Session 6 logged out. Waiting for processes to exit. Sep 3 23:24:33.219222 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:37234.service: Deactivated successfully. Sep 3 23:24:33.220368 systemd[1]: session-6.scope: Deactivated successfully. Sep 3 23:24:33.222490 systemd-logind[1859]: Removed session 6. Sep 3 23:24:33.315353 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:37240.service - OpenSSH per-connection server daemon (10.200.16.10:37240). Sep 3 23:24:33.806550 sshd[2302]: Accepted publickey for core from 10.200.16.10 port 37240 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:24:33.807561 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:33.811058 systemd-logind[1859]: New session 7 of user core. Sep 3 23:24:33.820318 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 3 23:24:34.302965 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 3 23:24:34.303181 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:24:34.327139 sudo[2305]: pam_unix(sudo:session): session closed for user root Sep 3 23:24:34.416533 sshd[2304]: Connection closed by 10.200.16.10 port 37240 Sep 3 23:24:34.416989 sshd-session[2302]: pam_unix(sshd:session): session closed for user core Sep 3 23:24:34.420094 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:37240.service: Deactivated successfully. Sep 3 23:24:34.421629 systemd[1]: session-7.scope: Deactivated successfully. Sep 3 23:24:34.422923 systemd-logind[1859]: Session 7 logged out. Waiting for processes to exit. Sep 3 23:24:34.424063 systemd-logind[1859]: Removed session 7. Sep 3 23:24:34.514902 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:37254.service - OpenSSH per-connection server daemon (10.200.16.10:37254). Sep 3 23:24:34.998081 sshd[2311]: Accepted publickey for core from 10.200.16.10 port 37254 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:24:34.999140 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:35.002537 systemd-logind[1859]: New session 8 of user core. Sep 3 23:24:35.009376 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 3 23:24:35.267864 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 3 23:24:35.268559 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:24:35.274763 sudo[2315]: pam_unix(sudo:session): session closed for user root Sep 3 23:24:35.278036 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 3 23:24:35.278251 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:24:35.284210 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 3 23:24:35.312405 augenrules[2337]: No rules Sep 3 23:24:35.313398 systemd[1]: audit-rules.service: Deactivated successfully. Sep 3 23:24:35.313686 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 3 23:24:35.314580 sudo[2314]: pam_unix(sudo:session): session closed for user root Sep 3 23:24:35.315737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 3 23:24:35.317598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:35.388582 sshd[2313]: Connection closed by 10.200.16.10 port 37254 Sep 3 23:24:35.388979 sshd-session[2311]: pam_unix(sshd:session): session closed for user core Sep 3 23:24:35.392792 systemd-logind[1859]: Session 8 logged out. Waiting for processes to exit. Sep 3 23:24:35.393190 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:37254.service: Deactivated successfully. Sep 3 23:24:35.395334 systemd[1]: session-8.scope: Deactivated successfully. Sep 3 23:24:35.397726 systemd-logind[1859]: Removed session 8. Sep 3 23:24:35.399686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:35.405386 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:24:35.427091 kubelet[2353]: E0903 23:24:35.427041 2353 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:24:35.429023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:24:35.429289 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:24:35.429623 systemd[1]: kubelet.service: Consumed 95ms CPU time, 107M memory peak. Sep 3 23:24:35.474660 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:37266.service - OpenSSH per-connection server daemon (10.200.16.10:37266). Sep 3 23:24:35.963737 sshd[2361]: Accepted publickey for core from 10.200.16.10 port 37266 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:24:35.964819 sshd-session[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:35.968503 systemd-logind[1859]: New session 9 of user core. Sep 3 23:24:35.978409 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 3 23:24:36.238363 sudo[2364]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 3 23:24:36.238570 sudo[2364]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:24:37.725982 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 3 23:24:37.735575 (dockerd)[2381]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 3 23:24:38.588226 dockerd[2381]: time="2025-09-03T23:24:38.587904800Z" level=info msg="Starting up" Sep 3 23:24:38.589684 dockerd[2381]: time="2025-09-03T23:24:38.589664172Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 3 23:24:38.628061 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1303713393-merged.mount: Deactivated successfully. Sep 3 23:24:38.644445 systemd[1]: var-lib-docker-metacopy\x2dcheck662131429-merged.mount: Deactivated successfully. Sep 3 23:24:38.670992 dockerd[2381]: time="2025-09-03T23:24:38.670806366Z" level=info msg="Loading containers: start." Sep 3 23:24:38.744219 kernel: Initializing XFRM netlink socket Sep 3 23:24:39.230898 systemd-networkd[1706]: docker0: Link UP Sep 3 23:24:39.245994 dockerd[2381]: time="2025-09-03T23:24:39.245896053Z" level=info msg="Loading containers: done." Sep 3 23:24:39.266726 dockerd[2381]: time="2025-09-03T23:24:39.266686782Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 3 23:24:39.266843 dockerd[2381]: time="2025-09-03T23:24:39.266753448Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 3 23:24:39.266863 dockerd[2381]: time="2025-09-03T23:24:39.266848090Z" level=info msg="Initializing buildkit" Sep 3 23:24:39.313926 dockerd[2381]: time="2025-09-03T23:24:39.313877781Z" level=info msg="Completed buildkit initialization" Sep 3 23:24:39.318728 dockerd[2381]: time="2025-09-03T23:24:39.318690014Z" level=info msg="Daemon has completed initialization" Sep 3 23:24:39.318728 dockerd[2381]: time="2025-09-03T23:24:39.318765992Z" level=info msg="API listen on /run/docker.sock" Sep 3 23:24:39.318909 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 3 23:24:39.626485 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2878573343-merged.mount: Deactivated successfully. Sep 3 23:24:40.002969 containerd[1881]: time="2025-09-03T23:24:40.002687607Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 3 23:24:40.789001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771653510.mount: Deactivated successfully. 
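The PullImage request above is handled by containerd's CRI plugin, which the earlier entries show serving on /run/containerd/containerd.sock. As a sketch, the same pull can be requested manually with crictl, assuming crictl is installed and pointed at that socket (shown wrapped in Python to keep one language across these examples):

    import subprocess

    IMAGE = "registry.k8s.io/kube-apiserver:v1.33.4"
    subprocess.run(
        ["crictl", "--runtime-endpoint", "unix:///run/containerd/containerd.sock",
         "pull", IMAGE],
        check=True,
    )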
Sep 3 23:24:41.749813 containerd[1881]: time="2025-09-03T23:24:41.749262858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:41.751845 containerd[1881]: time="2025-09-03T23:24:41.751820850Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352613" Sep 3 23:24:41.755280 containerd[1881]: time="2025-09-03T23:24:41.755070540Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:41.760882 containerd[1881]: time="2025-09-03T23:24:41.760852957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:41.761624 containerd[1881]: time="2025-09-03T23:24:41.761519125Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.758781997s" Sep 3 23:24:41.761624 containerd[1881]: time="2025-09-03T23:24:41.761547190Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 3 23:24:41.762896 containerd[1881]: time="2025-09-03T23:24:41.762838254Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 3 23:24:42.969242 containerd[1881]: time="2025-09-03T23:24:42.968626967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:42.972546 containerd[1881]: time="2025-09-03T23:24:42.972522056Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536977" Sep 3 23:24:42.975532 containerd[1881]: time="2025-09-03T23:24:42.975514179Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:42.979547 containerd[1881]: time="2025-09-03T23:24:42.979522016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:42.979992 containerd[1881]: time="2025-09-03T23:24:42.979965411Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.217106892s" Sep 3 23:24:42.979992 containerd[1881]: time="2025-09-03T23:24:42.979993604Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 3 23:24:42.980437 containerd[1881]: 
time="2025-09-03T23:24:42.980406750Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 3 23:24:44.031409 containerd[1881]: time="2025-09-03T23:24:44.031359952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:44.034829 containerd[1881]: time="2025-09-03T23:24:44.034801132Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292014" Sep 3 23:24:44.038508 containerd[1881]: time="2025-09-03T23:24:44.038465953Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:44.043745 containerd[1881]: time="2025-09-03T23:24:44.043694861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:44.044780 containerd[1881]: time="2025-09-03T23:24:44.044191371Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.063761573s" Sep 3 23:24:44.044780 containerd[1881]: time="2025-09-03T23:24:44.044230293Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 3 23:24:44.045033 containerd[1881]: time="2025-09-03T23:24:44.045014433Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 3 23:24:44.226420 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 3 23:24:45.052739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1805655234.mount: Deactivated successfully. Sep 3 23:24:45.609521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 3 23:24:45.610719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:45.772066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:45.789513 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:24:45.813484 kubelet[2658]: E0903 23:24:45.813455 2658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:24:45.815416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:24:45.815515 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:24:45.815920 systemd[1]: kubelet.service: Consumed 98ms CPU time, 105M memory peak. 
Sep 3 23:24:46.460680 containerd[1881]: time="2025-09-03T23:24:46.460633981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:46.463769 containerd[1881]: time="2025-09-03T23:24:46.463743930Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199959" Sep 3 23:24:46.466765 containerd[1881]: time="2025-09-03T23:24:46.466741004Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:46.471861 containerd[1881]: time="2025-09-03T23:24:46.471833895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:46.472351 containerd[1881]: time="2025-09-03T23:24:46.472070085Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 2.426901478s" Sep 3 23:24:46.472351 containerd[1881]: time="2025-09-03T23:24:46.472092070Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 3 23:24:46.472520 containerd[1881]: time="2025-09-03T23:24:46.472501489Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 3 23:24:47.238884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1834168184.mount: Deactivated successfully. 
Sep 3 23:24:48.113413 containerd[1881]: time="2025-09-03T23:24:48.113352713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:48.116771 containerd[1881]: time="2025-09-03T23:24:48.116740998Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Sep 3 23:24:48.119888 containerd[1881]: time="2025-09-03T23:24:48.119848883Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:48.123791 containerd[1881]: time="2025-09-03T23:24:48.123756437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:48.124479 containerd[1881]: time="2025-09-03T23:24:48.124373126Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.651848861s" Sep 3 23:24:48.124479 containerd[1881]: time="2025-09-03T23:24:48.124401591Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 3 23:24:48.125007 containerd[1881]: time="2025-09-03T23:24:48.124980839Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 3 23:24:48.673842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3654056471.mount: Deactivated successfully. 
Sep 3 23:24:48.695219 containerd[1881]: time="2025-09-03T23:24:48.695123628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 3 23:24:48.698747 containerd[1881]: time="2025-09-03T23:24:48.698720766Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 3 23:24:48.702667 containerd[1881]: time="2025-09-03T23:24:48.702632537Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 3 23:24:48.707882 containerd[1881]: time="2025-09-03T23:24:48.707344233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 3 23:24:48.707882 containerd[1881]: time="2025-09-03T23:24:48.707779661Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 582.708452ms" Sep 3 23:24:48.707882 containerd[1881]: time="2025-09-03T23:24:48.707801910Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 3 23:24:48.708469 containerd[1881]: time="2025-09-03T23:24:48.708427967Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 3 23:24:48.900429 update_engine[1861]: I20250903 23:24:48.900374 1861 update_attempter.cc:509] Updating boot flags... Sep 3 23:24:49.320866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount151053228.mount: Deactivated successfully. 
Sep 3 23:24:51.520830 containerd[1881]: time="2025-09-03T23:24:51.520779702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:51.524106 containerd[1881]: time="2025-09-03T23:24:51.524081587Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465295" Sep 3 23:24:51.527454 containerd[1881]: time="2025-09-03T23:24:51.527415656Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:51.533138 containerd[1881]: time="2025-09-03T23:24:51.532905522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:51.533511 containerd[1881]: time="2025-09-03T23:24:51.533488075Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.824919353s" Sep 3 23:24:51.533511 containerd[1881]: time="2025-09-03T23:24:51.533512195Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 3 23:24:54.827611 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:54.828093 systemd[1]: kubelet.service: Consumed 98ms CPU time, 105M memory peak. Sep 3 23:24:54.834319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:54.847302 systemd[1]: Reload requested from client PID 2918 ('systemctl') (unit session-9.scope)... Sep 3 23:24:54.847391 systemd[1]: Reloading... Sep 3 23:24:54.933215 zram_generator::config[2965]: No configuration found. Sep 3 23:24:55.000498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:24:55.083542 systemd[1]: Reloading finished in 235 ms. Sep 3 23:24:55.134607 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:55.136415 systemd[1]: kubelet.service: Deactivated successfully. Sep 3 23:24:55.136672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:55.136709 systemd[1]: kubelet.service: Consumed 75ms CPU time, 95M memory peak. Sep 3 23:24:55.139385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:55.421275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:55.424594 (kubelet)[3034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 3 23:24:55.550231 kubelet[3034]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:24:55.550231 kubelet[3034]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 3 23:24:55.550231 kubelet[3034]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:24:55.550231 kubelet[3034]: I0903 23:24:55.549496 3034 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 3 23:24:56.002539 kubelet[3034]: I0903 23:24:56.002510 3034 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 3 23:24:56.002677 kubelet[3034]: I0903 23:24:56.002668 3034 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 3 23:24:56.002900 kubelet[3034]: I0903 23:24:56.002886 3034 server.go:956] "Client rotation is on, will bootstrap in background" Sep 3 23:24:56.023075 kubelet[3034]: E0903 23:24:56.023050 3034 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 3 23:24:56.026041 kubelet[3034]: I0903 23:24:56.026015 3034 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 3 23:24:56.032802 kubelet[3034]: I0903 23:24:56.032784 3034 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 3 23:24:56.035041 kubelet[3034]: I0903 23:24:56.035023 3034 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 3 23:24:56.035200 kubelet[3034]: I0903 23:24:56.035177 3034 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 3 23:24:56.035315 kubelet[3034]: I0903 23:24:56.035210 3034 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-46801d0988","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 3 23:24:56.035393 kubelet[3034]: I0903 23:24:56.035319 3034 topology_manager.go:138] "Creating topology manager with none policy" Sep 3 23:24:56.035393 kubelet[3034]: I0903 23:24:56.035327 3034 container_manager_linux.go:303] "Creating device plugin manager" Sep 3 23:24:56.035439 kubelet[3034]: I0903 23:24:56.035427 3034 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:24:56.037816 kubelet[3034]: I0903 23:24:56.037802 3034 kubelet.go:480] "Attempting to sync node with API server" Sep 3 23:24:56.037843 kubelet[3034]: I0903 23:24:56.037818 3034 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 3 23:24:56.037843 kubelet[3034]: I0903 23:24:56.037840 3034 kubelet.go:386] "Adding apiserver pod source" Sep 3 23:24:56.038945 kubelet[3034]: I0903 23:24:56.038802 3034 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 3 23:24:56.042443 kubelet[3034]: I0903 23:24:56.042426 3034 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 3 23:24:56.042852 kubelet[3034]: I0903 23:24:56.042837 3034 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 3 23:24:56.042980 kubelet[3034]: W0903 23:24:56.042969 3034 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 3 23:24:56.044694 kubelet[3034]: I0903 23:24:56.044681 3034 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 3 23:24:56.044787 kubelet[3034]: I0903 23:24:56.044779 3034 server.go:1289] "Started kubelet" Sep 3 23:24:56.044964 kubelet[3034]: E0903 23:24:56.044946 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-46801d0988&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 3 23:24:56.046300 kubelet[3034]: E0903 23:24:56.046277 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 3 23:24:56.046375 kubelet[3034]: I0903 23:24:56.046351 3034 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 3 23:24:56.047077 kubelet[3034]: I0903 23:24:56.046949 3034 server.go:317] "Adding debug handlers to kubelet server" Sep 3 23:24:56.049074 kubelet[3034]: I0903 23:24:56.049022 3034 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 3 23:24:56.049510 kubelet[3034]: I0903 23:24:56.049480 3034 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 3 23:24:56.050879 kubelet[3034]: E0903 23:24:56.049720 3034 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-n-46801d0988.1861e95884bfbea5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-n-46801d0988,UID:ci-4372.1.0-n-46801d0988,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-n-46801d0988,},FirstTimestamp:2025-09-03 23:24:56.044756645 +0000 UTC m=+0.617513773,LastTimestamp:2025-09-03 23:24:56.044756645 +0000 UTC m=+0.617513773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-n-46801d0988,}" Sep 3 23:24:56.053720 kubelet[3034]: I0903 23:24:56.053702 3034 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 3 23:24:56.055371 kubelet[3034]: E0903 23:24:56.055351 3034 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 3 23:24:56.056816 kubelet[3034]: I0903 23:24:56.056805 3034 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 3 23:24:56.056940 kubelet[3034]: I0903 23:24:56.056929 3034 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 3 23:24:56.058518 kubelet[3034]: I0903 23:24:56.058503 3034 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 3 23:24:56.058625 kubelet[3034]: I0903 23:24:56.058615 3034 reconciler.go:26] "Reconciler: start to sync state" Sep 3 23:24:56.059126 kubelet[3034]: E0903 23:24:56.059106 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 3 23:24:56.060374 kubelet[3034]: E0903 23:24:56.060355 3034 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-46801d0988\" not found" Sep 3 23:24:56.060646 kubelet[3034]: E0903 23:24:56.060622 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-46801d0988?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms" Sep 3 23:24:56.061268 kubelet[3034]: I0903 23:24:56.061102 3034 factory.go:223] Registration of the systemd container factory successfully Sep 3 23:24:56.061268 kubelet[3034]: I0903 23:24:56.061162 3034 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 3 23:24:56.062246 kubelet[3034]: I0903 23:24:56.062191 3034 factory.go:223] Registration of the containerd container factory successfully Sep 3 23:24:56.081026 kubelet[3034]: I0903 23:24:56.081013 3034 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 3 23:24:56.081164 kubelet[3034]: I0903 23:24:56.081154 3034 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 3 23:24:56.081237 kubelet[3034]: I0903 23:24:56.081230 3034 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:24:56.097572 kubelet[3034]: I0903 23:24:56.097557 3034 policy_none.go:49] "None policy: Start" Sep 3 23:24:56.097653 kubelet[3034]: I0903 23:24:56.097644 3034 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 3 23:24:56.097702 kubelet[3034]: I0903 23:24:56.097695 3034 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:24:56.160522 kubelet[3034]: E0903 23:24:56.160490 3034 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-46801d0988\" not found" Sep 3 23:24:56.192095 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 3 23:24:56.202507 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 3 23:24:56.206191 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 3 23:24:56.215011 kubelet[3034]: E0903 23:24:56.214771 3034 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 3 23:24:56.215127 kubelet[3034]: I0903 23:24:56.215106 3034 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:24:56.215228 kubelet[3034]: I0903 23:24:56.215123 3034 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:24:56.215353 kubelet[3034]: I0903 23:24:56.215336 3034 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:24:56.216707 kubelet[3034]: E0903 23:24:56.216658 3034 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 3 23:24:56.216819 kubelet[3034]: E0903 23:24:56.216808 3034 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.1.0-n-46801d0988\" not found" Sep 3 23:24:56.261652 kubelet[3034]: E0903 23:24:56.261502 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-46801d0988?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms" Sep 3 23:24:56.264475 kubelet[3034]: I0903 23:24:56.264442 3034 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 3 23:24:56.265812 kubelet[3034]: I0903 23:24:56.265798 3034 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 3 23:24:56.265812 kubelet[3034]: I0903 23:24:56.265830 3034 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 3 23:24:56.265812 kubelet[3034]: I0903 23:24:56.265853 3034 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 3 23:24:56.265812 kubelet[3034]: I0903 23:24:56.265858 3034 kubelet.go:2436] "Starting kubelet main sync loop" Sep 3 23:24:56.266573 kubelet[3034]: E0903 23:24:56.266501 3034 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 3 23:24:56.268138 kubelet[3034]: E0903 23:24:56.268110 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 3 23:24:56.317296 kubelet[3034]: I0903 23:24:56.317263 3034 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.317662 kubelet[3034]: E0903 23:24:56.317636 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.461519 kubelet[3034]: I0903 23:24:56.461450 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87560c0fba0a496c0dfa472d0ea03dc2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-46801d0988\" (UID: \"87560c0fba0a496c0dfa472d0ea03dc2\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.461519 kubelet[3034]: I0903 23:24:56.461482 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87560c0fba0a496c0dfa472d0ea03dc2-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-46801d0988\" (UID: \"87560c0fba0a496c0dfa472d0ea03dc2\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.461519 kubelet[3034]: I0903 23:24:56.461497 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87560c0fba0a496c0dfa472d0ea03dc2-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-46801d0988\" (UID: \"87560c0fba0a496c0dfa472d0ea03dc2\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.645596 kubelet[3034]: I0903 23:24:56.519884 3034 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.645596 kubelet[3034]: E0903 23:24:56.520173 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.662427 kubelet[3034]: E0903 23:24:56.662390 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-46801d0988?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms" Sep 3 23:24:56.743415 systemd[1]: Created slice kubepods-burstable-pod87560c0fba0a496c0dfa472d0ea03dc2.slice - libcontainer container kubepods-burstable-pod87560c0fba0a496c0dfa472d0ea03dc2.slice. 
Sep 3 23:24:56.750720 kubelet[3034]: E0903 23:24:56.750697 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-46801d0988\" not found" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.751437 containerd[1881]: time="2025-09-03T23:24:56.751402767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-46801d0988,Uid:87560c0fba0a496c0dfa472d0ea03dc2,Namespace:kube-system,Attempt:0,}" Sep 3 23:24:56.764090 kubelet[3034]: I0903 23:24:56.764064 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/635680a7de52894aaa83163d125956ca-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-46801d0988\" (UID: \"635680a7de52894aaa83163d125956ca\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.764212 kubelet[3034]: I0903 23:24:56.764160 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/635680a7de52894aaa83163d125956ca-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-46801d0988\" (UID: \"635680a7de52894aaa83163d125956ca\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.764212 kubelet[3034]: I0903 23:24:56.764181 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/635680a7de52894aaa83163d125956ca-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-46801d0988\" (UID: \"635680a7de52894aaa83163d125956ca\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.764349 kubelet[3034]: I0903 23:24:56.764194 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/635680a7de52894aaa83163d125956ca-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-46801d0988\" (UID: \"635680a7de52894aaa83163d125956ca\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.764349 kubelet[3034]: I0903 23:24:56.764317 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/635680a7de52894aaa83163d125956ca-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-46801d0988\" (UID: \"635680a7de52894aaa83163d125956ca\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.922289 kubelet[3034]: I0903 23:24:56.922109 3034 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:56.922661 kubelet[3034]: E0903 23:24:56.922634 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:57.033667 kubelet[3034]: E0903 23:24:57.033623 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-46801d0988&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 3 23:24:57.212561 kubelet[3034]: 
E0903 23:24:57.212443 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 3 23:24:57.289392 kubelet[3034]: E0903 23:24:57.289354 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 3 23:24:57.325841 kubelet[3034]: E0903 23:24:57.325810 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 3 23:24:57.548732 kubelet[3034]: E0903 23:24:57.462981 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-46801d0988?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="1.6s" Sep 3 23:24:57.558191 systemd[1]: Created slice kubepods-burstable-pod635680a7de52894aaa83163d125956ca.slice - libcontainer container kubepods-burstable-pod635680a7de52894aaa83163d125956ca.slice. Sep 3 23:24:57.562294 kubelet[3034]: E0903 23:24:57.562269 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-46801d0988\" not found" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:57.562969 containerd[1881]: time="2025-09-03T23:24:57.562934026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-46801d0988,Uid:635680a7de52894aaa83163d125956ca,Namespace:kube-system,Attempt:0,}" Sep 3 23:24:57.567411 kubelet[3034]: I0903 23:24:57.567387 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee8b3474b8468ddeec0f81c5c018a2b8-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-46801d0988\" (UID: \"ee8b3474b8468ddeec0f81c5c018a2b8\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-46801d0988" Sep 3 23:24:57.571331 systemd[1]: Created slice kubepods-burstable-podee8b3474b8468ddeec0f81c5c018a2b8.slice - libcontainer container kubepods-burstable-podee8b3474b8468ddeec0f81c5c018a2b8.slice. 
Sep 3 23:24:57.572772 kubelet[3034]: E0903 23:24:57.572754 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-46801d0988\" not found" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:57.638398 containerd[1881]: time="2025-09-03T23:24:57.638366904Z" level=info msg="connecting to shim 80db2e4acb63b88f6cb85ad8d682c5f51fa190bad8d04fb487f57b8ceb203d7c" address="unix:///run/containerd/s/8237be8d236ba177d76466747248cf4584e55b2d340047844e345155cc0756d2" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:24:57.638528 containerd[1881]: time="2025-09-03T23:24:57.638503620Z" level=info msg="connecting to shim 64dedfddda57df9670e11af97d9c23b6914492ed9ad8492d53cd7c97e90dc653" address="unix:///run/containerd/s/5c5692dc4908e4774cb06c92b7c3889c58419ee18bbc7ac03ffe65b4fdb7bd5d" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:24:57.658332 systemd[1]: Started cri-containerd-64dedfddda57df9670e11af97d9c23b6914492ed9ad8492d53cd7c97e90dc653.scope - libcontainer container 64dedfddda57df9670e11af97d9c23b6914492ed9ad8492d53cd7c97e90dc653. Sep 3 23:24:57.660897 systemd[1]: Started cri-containerd-80db2e4acb63b88f6cb85ad8d682c5f51fa190bad8d04fb487f57b8ceb203d7c.scope - libcontainer container 80db2e4acb63b88f6cb85ad8d682c5f51fa190bad8d04fb487f57b8ceb203d7c. Sep 3 23:24:57.693121 containerd[1881]: time="2025-09-03T23:24:57.693089553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-46801d0988,Uid:635680a7de52894aaa83163d125956ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"64dedfddda57df9670e11af97d9c23b6914492ed9ad8492d53cd7c97e90dc653\"" Sep 3 23:24:57.700650 containerd[1881]: time="2025-09-03T23:24:57.700617292Z" level=info msg="CreateContainer within sandbox \"64dedfddda57df9670e11af97d9c23b6914492ed9ad8492d53cd7c97e90dc653\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 3 23:24:57.701442 containerd[1881]: time="2025-09-03T23:24:57.701415411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-46801d0988,Uid:87560c0fba0a496c0dfa472d0ea03dc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"80db2e4acb63b88f6cb85ad8d682c5f51fa190bad8d04fb487f57b8ceb203d7c\"" Sep 3 23:24:57.707834 containerd[1881]: time="2025-09-03T23:24:57.707810094Z" level=info msg="CreateContainer within sandbox \"80db2e4acb63b88f6cb85ad8d682c5f51fa190bad8d04fb487f57b8ceb203d7c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 3 23:24:57.724632 kubelet[3034]: I0903 23:24:57.724607 3034 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:57.724891 kubelet[3034]: E0903 23:24:57.724865 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:57.731471 containerd[1881]: time="2025-09-03T23:24:57.731443854Z" level=info msg="Container 3ea66cff4a714931951a2e263888de271ee725bb01eb4ad33afdc1fa8e243d9a: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:24:57.735893 containerd[1881]: time="2025-09-03T23:24:57.735486911Z" level=info msg="Container 8076a7825761ebab8e956b6f79732c9694d4deae4091000a543b690267af8501: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:24:57.748551 containerd[1881]: time="2025-09-03T23:24:57.748522341Z" level=info msg="CreateContainer within sandbox 
\"64dedfddda57df9670e11af97d9c23b6914492ed9ad8492d53cd7c97e90dc653\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ea66cff4a714931951a2e263888de271ee725bb01eb4ad33afdc1fa8e243d9a\"" Sep 3 23:24:57.748995 containerd[1881]: time="2025-09-03T23:24:57.748972098Z" level=info msg="StartContainer for \"3ea66cff4a714931951a2e263888de271ee725bb01eb4ad33afdc1fa8e243d9a\"" Sep 3 23:24:57.750137 containerd[1881]: time="2025-09-03T23:24:57.750115778Z" level=info msg="connecting to shim 3ea66cff4a714931951a2e263888de271ee725bb01eb4ad33afdc1fa8e243d9a" address="unix:///run/containerd/s/5c5692dc4908e4774cb06c92b7c3889c58419ee18bbc7ac03ffe65b4fdb7bd5d" protocol=ttrpc version=3 Sep 3 23:24:57.764305 systemd[1]: Started cri-containerd-3ea66cff4a714931951a2e263888de271ee725bb01eb4ad33afdc1fa8e243d9a.scope - libcontainer container 3ea66cff4a714931951a2e263888de271ee725bb01eb4ad33afdc1fa8e243d9a. Sep 3 23:24:57.765360 containerd[1881]: time="2025-09-03T23:24:57.765332805Z" level=info msg="CreateContainer within sandbox \"80db2e4acb63b88f6cb85ad8d682c5f51fa190bad8d04fb487f57b8ceb203d7c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8076a7825761ebab8e956b6f79732c9694d4deae4091000a543b690267af8501\"" Sep 3 23:24:57.765900 containerd[1881]: time="2025-09-03T23:24:57.765760425Z" level=info msg="StartContainer for \"8076a7825761ebab8e956b6f79732c9694d4deae4091000a543b690267af8501\"" Sep 3 23:24:57.767050 containerd[1881]: time="2025-09-03T23:24:57.767025773Z" level=info msg="connecting to shim 8076a7825761ebab8e956b6f79732c9694d4deae4091000a543b690267af8501" address="unix:///run/containerd/s/8237be8d236ba177d76466747248cf4584e55b2d340047844e345155cc0756d2" protocol=ttrpc version=3 Sep 3 23:24:57.787326 systemd[1]: Started cri-containerd-8076a7825761ebab8e956b6f79732c9694d4deae4091000a543b690267af8501.scope - libcontainer container 8076a7825761ebab8e956b6f79732c9694d4deae4091000a543b690267af8501. Sep 3 23:24:57.804310 containerd[1881]: time="2025-09-03T23:24:57.803851983Z" level=info msg="StartContainer for \"3ea66cff4a714931951a2e263888de271ee725bb01eb4ad33afdc1fa8e243d9a\" returns successfully" Sep 3 23:24:57.828660 containerd[1881]: time="2025-09-03T23:24:57.828622455Z" level=info msg="StartContainer for \"8076a7825761ebab8e956b6f79732c9694d4deae4091000a543b690267af8501\" returns successfully" Sep 3 23:24:57.873894 containerd[1881]: time="2025-09-03T23:24:57.873864285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-46801d0988,Uid:ee8b3474b8468ddeec0f81c5c018a2b8,Namespace:kube-system,Attempt:0,}" Sep 3 23:24:57.922607 containerd[1881]: time="2025-09-03T23:24:57.922576197Z" level=info msg="connecting to shim 9910e1ecd2b54fe740335d68ba308bdab414818fdec5c9522e7d134785ec58a6" address="unix:///run/containerd/s/30fed8db95224f8ecf719f6613c9d83b512d47e254ae00b0d0c542598eec3083" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:24:57.948308 systemd[1]: Started cri-containerd-9910e1ecd2b54fe740335d68ba308bdab414818fdec5c9522e7d134785ec58a6.scope - libcontainer container 9910e1ecd2b54fe740335d68ba308bdab414818fdec5c9522e7d134785ec58a6. 
Sep 3 23:24:58.006592 containerd[1881]: time="2025-09-03T23:24:58.006560571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-46801d0988,Uid:ee8b3474b8468ddeec0f81c5c018a2b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9910e1ecd2b54fe740335d68ba308bdab414818fdec5c9522e7d134785ec58a6\"" Sep 3 23:24:58.015530 containerd[1881]: time="2025-09-03T23:24:58.015505622Z" level=info msg="CreateContainer within sandbox \"9910e1ecd2b54fe740335d68ba308bdab414818fdec5c9522e7d134785ec58a6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 3 23:24:58.034738 containerd[1881]: time="2025-09-03T23:24:58.034715858Z" level=info msg="Container 9b1074d6fb4c82a6a627f716e92048d3273eac2e32b787bd96b917b7d27fcfcf: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:24:58.054957 containerd[1881]: time="2025-09-03T23:24:58.054881920Z" level=info msg="CreateContainer within sandbox \"9910e1ecd2b54fe740335d68ba308bdab414818fdec5c9522e7d134785ec58a6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9b1074d6fb4c82a6a627f716e92048d3273eac2e32b787bd96b917b7d27fcfcf\"" Sep 3 23:24:58.055512 containerd[1881]: time="2025-09-03T23:24:58.055431199Z" level=info msg="StartContainer for \"9b1074d6fb4c82a6a627f716e92048d3273eac2e32b787bd96b917b7d27fcfcf\"" Sep 3 23:24:58.056175 containerd[1881]: time="2025-09-03T23:24:58.056156268Z" level=info msg="connecting to shim 9b1074d6fb4c82a6a627f716e92048d3273eac2e32b787bd96b917b7d27fcfcf" address="unix:///run/containerd/s/30fed8db95224f8ecf719f6613c9d83b512d47e254ae00b0d0c542598eec3083" protocol=ttrpc version=3 Sep 3 23:24:58.075306 systemd[1]: Started cri-containerd-9b1074d6fb4c82a6a627f716e92048d3273eac2e32b787bd96b917b7d27fcfcf.scope - libcontainer container 9b1074d6fb4c82a6a627f716e92048d3273eac2e32b787bd96b917b7d27fcfcf. Sep 3 23:24:58.143061 containerd[1881]: time="2025-09-03T23:24:58.143035811Z" level=info msg="StartContainer for \"9b1074d6fb4c82a6a627f716e92048d3273eac2e32b787bd96b917b7d27fcfcf\" returns successfully" Sep 3 23:24:58.276688 kubelet[3034]: E0903 23:24:58.276658 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-46801d0988\" not found" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:58.280592 kubelet[3034]: E0903 23:24:58.280571 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-46801d0988\" not found" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:58.284722 kubelet[3034]: E0903 23:24:58.284643 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-46801d0988\" not found" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:58.629335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027233518.mount: Deactivated successfully. 
Sep 3 23:24:59.108362 kubelet[3034]: E0903 23:24:59.108239 3034 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.1.0-n-46801d0988\" not found" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:59.286276 kubelet[3034]: E0903 23:24:59.286246 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-46801d0988\" not found" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:59.286478 kubelet[3034]: E0903 23:24:59.286449 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-46801d0988\" not found" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:59.328497 kubelet[3034]: I0903 23:24:59.327413 3034 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:59.346944 kubelet[3034]: I0903 23:24:59.346854 3034 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-46801d0988" Sep 3 23:24:59.347173 kubelet[3034]: E0903 23:24:59.347048 3034 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.1.0-n-46801d0988\": node \"ci-4372.1.0-n-46801d0988\" not found" Sep 3 23:24:59.460707 kubelet[3034]: I0903 23:24:59.460684 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-46801d0988" Sep 3 23:24:59.498610 kubelet[3034]: E0903 23:24:59.498584 3034 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-46801d0988\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.1.0-n-46801d0988" Sep 3 23:24:59.498610 kubelet[3034]: I0903 23:24:59.498608 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:24:59.501391 kubelet[3034]: E0903 23:24:59.501359 3034 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-46801d0988\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:24:59.501391 kubelet[3034]: I0903 23:24:59.501379 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:24:59.502692 kubelet[3034]: E0903 23:24:59.502670 3034 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.1.0-n-46801d0988\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:25:00.047289 kubelet[3034]: I0903 23:25:00.047237 3034 apiserver.go:52] "Watching apiserver" Sep 3 23:25:00.059543 kubelet[3034]: I0903 23:25:00.059509 3034 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 3 23:25:00.286755 kubelet[3034]: I0903 23:25:00.286638 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-46801d0988" Sep 3 23:25:00.295352 kubelet[3034]: I0903 23:25:00.295305 3034 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 3 23:25:01.468935 systemd[1]: Reload requested from client PID 3308 ('systemctl') (unit session-9.scope)... Sep 3 23:25:01.468949 systemd[1]: Reloading... 
Sep 3 23:25:01.523928 kubelet[3034]: I0903 23:25:01.523904 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:25:01.534624 kubelet[3034]: I0903 23:25:01.534603 3034 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 3 23:25:01.558222 zram_generator::config[3357]: No configuration found. Sep 3 23:25:01.620989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:25:01.711217 systemd[1]: Reloading finished in 242 ms. Sep 3 23:25:01.734134 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:25:01.744977 systemd[1]: kubelet.service: Deactivated successfully. Sep 3 23:25:01.745153 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:25:01.745218 systemd[1]: kubelet.service: Consumed 745ms CPU time, 127.4M memory peak. Sep 3 23:25:01.746435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:25:01.846274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:25:01.848667 (kubelet)[3418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 3 23:25:01.875939 kubelet[3418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:25:01.875939 kubelet[3418]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 3 23:25:01.875939 kubelet[3418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:25:01.876170 kubelet[3418]: I0903 23:25:01.876092 3418 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 3 23:25:01.881633 kubelet[3418]: I0903 23:25:01.881610 3418 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 3 23:25:01.881633 kubelet[3418]: I0903 23:25:01.881629 3418 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 3 23:25:01.881764 kubelet[3418]: I0903 23:25:01.881749 3418 server.go:956] "Client rotation is on, will bootstrap in background" Sep 3 23:25:01.882643 kubelet[3418]: I0903 23:25:01.882627 3418 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 3 23:25:01.884041 kubelet[3418]: I0903 23:25:01.884018 3418 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 3 23:25:01.887224 kubelet[3418]: I0903 23:25:01.887128 3418 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 3 23:25:01.889346 kubelet[3418]: I0903 23:25:01.889329 3418 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 3 23:25:01.889480 kubelet[3418]: I0903 23:25:01.889458 3418 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 3 23:25:01.889581 kubelet[3418]: I0903 23:25:01.889478 3418 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-46801d0988","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 3 23:25:01.889650 kubelet[3418]: I0903 23:25:01.889585 3418 topology_manager.go:138] "Creating topology manager with none policy" Sep 3 23:25:01.889650 kubelet[3418]: I0903 23:25:01.889591 3418 container_manager_linux.go:303] "Creating device plugin manager" Sep 3 23:25:01.889650 kubelet[3418]: I0903 23:25:01.889622 3418 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:25:01.889832 kubelet[3418]: I0903 23:25:01.889710 3418 kubelet.go:480] "Attempting to sync node with API server" Sep 3 23:25:01.889832 kubelet[3418]: I0903 23:25:01.889720 3418 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 3 23:25:01.889832 kubelet[3418]: I0903 23:25:01.889741 3418 kubelet.go:386] "Adding apiserver pod source" Sep 3 23:25:01.889832 kubelet[3418]: I0903 23:25:01.889750 3418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 3 23:25:01.891159 kubelet[3418]: I0903 23:25:01.891143 3418 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 3 23:25:01.891645 kubelet[3418]: I0903 23:25:01.891632 3418 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 3 23:25:01.893576 kubelet[3418]: I0903 23:25:01.893564 3418 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 3 23:25:01.893756 kubelet[3418]: I0903 23:25:01.893693 3418 server.go:1289] "Started kubelet" Sep 3 23:25:01.896652 kubelet[3418]: I0903 23:25:01.896639 3418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 3 23:25:01.898027 kubelet[3418]: I0903 23:25:01.897623 
3418 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 3 23:25:01.898752 kubelet[3418]: I0903 23:25:01.898728 3418 server.go:317] "Adding debug handlers to kubelet server" Sep 3 23:25:01.902324 kubelet[3418]: I0903 23:25:01.902291 3418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 3 23:25:01.902579 kubelet[3418]: I0903 23:25:01.902565 3418 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 3 23:25:01.903370 kubelet[3418]: E0903 23:25:01.903347 3418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-46801d0988\" not found" Sep 3 23:25:01.903681 kubelet[3418]: I0903 23:25:01.903459 3418 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 3 23:25:01.905521 kubelet[3418]: I0903 23:25:01.905248 3418 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 3 23:25:01.905624 kubelet[3418]: I0903 23:25:01.905610 3418 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 3 23:25:01.905722 kubelet[3418]: I0903 23:25:01.905709 3418 reconciler.go:26] "Reconciler: start to sync state" Sep 3 23:25:01.913325 kubelet[3418]: I0903 23:25:01.913308 3418 factory.go:223] Registration of the systemd container factory successfully Sep 3 23:25:01.913585 kubelet[3418]: I0903 23:25:01.913567 3418 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 3 23:25:01.917628 kubelet[3418]: I0903 23:25:01.917590 3418 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 3 23:25:01.918546 kubelet[3418]: I0903 23:25:01.918525 3418 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 3 23:25:01.918546 kubelet[3418]: I0903 23:25:01.918541 3418 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 3 23:25:01.918626 kubelet[3418]: I0903 23:25:01.918557 3418 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 3 23:25:01.918626 kubelet[3418]: I0903 23:25:01.918562 3418 kubelet.go:2436] "Starting kubelet main sync loop" Sep 3 23:25:01.918626 kubelet[3418]: E0903 23:25:01.918591 3418 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 3 23:25:01.919476 kubelet[3418]: I0903 23:25:01.919295 3418 factory.go:223] Registration of the containerd container factory successfully Sep 3 23:25:01.961550 kubelet[3418]: I0903 23:25:01.961535 3418 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 3 23:25:01.962645 kubelet[3418]: I0903 23:25:01.962625 3418 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 3 23:25:01.962809 kubelet[3418]: I0903 23:25:01.962798 3418 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:25:01.962970 kubelet[3418]: I0903 23:25:01.962959 3418 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 3 23:25:01.963399 kubelet[3418]: I0903 23:25:01.963219 3418 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 3 23:25:01.963641 kubelet[3418]: I0903 23:25:01.963620 3418 policy_none.go:49] "None policy: Start" Sep 3 23:25:01.963876 kubelet[3418]: I0903 23:25:01.963850 3418 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 3 23:25:01.964434 kubelet[3418]: I0903 23:25:01.964006 3418 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:25:01.964434 kubelet[3418]: I0903 23:25:01.964102 3418 state_mem.go:75] "Updated machine memory state" Sep 3 23:25:01.969649 kubelet[3418]: E0903 23:25:01.969450 3418 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 3 23:25:01.970666 kubelet[3418]: I0903 23:25:01.970651 3418 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:25:01.970738 kubelet[3418]: I0903 23:25:01.970664 3418 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:25:01.970847 kubelet[3418]: I0903 23:25:01.970832 3418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:25:01.972124 kubelet[3418]: E0903 23:25:01.972048 3418 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 3 23:25:02.020219 kubelet[3418]: I0903 23:25:02.019472 3418 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.020366 kubelet[3418]: I0903 23:25:02.019536 3418 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.020467 kubelet[3418]: I0903 23:25:02.019578 3418 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.034132 kubelet[3418]: I0903 23:25:02.034111 3418 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 3 23:25:02.034365 kubelet[3418]: E0903 23:25:02.034152 3418 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-46801d0988\" already exists" pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.035001 kubelet[3418]: I0903 23:25:02.034938 3418 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 3 23:25:02.035470 kubelet[3418]: I0903 23:25:02.035449 3418 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 3 23:25:02.035577 kubelet[3418]: E0903 23:25:02.035566 3418 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-46801d0988\" already exists" pod="kube-system/kube-scheduler-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.073217 kubelet[3418]: I0903 23:25:02.073189 3418 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.086108 kubelet[3418]: I0903 23:25:02.086062 3418 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.086174 kubelet[3418]: I0903 23:25:02.086131 3418 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.207344 kubelet[3418]: I0903 23:25:02.207311 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87560c0fba0a496c0dfa472d0ea03dc2-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-46801d0988\" (UID: \"87560c0fba0a496c0dfa472d0ea03dc2\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.207534 kubelet[3418]: I0903 23:25:02.207404 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/635680a7de52894aaa83163d125956ca-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-46801d0988\" (UID: \"635680a7de52894aaa83163d125956ca\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.207534 kubelet[3418]: I0903 23:25:02.207425 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/635680a7de52894aaa83163d125956ca-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-46801d0988\" (UID: \"635680a7de52894aaa83163d125956ca\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.208409 kubelet[3418]: 
I0903 23:25:02.208273 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/635680a7de52894aaa83163d125956ca-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-46801d0988\" (UID: \"635680a7de52894aaa83163d125956ca\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.208409 kubelet[3418]: I0903 23:25:02.208307 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/635680a7de52894aaa83163d125956ca-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-46801d0988\" (UID: \"635680a7de52894aaa83163d125956ca\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.208409 kubelet[3418]: I0903 23:25:02.208374 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87560c0fba0a496c0dfa472d0ea03dc2-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-46801d0988\" (UID: \"87560c0fba0a496c0dfa472d0ea03dc2\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.208409 kubelet[3418]: I0903 23:25:02.208386 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87560c0fba0a496c0dfa472d0ea03dc2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-46801d0988\" (UID: \"87560c0fba0a496c0dfa472d0ea03dc2\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.208642 kubelet[3418]: I0903 23:25:02.208581 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/635680a7de52894aaa83163d125956ca-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-46801d0988\" (UID: \"635680a7de52894aaa83163d125956ca\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.208642 kubelet[3418]: I0903 23:25:02.208601 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee8b3474b8468ddeec0f81c5c018a2b8-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-46801d0988\" (UID: \"ee8b3474b8468ddeec0f81c5c018a2b8\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.876173 sudo[3454]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 3 23:25:02.876410 sudo[3454]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 3 23:25:02.899054 kubelet[3418]: I0903 23:25:02.898250 3418 apiserver.go:52] "Watching apiserver" Sep 3 23:25:02.906032 kubelet[3418]: I0903 23:25:02.905969 3418 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 3 23:25:02.949007 kubelet[3418]: I0903 23:25:02.948404 3418 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.959488 kubelet[3418]: I0903 23:25:02.959443 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" podStartSLOduration=1.959433081 podStartE2EDuration="1.959433081s" podCreationTimestamp="2025-09-03 23:25:01 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:25:02.959300869 +0000 UTC m=+1.107794781" watchObservedRunningTime="2025-09-03 23:25:02.959433081 +0000 UTC m=+1.107927001" Sep 3 23:25:02.963769 kubelet[3418]: I0903 23:25:02.963742 3418 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 3 23:25:02.963962 kubelet[3418]: E0903 23:25:02.963785 3418 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-46801d0988\" already exists" pod="kube-system/kube-apiserver-ci-4372.1.0-n-46801d0988" Sep 3 23:25:02.975880 kubelet[3418]: I0903 23:25:02.975839 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-46801d0988" podStartSLOduration=0.975829263 podStartE2EDuration="975.829263ms" podCreationTimestamp="2025-09-03 23:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:25:02.975610488 +0000 UTC m=+1.124104400" watchObservedRunningTime="2025-09-03 23:25:02.975829263 +0000 UTC m=+1.124323175" Sep 3 23:25:02.989892 kubelet[3418]: I0903 23:25:02.989853 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.1.0-n-46801d0988" podStartSLOduration=2.989842548 podStartE2EDuration="2.989842548s" podCreationTimestamp="2025-09-03 23:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:25:02.989276291 +0000 UTC m=+1.137770227" watchObservedRunningTime="2025-09-03 23:25:02.989842548 +0000 UTC m=+1.138336460" Sep 3 23:25:03.246731 sudo[3454]: pam_unix(sudo:session): session closed for user root Sep 3 23:25:04.579005 sudo[2364]: pam_unix(sudo:session): session closed for user root Sep 3 23:25:04.667237 sshd[2363]: Connection closed by 10.200.16.10 port 37266 Sep 3 23:25:04.667675 sshd-session[2361]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:04.671617 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:37266.service: Deactivated successfully. Sep 3 23:25:04.674524 systemd[1]: session-9.scope: Deactivated successfully. Sep 3 23:25:04.674708 systemd[1]: session-9.scope: Consumed 4.335s CPU time, 273.1M memory peak. Sep 3 23:25:04.675936 systemd-logind[1859]: Session 9 logged out. Waiting for processes to exit. Sep 3 23:25:04.677288 systemd-logind[1859]: Removed session 9. Sep 3 23:25:07.019195 kubelet[3418]: I0903 23:25:07.019157 3418 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 3 23:25:07.019661 containerd[1881]: time="2025-09-03T23:25:07.019490800Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 3 23:25:07.020090 kubelet[3418]: I0903 23:25:07.019904 3418 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 3 23:25:07.956453 systemd[1]: Created slice kubepods-besteffort-pod54d0e5e3_faaa_4955_a270_ede5c51fe162.slice - libcontainer container kubepods-besteffort-pod54d0e5e3_faaa_4955_a270_ede5c51fe162.slice. 
Sep 3 23:25:07.967064 systemd[1]: Created slice kubepods-burstable-pod2a6b4344_f248_4793_80ef_485c882efec3.slice - libcontainer container kubepods-burstable-pod2a6b4344_f248_4793_80ef_485c882efec3.slice. Sep 3 23:25:08.048491 kubelet[3418]: I0903 23:25:08.048295 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9fc\" (UniqueName: \"kubernetes.io/projected/54d0e5e3-faaa-4955-a270-ede5c51fe162-kube-api-access-6f9fc\") pod \"kube-proxy-znfds\" (UID: \"54d0e5e3-faaa-4955-a270-ede5c51fe162\") " pod="kube-system/kube-proxy-znfds" Sep 3 23:25:08.048491 kubelet[3418]: I0903 23:25:08.048326 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-cilium-run\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.048491 kubelet[3418]: I0903 23:25:08.048352 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-bpf-maps\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.048491 kubelet[3418]: I0903 23:25:08.048382 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-cni-path\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.048491 kubelet[3418]: I0903 23:25:08.048392 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-xtables-lock\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.048491 kubelet[3418]: I0903 23:25:08.048402 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a6b4344-f248-4793-80ef-485c882efec3-cilium-config-path\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.049195 kubelet[3418]: I0903 23:25:08.048412 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54d0e5e3-faaa-4955-a270-ede5c51fe162-kube-proxy\") pod \"kube-proxy-znfds\" (UID: \"54d0e5e3-faaa-4955-a270-ede5c51fe162\") " pod="kube-system/kube-proxy-znfds" Sep 3 23:25:08.049195 kubelet[3418]: I0903 23:25:08.048420 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54d0e5e3-faaa-4955-a270-ede5c51fe162-lib-modules\") pod \"kube-proxy-znfds\" (UID: \"54d0e5e3-faaa-4955-a270-ede5c51fe162\") " pod="kube-system/kube-proxy-znfds" Sep 3 23:25:08.049195 kubelet[3418]: I0903 23:25:08.048428 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-etc-cni-netd\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " 
pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.049195 kubelet[3418]: I0903 23:25:08.048438 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-host-proc-sys-net\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.049195 kubelet[3418]: I0903 23:25:08.048448 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq44n\" (UniqueName: \"kubernetes.io/projected/2a6b4344-f248-4793-80ef-485c882efec3-kube-api-access-jq44n\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.049515 kubelet[3418]: I0903 23:25:08.048493 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54d0e5e3-faaa-4955-a270-ede5c51fe162-xtables-lock\") pod \"kube-proxy-znfds\" (UID: \"54d0e5e3-faaa-4955-a270-ede5c51fe162\") " pod="kube-system/kube-proxy-znfds" Sep 3 23:25:08.049515 kubelet[3418]: I0903 23:25:08.048541 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-lib-modules\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.049515 kubelet[3418]: I0903 23:25:08.048922 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a6b4344-f248-4793-80ef-485c882efec3-hubble-tls\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.049515 kubelet[3418]: I0903 23:25:08.048951 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-hostproc\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.049515 kubelet[3418]: I0903 23:25:08.048966 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-cilium-cgroup\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.049515 kubelet[3418]: I0903 23:25:08.048980 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a6b4344-f248-4793-80ef-485c882efec3-clustermesh-secrets\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.049814 kubelet[3418]: I0903 23:25:08.048993 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-host-proc-sys-kernel\") pod \"cilium-vwvrh\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " pod="kube-system/cilium-vwvrh" Sep 3 23:25:08.189842 systemd[1]: Created slice kubepods-besteffort-pod9aa681fc_0e7e_4aff_acb4_8782d04a93c2.slice - libcontainer container 
kubepods-besteffort-pod9aa681fc_0e7e_4aff_acb4_8782d04a93c2.slice. Sep 3 23:25:08.252552 kubelet[3418]: I0903 23:25:08.252147 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9aa681fc-0e7e-4aff-acb4-8782d04a93c2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mptfd\" (UID: \"9aa681fc-0e7e-4aff-acb4-8782d04a93c2\") " pod="kube-system/cilium-operator-6c4d7847fc-mptfd" Sep 3 23:25:08.252552 kubelet[3418]: I0903 23:25:08.252176 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v5b4\" (UniqueName: \"kubernetes.io/projected/9aa681fc-0e7e-4aff-acb4-8782d04a93c2-kube-api-access-9v5b4\") pod \"cilium-operator-6c4d7847fc-mptfd\" (UID: \"9aa681fc-0e7e-4aff-acb4-8782d04a93c2\") " pod="kube-system/cilium-operator-6c4d7847fc-mptfd" Sep 3 23:25:08.265709 containerd[1881]: time="2025-09-03T23:25:08.265680288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-znfds,Uid:54d0e5e3-faaa-4955-a270-ede5c51fe162,Namespace:kube-system,Attempt:0,}" Sep 3 23:25:08.270726 containerd[1881]: time="2025-09-03T23:25:08.270701342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vwvrh,Uid:2a6b4344-f248-4793-80ef-485c882efec3,Namespace:kube-system,Attempt:0,}" Sep 3 23:25:08.343750 containerd[1881]: time="2025-09-03T23:25:08.343650153Z" level=info msg="connecting to shim b22558ec0b8aba13e337b21f6889a8ed44faf2f6cd5f267fe7c5143489b87f25" address="unix:///run/containerd/s/d5b9573279af3f6ab722556f392db048b8b33bd865ca420e326e7b8fa07ee4af" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:25:08.349882 containerd[1881]: time="2025-09-03T23:25:08.349829824Z" level=info msg="connecting to shim b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b" address="unix:///run/containerd/s/868e1b0d7f1bceb836009d75e662998193a5668769d43159dce5625ad1b575af" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:25:08.368335 systemd[1]: Started cri-containerd-b22558ec0b8aba13e337b21f6889a8ed44faf2f6cd5f267fe7c5143489b87f25.scope - libcontainer container b22558ec0b8aba13e337b21f6889a8ed44faf2f6cd5f267fe7c5143489b87f25. Sep 3 23:25:08.371404 systemd[1]: Started cri-containerd-b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b.scope - libcontainer container b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b. 
Sep 3 23:25:08.398376 containerd[1881]: time="2025-09-03T23:25:08.398254064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-znfds,Uid:54d0e5e3-faaa-4955-a270-ede5c51fe162,Namespace:kube-system,Attempt:0,} returns sandbox id \"b22558ec0b8aba13e337b21f6889a8ed44faf2f6cd5f267fe7c5143489b87f25\"" Sep 3 23:25:08.401162 containerd[1881]: time="2025-09-03T23:25:08.400640379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vwvrh,Uid:2a6b4344-f248-4793-80ef-485c882efec3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\"" Sep 3 23:25:08.402944 containerd[1881]: time="2025-09-03T23:25:08.402918530Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 3 23:25:08.409183 containerd[1881]: time="2025-09-03T23:25:08.409160570Z" level=info msg="CreateContainer within sandbox \"b22558ec0b8aba13e337b21f6889a8ed44faf2f6cd5f267fe7c5143489b87f25\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 3 23:25:08.437561 containerd[1881]: time="2025-09-03T23:25:08.437532419Z" level=info msg="Container 3cfa5b1f9e9bbb8ae0d3cb61dc808f0a026ef66d2986ec237bd07ad3c9158c47: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:08.456366 containerd[1881]: time="2025-09-03T23:25:08.456341608Z" level=info msg="CreateContainer within sandbox \"b22558ec0b8aba13e337b21f6889a8ed44faf2f6cd5f267fe7c5143489b87f25\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3cfa5b1f9e9bbb8ae0d3cb61dc808f0a026ef66d2986ec237bd07ad3c9158c47\"" Sep 3 23:25:08.456932 containerd[1881]: time="2025-09-03T23:25:08.456877170Z" level=info msg="StartContainer for \"3cfa5b1f9e9bbb8ae0d3cb61dc808f0a026ef66d2986ec237bd07ad3c9158c47\"" Sep 3 23:25:08.458127 containerd[1881]: time="2025-09-03T23:25:08.458107133Z" level=info msg="connecting to shim 3cfa5b1f9e9bbb8ae0d3cb61dc808f0a026ef66d2986ec237bd07ad3c9158c47" address="unix:///run/containerd/s/d5b9573279af3f6ab722556f392db048b8b33bd865ca420e326e7b8fa07ee4af" protocol=ttrpc version=3 Sep 3 23:25:08.474320 systemd[1]: Started cri-containerd-3cfa5b1f9e9bbb8ae0d3cb61dc808f0a026ef66d2986ec237bd07ad3c9158c47.scope - libcontainer container 3cfa5b1f9e9bbb8ae0d3cb61dc808f0a026ef66d2986ec237bd07ad3c9158c47. Sep 3 23:25:08.494557 containerd[1881]: time="2025-09-03T23:25:08.494533037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mptfd,Uid:9aa681fc-0e7e-4aff-acb4-8782d04a93c2,Namespace:kube-system,Attempt:0,}" Sep 3 23:25:08.502134 containerd[1881]: time="2025-09-03T23:25:08.502110356Z" level=info msg="StartContainer for \"3cfa5b1f9e9bbb8ae0d3cb61dc808f0a026ef66d2986ec237bd07ad3c9158c47\" returns successfully" Sep 3 23:25:08.539372 containerd[1881]: time="2025-09-03T23:25:08.538986571Z" level=info msg="connecting to shim c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3" address="unix:///run/containerd/s/1c2b18a9e61c7b80ca8374e72d3b1bedde84eb0dd46cf6d02b9156111ca75502" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:25:08.558361 systemd[1]: Started cri-containerd-c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3.scope - libcontainer container c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3. 
Sep 3 23:25:08.591479 containerd[1881]: time="2025-09-03T23:25:08.591445511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mptfd,Uid:9aa681fc-0e7e-4aff-acb4-8782d04a93c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\"" Sep 3 23:25:08.999159 kubelet[3418]: I0903 23:25:08.999102 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-znfds" podStartSLOduration=1.999086824 podStartE2EDuration="1.999086824s" podCreationTimestamp="2025-09-03 23:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:25:08.978172074 +0000 UTC m=+7.126665986" watchObservedRunningTime="2025-09-03 23:25:08.999086824 +0000 UTC m=+7.147580736" Sep 3 23:25:12.040272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198243047.mount: Deactivated successfully. Sep 3 23:25:13.697920 containerd[1881]: time="2025-09-03T23:25:13.697872151Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:13.733176 containerd[1881]: time="2025-09-03T23:25:13.733103200Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 3 23:25:13.737495 containerd[1881]: time="2025-09-03T23:25:13.737247056Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:13.738149 containerd[1881]: time="2025-09-03T23:25:13.738006869Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.334984456s" Sep 3 23:25:13.738149 containerd[1881]: time="2025-09-03T23:25:13.738033926Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 3 23:25:13.741797 containerd[1881]: time="2025-09-03T23:25:13.741656903Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 3 23:25:13.751142 containerd[1881]: time="2025-09-03T23:25:13.750298168Z" level=info msg="CreateContainer within sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 3 23:25:14.089095 containerd[1881]: time="2025-09-03T23:25:14.089012099Z" level=info msg="Container 7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:14.113590 containerd[1881]: time="2025-09-03T23:25:14.113556135Z" level=info msg="CreateContainer within sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\"" Sep 3 23:25:14.115628 containerd[1881]: time="2025-09-03T23:25:14.115158646Z" level=info msg="StartContainer for \"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\"" Sep 3 23:25:14.116018 containerd[1881]: time="2025-09-03T23:25:14.115994158Z" level=info msg="connecting to shim 7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21" address="unix:///run/containerd/s/868e1b0d7f1bceb836009d75e662998193a5668769d43159dce5625ad1b575af" protocol=ttrpc version=3 Sep 3 23:25:14.132308 systemd[1]: Started cri-containerd-7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21.scope - libcontainer container 7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21. Sep 3 23:25:14.158030 containerd[1881]: time="2025-09-03T23:25:14.157846838Z" level=info msg="StartContainer for \"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\" returns successfully" Sep 3 23:25:14.158812 systemd[1]: cri-containerd-7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21.scope: Deactivated successfully. Sep 3 23:25:14.162598 containerd[1881]: time="2025-09-03T23:25:14.162572254Z" level=info msg="received exit event container_id:\"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\" id:\"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\" pid:3833 exited_at:{seconds:1756941914 nanos:162240309}" Sep 3 23:25:14.162788 containerd[1881]: time="2025-09-03T23:25:14.162600879Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\" id:\"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\" pid:3833 exited_at:{seconds:1756941914 nanos:162240309}" Sep 3 23:25:14.176185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21-rootfs.mount: Deactivated successfully. Sep 3 23:25:15.984568 containerd[1881]: time="2025-09-03T23:25:15.984302537Z" level=info msg="CreateContainer within sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 3 23:25:16.008170 containerd[1881]: time="2025-09-03T23:25:16.007772708Z" level=info msg="Container 03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:16.010962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183910469.mount: Deactivated successfully. 
Sep 3 23:25:16.020954 containerd[1881]: time="2025-09-03T23:25:16.020913332Z" level=info msg="CreateContainer within sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\"" Sep 3 23:25:16.022247 containerd[1881]: time="2025-09-03T23:25:16.022110078Z" level=info msg="StartContainer for \"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\"" Sep 3 23:25:16.022786 containerd[1881]: time="2025-09-03T23:25:16.022752928Z" level=info msg="connecting to shim 03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb" address="unix:///run/containerd/s/868e1b0d7f1bceb836009d75e662998193a5668769d43159dce5625ad1b575af" protocol=ttrpc version=3 Sep 3 23:25:16.039342 systemd[1]: Started cri-containerd-03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb.scope - libcontainer container 03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb. Sep 3 23:25:16.067492 containerd[1881]: time="2025-09-03T23:25:16.067471382Z" level=info msg="StartContainer for \"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\" returns successfully" Sep 3 23:25:16.079030 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 3 23:25:16.079193 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:25:16.079912 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:25:16.082319 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:25:16.083591 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 3 23:25:16.087514 systemd[1]: cri-containerd-03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb.scope: Deactivated successfully. Sep 3 23:25:16.088872 containerd[1881]: time="2025-09-03T23:25:16.088848982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\" id:\"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\" pid:3876 exited_at:{seconds:1756941916 nanos:88397314}" Sep 3 23:25:16.089511 containerd[1881]: time="2025-09-03T23:25:16.089261842Z" level=info msg="received exit event container_id:\"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\" id:\"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\" pid:3876 exited_at:{seconds:1756941916 nanos:88397314}" Sep 3 23:25:16.107071 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 3 23:25:16.592738 containerd[1881]: time="2025-09-03T23:25:16.592696126Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:16.595929 containerd[1881]: time="2025-09-03T23:25:16.595797125Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 3 23:25:16.599230 containerd[1881]: time="2025-09-03T23:25:16.599080609Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:25:16.600581 containerd[1881]: time="2025-09-03T23:25:16.600554226Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.858867347s" Sep 3 23:25:16.600651 containerd[1881]: time="2025-09-03T23:25:16.600584979Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 3 23:25:16.609800 containerd[1881]: time="2025-09-03T23:25:16.609774253Z" level=info msg="CreateContainer within sandbox \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 3 23:25:16.626140 containerd[1881]: time="2025-09-03T23:25:16.626116687Z" level=info msg="Container fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:16.639044 containerd[1881]: time="2025-09-03T23:25:16.639016681Z" level=info msg="CreateContainer within sandbox \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\"" Sep 3 23:25:16.639705 containerd[1881]: time="2025-09-03T23:25:16.639589457Z" level=info msg="StartContainer for \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\"" Sep 3 23:25:16.640254 containerd[1881]: time="2025-09-03T23:25:16.640227427Z" level=info msg="connecting to shim fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e" address="unix:///run/containerd/s/1c2b18a9e61c7b80ca8374e72d3b1bedde84eb0dd46cf6d02b9156111ca75502" protocol=ttrpc version=3 Sep 3 23:25:16.656317 systemd[1]: Started cri-containerd-fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e.scope - libcontainer container fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e. 
Sep 3 23:25:16.681357 containerd[1881]: time="2025-09-03T23:25:16.681170048Z" level=info msg="StartContainer for \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\" returns successfully" Sep 3 23:25:16.995040 containerd[1881]: time="2025-09-03T23:25:16.995005204Z" level=info msg="CreateContainer within sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 3 23:25:17.009881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb-rootfs.mount: Deactivated successfully. Sep 3 23:25:17.020534 containerd[1881]: time="2025-09-03T23:25:17.019274877Z" level=info msg="Container 69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:17.022862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1770165292.mount: Deactivated successfully. Sep 3 23:25:17.038688 containerd[1881]: time="2025-09-03T23:25:17.038642260Z" level=info msg="CreateContainer within sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\"" Sep 3 23:25:17.040487 containerd[1881]: time="2025-09-03T23:25:17.040450775Z" level=info msg="StartContainer for \"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\"" Sep 3 23:25:17.041491 containerd[1881]: time="2025-09-03T23:25:17.041442667Z" level=info msg="connecting to shim 69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c" address="unix:///run/containerd/s/868e1b0d7f1bceb836009d75e662998193a5668769d43159dce5625ad1b575af" protocol=ttrpc version=3 Sep 3 23:25:17.067365 systemd[1]: Started cri-containerd-69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c.scope - libcontainer container 69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c. Sep 3 23:25:17.103631 kubelet[3418]: I0903 23:25:17.103586 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mptfd" podStartSLOduration=1.094669343 podStartE2EDuration="9.103570298s" podCreationTimestamp="2025-09-03 23:25:08 +0000 UTC" firstStartedPulling="2025-09-03 23:25:08.592909154 +0000 UTC m=+6.741403066" lastFinishedPulling="2025-09-03 23:25:16.601810109 +0000 UTC m=+14.750304021" observedRunningTime="2025-09-03 23:25:17.024637035 +0000 UTC m=+15.173130956" watchObservedRunningTime="2025-09-03 23:25:17.103570298 +0000 UTC m=+15.252064210" Sep 3 23:25:17.117281 containerd[1881]: time="2025-09-03T23:25:17.117246514Z" level=info msg="StartContainer for \"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\" returns successfully" Sep 3 23:25:17.118776 systemd[1]: cri-containerd-69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c.scope: Deactivated successfully. 
Sep 3 23:25:17.121867 containerd[1881]: time="2025-09-03T23:25:17.121838442Z" level=info msg="received exit event container_id:\"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\" id:\"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\" pid:3977 exited_at:{seconds:1756941917 nanos:121621076}" Sep 3 23:25:17.122554 containerd[1881]: time="2025-09-03T23:25:17.122192084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\" id:\"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\" pid:3977 exited_at:{seconds:1756941917 nanos:121621076}" Sep 3 23:25:17.142814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c-rootfs.mount: Deactivated successfully. Sep 3 23:25:18.005888 containerd[1881]: time="2025-09-03T23:25:18.005850891Z" level=info msg="CreateContainer within sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 3 23:25:18.039591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1266810456.mount: Deactivated successfully. Sep 3 23:25:18.041723 containerd[1881]: time="2025-09-03T23:25:18.041578677Z" level=info msg="Container 47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:18.057230 containerd[1881]: time="2025-09-03T23:25:18.057150010Z" level=info msg="CreateContainer within sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\"" Sep 3 23:25:18.057909 containerd[1881]: time="2025-09-03T23:25:18.057885407Z" level=info msg="StartContainer for \"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\"" Sep 3 23:25:18.058526 containerd[1881]: time="2025-09-03T23:25:18.058499472Z" level=info msg="connecting to shim 47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6" address="unix:///run/containerd/s/868e1b0d7f1bceb836009d75e662998193a5668769d43159dce5625ad1b575af" protocol=ttrpc version=3 Sep 3 23:25:18.075321 systemd[1]: Started cri-containerd-47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6.scope - libcontainer container 47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6. Sep 3 23:25:18.094072 systemd[1]: cri-containerd-47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6.scope: Deactivated successfully. 
Sep 3 23:25:18.097337 containerd[1881]: time="2025-09-03T23:25:18.097299593Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\" id:\"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\" pid:4016 exited_at:{seconds:1756941918 nanos:97009976}" Sep 3 23:25:18.103153 containerd[1881]: time="2025-09-03T23:25:18.103035625Z" level=info msg="received exit event container_id:\"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\" id:\"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\" pid:4016 exited_at:{seconds:1756941918 nanos:97009976}" Sep 3 23:25:18.108109 containerd[1881]: time="2025-09-03T23:25:18.108084351Z" level=info msg="StartContainer for \"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\" returns successfully" Sep 3 23:25:18.118957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6-rootfs.mount: Deactivated successfully. Sep 3 23:25:19.006218 containerd[1881]: time="2025-09-03T23:25:19.006176107Z" level=info msg="CreateContainer within sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 3 23:25:19.029139 containerd[1881]: time="2025-09-03T23:25:19.028675402Z" level=info msg="Container 6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:19.044712 containerd[1881]: time="2025-09-03T23:25:19.044685115Z" level=info msg="CreateContainer within sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\"" Sep 3 23:25:19.047112 containerd[1881]: time="2025-09-03T23:25:19.047081487Z" level=info msg="StartContainer for \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\"" Sep 3 23:25:19.047833 containerd[1881]: time="2025-09-03T23:25:19.047777442Z" level=info msg="connecting to shim 6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a" address="unix:///run/containerd/s/868e1b0d7f1bceb836009d75e662998193a5668769d43159dce5625ad1b575af" protocol=ttrpc version=3 Sep 3 23:25:19.065322 systemd[1]: Started cri-containerd-6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a.scope - libcontainer container 6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a. Sep 3 23:25:19.091599 containerd[1881]: time="2025-09-03T23:25:19.091563654Z" level=info msg="StartContainer for \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" returns successfully" Sep 3 23:25:19.149516 containerd[1881]: time="2025-09-03T23:25:19.149486575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" id:\"3d2b5549433f89c74388158b236d94584a5f9e0813e30c30ff5a99199cf64340\" pid:4082 exited_at:{seconds:1756941919 nanos:148898479}" Sep 3 23:25:19.226220 kubelet[3418]: I0903 23:25:19.226166 3418 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 3 23:25:19.272256 systemd[1]: Created slice kubepods-burstable-pod44aca003_0418_417c_9663_85a01116981a.slice - libcontainer container kubepods-burstable-pod44aca003_0418_417c_9663_85a01116981a.slice. 
Sep 3 23:25:19.279930 systemd[1]: Created slice kubepods-burstable-pod8ef5a92f_d3bf_431f_bf5b_73f7fddfb139.slice - libcontainer container kubepods-burstable-pod8ef5a92f_d3bf_431f_bf5b_73f7fddfb139.slice. Sep 3 23:25:19.318919 kubelet[3418]: I0903 23:25:19.318885 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ef5a92f-d3bf-431f-bf5b-73f7fddfb139-config-volume\") pod \"coredns-674b8bbfcf-z5xtj\" (UID: \"8ef5a92f-d3bf-431f-bf5b-73f7fddfb139\") " pod="kube-system/coredns-674b8bbfcf-z5xtj" Sep 3 23:25:19.319056 kubelet[3418]: I0903 23:25:19.318937 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44aca003-0418-417c-9663-85a01116981a-config-volume\") pod \"coredns-674b8bbfcf-pwprk\" (UID: \"44aca003-0418-417c-9663-85a01116981a\") " pod="kube-system/coredns-674b8bbfcf-pwprk" Sep 3 23:25:19.319056 kubelet[3418]: I0903 23:25:19.318949 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bf7z\" (UniqueName: \"kubernetes.io/projected/44aca003-0418-417c-9663-85a01116981a-kube-api-access-4bf7z\") pod \"coredns-674b8bbfcf-pwprk\" (UID: \"44aca003-0418-417c-9663-85a01116981a\") " pod="kube-system/coredns-674b8bbfcf-pwprk" Sep 3 23:25:19.319056 kubelet[3418]: I0903 23:25:19.318961 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwx5p\" (UniqueName: \"kubernetes.io/projected/8ef5a92f-d3bf-431f-bf5b-73f7fddfb139-kube-api-access-cwx5p\") pod \"coredns-674b8bbfcf-z5xtj\" (UID: \"8ef5a92f-d3bf-431f-bf5b-73f7fddfb139\") " pod="kube-system/coredns-674b8bbfcf-z5xtj" Sep 3 23:25:19.577046 containerd[1881]: time="2025-09-03T23:25:19.576747731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pwprk,Uid:44aca003-0418-417c-9663-85a01116981a,Namespace:kube-system,Attempt:0,}" Sep 3 23:25:19.584905 containerd[1881]: time="2025-09-03T23:25:19.584873511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z5xtj,Uid:8ef5a92f-d3bf-431f-bf5b-73f7fddfb139,Namespace:kube-system,Attempt:0,}" Sep 3 23:25:20.019586 kubelet[3418]: I0903 23:25:20.018969 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vwvrh" podStartSLOduration=7.681513628 podStartE2EDuration="13.018955496s" podCreationTimestamp="2025-09-03 23:25:07 +0000 UTC" firstStartedPulling="2025-09-03 23:25:08.402208873 +0000 UTC m=+6.550702785" lastFinishedPulling="2025-09-03 23:25:13.739650733 +0000 UTC m=+11.888144653" observedRunningTime="2025-09-03 23:25:20.018128817 +0000 UTC m=+18.166622729" watchObservedRunningTime="2025-09-03 23:25:20.018955496 +0000 UTC m=+18.167449416" Sep 3 23:25:21.226592 systemd-networkd[1706]: cilium_host: Link UP Sep 3 23:25:21.229173 systemd-networkd[1706]: cilium_net: Link UP Sep 3 23:25:21.229429 systemd-networkd[1706]: cilium_net: Gained carrier Sep 3 23:25:21.230269 systemd-networkd[1706]: cilium_host: Gained carrier Sep 3 23:25:21.385292 systemd-networkd[1706]: cilium_net: Gained IPv6LL Sep 3 23:25:21.443134 systemd-networkd[1706]: cilium_vxlan: Link UP Sep 3 23:25:21.443139 systemd-networkd[1706]: cilium_vxlan: Gained carrier Sep 3 23:25:21.685222 kernel: NET: Registered PF_ALG protocol family Sep 3 23:25:22.185295 systemd-networkd[1706]: cilium_host: Gained IPv6LL Sep 3 23:25:22.267905 
systemd-networkd[1706]: lxc_health: Link UP Sep 3 23:25:22.275820 systemd-networkd[1706]: lxc_health: Gained carrier Sep 3 23:25:22.612937 systemd-networkd[1706]: lxcc9b48142193f: Link UP Sep 3 23:25:22.617290 kernel: eth0: renamed from tmp96b27 Sep 3 23:25:22.619294 systemd-networkd[1706]: lxcc9b48142193f: Gained carrier Sep 3 23:25:22.641067 systemd-networkd[1706]: lxc7cc6ee1f3350: Link UP Sep 3 23:25:22.642221 kernel: eth0: renamed from tmp1d3fa Sep 3 23:25:22.642264 systemd-networkd[1706]: lxc7cc6ee1f3350: Gained carrier Sep 3 23:25:23.146366 systemd-networkd[1706]: cilium_vxlan: Gained IPv6LL Sep 3 23:25:23.337408 systemd-networkd[1706]: lxc_health: Gained IPv6LL Sep 3 23:25:24.617385 systemd-networkd[1706]: lxcc9b48142193f: Gained IPv6LL Sep 3 23:25:24.681347 systemd-networkd[1706]: lxc7cc6ee1f3350: Gained IPv6LL Sep 3 23:25:25.147553 containerd[1881]: time="2025-09-03T23:25:25.147432511Z" level=info msg="connecting to shim 1d3fa707b90933d87b7bd1d33e9c180d237055e22b305a247d13be5c703bb7c1" address="unix:///run/containerd/s/bb405e65ac34065e5e4d43b0c1c3adbb50f5666a571413ed51b6945d4faa38ba" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:25:25.149071 containerd[1881]: time="2025-09-03T23:25:25.149040908Z" level=info msg="connecting to shim 96b27704c8a2aa57b6c5460004237ca2251d09dda04fed9557c1b5619013e37d" address="unix:///run/containerd/s/525f8171441eb22be0b99d10f851e4c64222a0cecd219404aacb98efa24441cf" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:25:25.170428 systemd[1]: Started cri-containerd-1d3fa707b90933d87b7bd1d33e9c180d237055e22b305a247d13be5c703bb7c1.scope - libcontainer container 1d3fa707b90933d87b7bd1d33e9c180d237055e22b305a247d13be5c703bb7c1. Sep 3 23:25:25.173843 systemd[1]: Started cri-containerd-96b27704c8a2aa57b6c5460004237ca2251d09dda04fed9557c1b5619013e37d.scope - libcontainer container 96b27704c8a2aa57b6c5460004237ca2251d09dda04fed9557c1b5619013e37d. 
Sep 3 23:25:25.206228 containerd[1881]: time="2025-09-03T23:25:25.206174159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z5xtj,Uid:8ef5a92f-d3bf-431f-bf5b-73f7fddfb139,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d3fa707b90933d87b7bd1d33e9c180d237055e22b305a247d13be5c703bb7c1\"" Sep 3 23:25:25.210969 containerd[1881]: time="2025-09-03T23:25:25.210909733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pwprk,Uid:44aca003-0418-417c-9663-85a01116981a,Namespace:kube-system,Attempt:0,} returns sandbox id \"96b27704c8a2aa57b6c5460004237ca2251d09dda04fed9557c1b5619013e37d\"" Sep 3 23:25:25.214385 containerd[1881]: time="2025-09-03T23:25:25.214291164Z" level=info msg="CreateContainer within sandbox \"1d3fa707b90933d87b7bd1d33e9c180d237055e22b305a247d13be5c703bb7c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 3 23:25:25.220568 containerd[1881]: time="2025-09-03T23:25:25.220541268Z" level=info msg="CreateContainer within sandbox \"96b27704c8a2aa57b6c5460004237ca2251d09dda04fed9557c1b5619013e37d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 3 23:25:25.242113 containerd[1881]: time="2025-09-03T23:25:25.242082732Z" level=info msg="Container 65bdf5216cf751027b9375fcbcfacf92a88d322e91cb6d9ba8c7f93419da6977: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:25.244787 containerd[1881]: time="2025-09-03T23:25:25.244465135Z" level=info msg="Container c6099b9b79910fb4b83cecdfdd33aa0a9e73b64708c17d80700a5381fcd61ff7: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:25:25.259872 containerd[1881]: time="2025-09-03T23:25:25.259844696Z" level=info msg="CreateContainer within sandbox \"1d3fa707b90933d87b7bd1d33e9c180d237055e22b305a247d13be5c703bb7c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65bdf5216cf751027b9375fcbcfacf92a88d322e91cb6d9ba8c7f93419da6977\"" Sep 3 23:25:25.260360 containerd[1881]: time="2025-09-03T23:25:25.260336966Z" level=info msg="StartContainer for \"65bdf5216cf751027b9375fcbcfacf92a88d322e91cb6d9ba8c7f93419da6977\"" Sep 3 23:25:25.262814 containerd[1881]: time="2025-09-03T23:25:25.262790347Z" level=info msg="connecting to shim 65bdf5216cf751027b9375fcbcfacf92a88d322e91cb6d9ba8c7f93419da6977" address="unix:///run/containerd/s/bb405e65ac34065e5e4d43b0c1c3adbb50f5666a571413ed51b6945d4faa38ba" protocol=ttrpc version=3 Sep 3 23:25:25.272543 containerd[1881]: time="2025-09-03T23:25:25.272518094Z" level=info msg="CreateContainer within sandbox \"96b27704c8a2aa57b6c5460004237ca2251d09dda04fed9557c1b5619013e37d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c6099b9b79910fb4b83cecdfdd33aa0a9e73b64708c17d80700a5381fcd61ff7\"" Sep 3 23:25:25.273851 containerd[1881]: time="2025-09-03T23:25:25.273551707Z" level=info msg="StartContainer for \"c6099b9b79910fb4b83cecdfdd33aa0a9e73b64708c17d80700a5381fcd61ff7\"" Sep 3 23:25:25.275329 containerd[1881]: time="2025-09-03T23:25:25.275267587Z" level=info msg="connecting to shim c6099b9b79910fb4b83cecdfdd33aa0a9e73b64708c17d80700a5381fcd61ff7" address="unix:///run/containerd/s/525f8171441eb22be0b99d10f851e4c64222a0cecd219404aacb98efa24441cf" protocol=ttrpc version=3 Sep 3 23:25:25.278697 systemd[1]: Started cri-containerd-65bdf5216cf751027b9375fcbcfacf92a88d322e91cb6d9ba8c7f93419da6977.scope - libcontainer container 65bdf5216cf751027b9375fcbcfacf92a88d322e91cb6d9ba8c7f93419da6977. 
Sep 3 23:25:25.298312 systemd[1]: Started cri-containerd-c6099b9b79910fb4b83cecdfdd33aa0a9e73b64708c17d80700a5381fcd61ff7.scope - libcontainer container c6099b9b79910fb4b83cecdfdd33aa0a9e73b64708c17d80700a5381fcd61ff7. Sep 3 23:25:25.323448 containerd[1881]: time="2025-09-03T23:25:25.323382072Z" level=info msg="StartContainer for \"65bdf5216cf751027b9375fcbcfacf92a88d322e91cb6d9ba8c7f93419da6977\" returns successfully" Sep 3 23:25:25.337724 containerd[1881]: time="2025-09-03T23:25:25.337669435Z" level=info msg="StartContainer for \"c6099b9b79910fb4b83cecdfdd33aa0a9e73b64708c17d80700a5381fcd61ff7\" returns successfully" Sep 3 23:25:26.035710 kubelet[3418]: I0903 23:25:26.035634 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z5xtj" podStartSLOduration=18.03562342 podStartE2EDuration="18.03562342s" podCreationTimestamp="2025-09-03 23:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:25:26.034941425 +0000 UTC m=+24.183435345" watchObservedRunningTime="2025-09-03 23:25:26.03562342 +0000 UTC m=+24.184117332" Sep 3 23:25:26.134577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1339755427.mount: Deactivated successfully. Sep 3 23:27:08.688776 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:50974.service - OpenSSH per-connection server daemon (10.200.16.10:50974). Sep 3 23:27:09.182243 sshd[4746]: Accepted publickey for core from 10.200.16.10 port 50974 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:09.183328 sshd-session[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:09.186937 systemd-logind[1859]: New session 10 of user core. Sep 3 23:27:09.197322 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 3 23:27:09.593684 sshd[4748]: Connection closed by 10.200.16.10 port 50974 Sep 3 23:27:09.593897 sshd-session[4746]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:09.597195 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:50974.service: Deactivated successfully. Sep 3 23:27:09.600763 systemd[1]: session-10.scope: Deactivated successfully. Sep 3 23:27:09.602327 systemd-logind[1859]: Session 10 logged out. Waiting for processes to exit. Sep 3 23:27:09.603879 systemd-logind[1859]: Removed session 10. Sep 3 23:27:14.686643 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:49102.service - OpenSSH per-connection server daemon (10.200.16.10:49102). Sep 3 23:27:15.175124 sshd[4761]: Accepted publickey for core from 10.200.16.10 port 49102 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:15.176143 sshd-session[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:15.179958 systemd-logind[1859]: New session 11 of user core. Sep 3 23:27:15.186315 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 3 23:27:15.564428 sshd[4763]: Connection closed by 10.200.16.10 port 49102 Sep 3 23:27:15.563630 sshd-session[4761]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:15.566905 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:49102.service: Deactivated successfully. Sep 3 23:27:15.568911 systemd[1]: session-11.scope: Deactivated successfully. Sep 3 23:27:15.569994 systemd-logind[1859]: Session 11 logged out. Waiting for processes to exit. Sep 3 23:27:15.572008 systemd-logind[1859]: Removed session 11. 
Sep 3 23:27:20.651225 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:51534.service - OpenSSH per-connection server daemon (10.200.16.10:51534). Sep 3 23:27:21.142877 sshd[4776]: Accepted publickey for core from 10.200.16.10 port 51534 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:21.143853 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:21.147875 systemd-logind[1859]: New session 12 of user core. Sep 3 23:27:21.154310 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 3 23:27:21.529562 sshd[4778]: Connection closed by 10.200.16.10 port 51534 Sep 3 23:27:21.530117 sshd-session[4776]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:21.532888 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:51534.service: Deactivated successfully. Sep 3 23:27:21.535348 systemd[1]: session-12.scope: Deactivated successfully. Sep 3 23:27:21.535927 systemd-logind[1859]: Session 12 logged out. Waiting for processes to exit. Sep 3 23:27:21.536991 systemd-logind[1859]: Removed session 12. Sep 3 23:27:26.617391 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:51542.service - OpenSSH per-connection server daemon (10.200.16.10:51542). Sep 3 23:27:27.093291 sshd[4791]: Accepted publickey for core from 10.200.16.10 port 51542 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:27.094391 sshd-session[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:27.098300 systemd-logind[1859]: New session 13 of user core. Sep 3 23:27:27.106303 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 3 23:27:27.472384 sshd[4793]: Connection closed by 10.200.16.10 port 51542 Sep 3 23:27:27.472866 sshd-session[4791]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:27.476428 systemd-logind[1859]: Session 13 logged out. Waiting for processes to exit. Sep 3 23:27:27.476432 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:51542.service: Deactivated successfully. Sep 3 23:27:27.477756 systemd[1]: session-13.scope: Deactivated successfully. Sep 3 23:27:27.479399 systemd-logind[1859]: Removed session 13. Sep 3 23:27:27.563405 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:51546.service - OpenSSH per-connection server daemon (10.200.16.10:51546). Sep 3 23:27:28.041862 sshd[4806]: Accepted publickey for core from 10.200.16.10 port 51546 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:28.042887 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:28.046426 systemd-logind[1859]: New session 14 of user core. Sep 3 23:27:28.053325 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 3 23:27:28.447879 sshd[4808]: Connection closed by 10.200.16.10 port 51546 Sep 3 23:27:28.447271 sshd-session[4806]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:28.449823 systemd-logind[1859]: Session 14 logged out. Waiting for processes to exit. Sep 3 23:27:28.450353 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:51546.service: Deactivated successfully. Sep 3 23:27:28.451628 systemd[1]: session-14.scope: Deactivated successfully. Sep 3 23:27:28.455784 systemd-logind[1859]: Removed session 14. Sep 3 23:27:28.531449 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:51554.service - OpenSSH per-connection server daemon (10.200.16.10:51554). 
Sep 3 23:27:29.010822 sshd[4818]: Accepted publickey for core from 10.200.16.10 port 51554 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:29.011895 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:29.015484 systemd-logind[1859]: New session 15 of user core. Sep 3 23:27:29.023301 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 3 23:27:29.390795 sshd[4820]: Connection closed by 10.200.16.10 port 51554 Sep 3 23:27:29.391306 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:29.394444 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:51554.service: Deactivated successfully. Sep 3 23:27:29.396134 systemd[1]: session-15.scope: Deactivated successfully. Sep 3 23:27:29.397032 systemd-logind[1859]: Session 15 logged out. Waiting for processes to exit. Sep 3 23:27:29.398633 systemd-logind[1859]: Removed session 15. Sep 3 23:27:34.479818 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:33804.service - OpenSSH per-connection server daemon (10.200.16.10:33804). Sep 3 23:27:34.971690 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 33804 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:34.972727 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:34.976191 systemd-logind[1859]: New session 16 of user core. Sep 3 23:27:34.984316 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 3 23:27:35.375069 sshd[4833]: Connection closed by 10.200.16.10 port 33804 Sep 3 23:27:35.375560 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:35.378488 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:33804.service: Deactivated successfully. Sep 3 23:27:35.380028 systemd[1]: session-16.scope: Deactivated successfully. Sep 3 23:27:35.380994 systemd-logind[1859]: Session 16 logged out. Waiting for processes to exit. Sep 3 23:27:35.381934 systemd-logind[1859]: Removed session 16. Sep 3 23:27:35.467373 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:33816.service - OpenSSH per-connection server daemon (10.200.16.10:33816). Sep 3 23:27:35.943991 sshd[4844]: Accepted publickey for core from 10.200.16.10 port 33816 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:35.945000 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:35.949614 systemd-logind[1859]: New session 17 of user core. Sep 3 23:27:35.959308 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 3 23:27:36.373947 sshd[4846]: Connection closed by 10.200.16.10 port 33816 Sep 3 23:27:36.373808 sshd-session[4844]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:36.377472 systemd-logind[1859]: Session 17 logged out. Waiting for processes to exit. Sep 3 23:27:36.377948 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:33816.service: Deactivated successfully. Sep 3 23:27:36.380694 systemd[1]: session-17.scope: Deactivated successfully. Sep 3 23:27:36.384016 systemd-logind[1859]: Removed session 17. Sep 3 23:27:36.454542 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:33830.service - OpenSSH per-connection server daemon (10.200.16.10:33830). 
Sep 3 23:27:36.911384 sshd[4856]: Accepted publickey for core from 10.200.16.10 port 33830 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:36.912430 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:36.915914 systemd-logind[1859]: New session 18 of user core. Sep 3 23:27:36.923320 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 3 23:27:37.578758 sshd[4858]: Connection closed by 10.200.16.10 port 33830 Sep 3 23:27:37.579299 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:37.582031 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:33830.service: Deactivated successfully. Sep 3 23:27:37.583794 systemd[1]: session-18.scope: Deactivated successfully. Sep 3 23:27:37.584673 systemd-logind[1859]: Session 18 logged out. Waiting for processes to exit. Sep 3 23:27:37.586130 systemd-logind[1859]: Removed session 18. Sep 3 23:27:37.663042 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:33844.service - OpenSSH per-connection server daemon (10.200.16.10:33844). Sep 3 23:27:38.121100 sshd[4875]: Accepted publickey for core from 10.200.16.10 port 33844 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:38.122135 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:38.125774 systemd-logind[1859]: New session 19 of user core. Sep 3 23:27:38.146325 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 3 23:27:38.580192 sshd[4877]: Connection closed by 10.200.16.10 port 33844 Sep 3 23:27:38.580806 sshd-session[4875]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:38.583735 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:33844.service: Deactivated successfully. Sep 3 23:27:38.585225 systemd[1]: session-19.scope: Deactivated successfully. Sep 3 23:27:38.585937 systemd-logind[1859]: Session 19 logged out. Waiting for processes to exit. Sep 3 23:27:38.587718 systemd-logind[1859]: Removed session 19. Sep 3 23:27:38.669761 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:33854.service - OpenSSH per-connection server daemon (10.200.16.10:33854). Sep 3 23:27:39.154771 sshd[4890]: Accepted publickey for core from 10.200.16.10 port 33854 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:39.155838 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:39.160284 systemd-logind[1859]: New session 20 of user core. Sep 3 23:27:39.164476 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 3 23:27:39.532287 sshd[4892]: Connection closed by 10.200.16.10 port 33854 Sep 3 23:27:39.531793 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:39.534500 systemd-logind[1859]: Session 20 logged out. Waiting for processes to exit. Sep 3 23:27:39.535815 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:33854.service: Deactivated successfully. Sep 3 23:27:39.537601 systemd[1]: session-20.scope: Deactivated successfully. Sep 3 23:27:39.539159 systemd-logind[1859]: Removed session 20. Sep 3 23:27:48.707360 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:50022.service - OpenSSH per-connection server daemon (10.200.16.10:50022). 
Sep 3 23:27:49.190343 sshd[4905]: Accepted publickey for core from 10.200.16.10 port 50022 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:49.191416 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:49.195049 systemd-logind[1859]: New session 21 of user core. Sep 3 23:27:49.205313 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 3 23:27:49.567643 sshd[4907]: Connection closed by 10.200.16.10 port 50022 Sep 3 23:27:49.568252 sshd-session[4905]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:49.571170 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:50022.service: Deactivated successfully. Sep 3 23:27:49.573018 systemd[1]: session-21.scope: Deactivated successfully. Sep 3 23:27:49.573699 systemd-logind[1859]: Session 21 logged out. Waiting for processes to exit. Sep 3 23:27:49.574754 systemd-logind[1859]: Removed session 21. Sep 3 23:27:49.665325 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:50032.service - OpenSSH per-connection server daemon (10.200.16.10:50032). Sep 3 23:27:50.118342 sshd[4919]: Accepted publickey for core from 10.200.16.10 port 50032 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:50.119404 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:50.122904 systemd-logind[1859]: New session 22 of user core. Sep 3 23:27:50.134499 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 3 23:27:51.647666 kubelet[3418]: I0903 23:27:51.647602 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pwprk" podStartSLOduration=163.647588522 podStartE2EDuration="2m43.647588522s" podCreationTimestamp="2025-09-03 23:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:25:26.069137909 +0000 UTC m=+24.217631829" watchObservedRunningTime="2025-09-03 23:27:51.647588522 +0000 UTC m=+169.796082434" Sep 3 23:27:51.673985 containerd[1881]: time="2025-09-03T23:27:51.673945615Z" level=info msg="StopContainer for \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\" with timeout 30 (s)" Sep 3 23:27:51.674898 containerd[1881]: time="2025-09-03T23:27:51.674865905Z" level=info msg="Stop container \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\" with signal terminated" Sep 3 23:27:51.678657 containerd[1881]: time="2025-09-03T23:27:51.678607626Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 3 23:27:51.684399 containerd[1881]: time="2025-09-03T23:27:51.684378253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" id:\"8b83b0a7b6cf31c6b55ea25482bda6c17b46ddffe24e16945abf18d1aa807c74\" pid:4941 exited_at:{seconds:1756942071 nanos:684147390}" Sep 3 23:27:51.686052 systemd[1]: cri-containerd-fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e.scope: Deactivated successfully. 
Sep 3 23:27:51.688257 containerd[1881]: time="2025-09-03T23:27:51.686911300Z" level=info msg="StopContainer for \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" with timeout 2 (s)" Sep 3 23:27:51.688257 containerd[1881]: time="2025-09-03T23:27:51.687246653Z" level=info msg="Stop container \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" with signal terminated" Sep 3 23:27:51.689477 containerd[1881]: time="2025-09-03T23:27:51.689271110Z" level=info msg="received exit event container_id:\"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\" id:\"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\" pid:3944 exited_at:{seconds:1756942071 nanos:688980870}" Sep 3 23:27:51.689715 containerd[1881]: time="2025-09-03T23:27:51.689692074Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\" id:\"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\" pid:3944 exited_at:{seconds:1756942071 nanos:688980870}" Sep 3 23:27:51.695582 systemd-networkd[1706]: lxc_health: Link DOWN Sep 3 23:27:51.695586 systemd-networkd[1706]: lxc_health: Lost carrier Sep 3 23:27:51.710007 systemd[1]: cri-containerd-6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a.scope: Deactivated successfully. Sep 3 23:27:51.710436 systemd[1]: cri-containerd-6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a.scope: Consumed 4.334s CPU time, 125.3M memory peak, 128K read from disk, 12.9M written to disk. Sep 3 23:27:51.713110 containerd[1881]: time="2025-09-03T23:27:51.712978449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" id:\"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" pid:4053 exited_at:{seconds:1756942071 nanos:712465323}" Sep 3 23:27:51.713110 containerd[1881]: time="2025-09-03T23:27:51.713017074Z" level=info msg="received exit event container_id:\"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" id:\"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" pid:4053 exited_at:{seconds:1756942071 nanos:712465323}" Sep 3 23:27:51.718765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e-rootfs.mount: Deactivated successfully. Sep 3 23:27:51.729603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a-rootfs.mount: Deactivated successfully. 
Sep 3 23:27:51.787649 containerd[1881]: time="2025-09-03T23:27:51.787605924Z" level=info msg="StopContainer for \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" returns successfully" Sep 3 23:27:51.788268 containerd[1881]: time="2025-09-03T23:27:51.788246678Z" level=info msg="StopPodSandbox for \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\"" Sep 3 23:27:51.788330 containerd[1881]: time="2025-09-03T23:27:51.788300679Z" level=info msg="Container to stop \"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:27:51.788330 containerd[1881]: time="2025-09-03T23:27:51.788316152Z" level=info msg="Container to stop \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:27:51.788330 containerd[1881]: time="2025-09-03T23:27:51.788322904Z" level=info msg="Container to stop \"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:27:51.788330 containerd[1881]: time="2025-09-03T23:27:51.788329144Z" level=info msg="Container to stop \"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:27:51.788399 containerd[1881]: time="2025-09-03T23:27:51.788337464Z" level=info msg="Container to stop \"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:27:51.791022 containerd[1881]: time="2025-09-03T23:27:51.790960114Z" level=info msg="StopContainer for \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\" returns successfully" Sep 3 23:27:51.791596 containerd[1881]: time="2025-09-03T23:27:51.791548523Z" level=info msg="StopPodSandbox for \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\"" Sep 3 23:27:51.791596 containerd[1881]: time="2025-09-03T23:27:51.791588836Z" level=info msg="Container to stop \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:27:51.794016 systemd[1]: cri-containerd-b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b.scope: Deactivated successfully. Sep 3 23:27:51.797100 containerd[1881]: time="2025-09-03T23:27:51.796653866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" id:\"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" pid:3566 exit_status:137 exited_at:{seconds:1756942071 nanos:795741409}" Sep 3 23:27:51.802366 systemd[1]: cri-containerd-c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3.scope: Deactivated successfully. Sep 3 23:27:51.816831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b-rootfs.mount: Deactivated successfully. Sep 3 23:27:51.823660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3-rootfs.mount: Deactivated successfully. 
Sep 3 23:27:51.837283 containerd[1881]: time="2025-09-03T23:27:51.837256496Z" level=info msg="shim disconnected" id=b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b namespace=k8s.io Sep 3 23:27:51.837477 containerd[1881]: time="2025-09-03T23:27:51.837277401Z" level=warning msg="cleaning up after shim disconnected" id=b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b namespace=k8s.io Sep 3 23:27:51.837477 containerd[1881]: time="2025-09-03T23:27:51.837297873Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 3 23:27:51.838483 containerd[1881]: time="2025-09-03T23:27:51.838457234Z" level=info msg="shim disconnected" id=c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3 namespace=k8s.io Sep 3 23:27:51.839471 containerd[1881]: time="2025-09-03T23:27:51.838480723Z" level=warning msg="cleaning up after shim disconnected" id=c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3 namespace=k8s.io Sep 3 23:27:51.839471 containerd[1881]: time="2025-09-03T23:27:51.838498683Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 3 23:27:51.839471 containerd[1881]: time="2025-09-03T23:27:51.838640751Z" level=info msg="received exit event sandbox_id:\"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" exit_status:137 exited_at:{seconds:1756942071 nanos:795741409}" Sep 3 23:27:51.840844 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b-shm.mount: Deactivated successfully. Sep 3 23:27:51.842765 containerd[1881]: time="2025-09-03T23:27:51.842742498Z" level=info msg="TearDown network for sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" successfully" Sep 3 23:27:51.843001 containerd[1881]: time="2025-09-03T23:27:51.842897575Z" level=info msg="StopPodSandbox for \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" returns successfully" Sep 3 23:27:51.852717 containerd[1881]: time="2025-09-03T23:27:51.852689442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\" id:\"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\" pid:3652 exit_status:137 exited_at:{seconds:1756942071 nanos:805998601}" Sep 3 23:27:51.852790 containerd[1881]: time="2025-09-03T23:27:51.852780005Z" level=info msg="received exit event sandbox_id:\"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\" exit_status:137 exited_at:{seconds:1756942071 nanos:805998601}" Sep 3 23:27:51.853036 containerd[1881]: time="2025-09-03T23:27:51.853016027Z" level=info msg="TearDown network for sandbox \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\" successfully" Sep 3 23:27:51.853157 containerd[1881]: time="2025-09-03T23:27:51.853142319Z" level=info msg="StopPodSandbox for \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\" returns successfully" Sep 3 23:27:51.911639 kubelet[3418]: I0903 23:27:51.911550 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-cni-path\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.911834 kubelet[3418]: I0903 23:27:51.911819 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-host-proc-sys-net\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.911907 kubelet[3418]: I0903 23:27:51.911897 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-hostproc\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.911985 kubelet[3418]: I0903 23:27:51.911974 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9aa681fc-0e7e-4aff-acb4-8782d04a93c2-cilium-config-path\") pod \"9aa681fc-0e7e-4aff-acb4-8782d04a93c2\" (UID: \"9aa681fc-0e7e-4aff-acb4-8782d04a93c2\") " Sep 3 23:27:51.912822 kubelet[3418]: I0903 23:27:51.912475 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-host-proc-sys-kernel\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.912822 kubelet[3418]: I0903 23:27:51.912496 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq44n\" (UniqueName: \"kubernetes.io/projected/2a6b4344-f248-4793-80ef-485c882efec3-kube-api-access-jq44n\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.912822 kubelet[3418]: I0903 23:27:51.912505 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-etc-cni-netd\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.912822 kubelet[3418]: I0903 23:27:51.912517 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v5b4\" (UniqueName: \"kubernetes.io/projected/9aa681fc-0e7e-4aff-acb4-8782d04a93c2-kube-api-access-9v5b4\") pod \"9aa681fc-0e7e-4aff-acb4-8782d04a93c2\" (UID: \"9aa681fc-0e7e-4aff-acb4-8782d04a93c2\") " Sep 3 23:27:51.912822 kubelet[3418]: I0903 23:27:51.912526 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-cilium-run\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.912822 kubelet[3418]: I0903 23:27:51.912534 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-xtables-lock\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.912961 kubelet[3418]: I0903 23:27:51.912546 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a6b4344-f248-4793-80ef-485c882efec3-cilium-config-path\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.912961 kubelet[3418]: I0903 23:27:51.912556 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-cilium-cgroup\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.912961 kubelet[3418]: I0903 23:27:51.912566 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a6b4344-f248-4793-80ef-485c882efec3-clustermesh-secrets\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.912961 kubelet[3418]: I0903 23:27:51.912576 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-bpf-maps\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.912961 kubelet[3418]: I0903 23:27:51.912583 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-lib-modules\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.912961 kubelet[3418]: I0903 23:27:51.912593 3418 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a6b4344-f248-4793-80ef-485c882efec3-hubble-tls\") pod \"2a6b4344-f248-4793-80ef-485c882efec3\" (UID: \"2a6b4344-f248-4793-80ef-485c882efec3\") " Sep 3 23:27:51.913926 kubelet[3418]: I0903 23:27:51.911786 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-cni-path" (OuterVolumeSpecName: "cni-path") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:27:51.914023 kubelet[3418]: I0903 23:27:51.912029 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:27:51.914071 kubelet[3418]: I0903 23:27:51.912039 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-hostproc" (OuterVolumeSpecName: "hostproc") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:27:51.914112 kubelet[3418]: I0903 23:27:51.913885 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aa681fc-0e7e-4aff-acb4-8782d04a93c2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9aa681fc-0e7e-4aff-acb4-8782d04a93c2" (UID: "9aa681fc-0e7e-4aff-acb4-8782d04a93c2"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 3 23:27:51.914154 kubelet[3418]: I0903 23:27:51.913906 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:27:51.914263 kubelet[3418]: I0903 23:27:51.914214 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:27:51.914401 kubelet[3418]: I0903 23:27:51.914385 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6b4344-f248-4793-80ef-485c882efec3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 3 23:27:51.914639 kubelet[3418]: I0903 23:27:51.914624 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:27:51.916211 kubelet[3418]: I0903 23:27:51.915806 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a6b4344-f248-4793-80ef-485c882efec3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 3 23:27:51.916211 kubelet[3418]: I0903 23:27:51.915884 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6b4344-f248-4793-80ef-485c882efec3-kube-api-access-jq44n" (OuterVolumeSpecName: "kube-api-access-jq44n") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "kube-api-access-jq44n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 3 23:27:51.916211 kubelet[3418]: I0903 23:27:51.915898 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:27:51.916211 kubelet[3418]: I0903 23:27:51.916079 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:27:51.917608 kubelet[3418]: I0903 23:27:51.917587 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa681fc-0e7e-4aff-acb4-8782d04a93c2-kube-api-access-9v5b4" (OuterVolumeSpecName: "kube-api-access-9v5b4") pod "9aa681fc-0e7e-4aff-acb4-8782d04a93c2" (UID: "9aa681fc-0e7e-4aff-acb4-8782d04a93c2"). InnerVolumeSpecName "kube-api-access-9v5b4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 3 23:27:51.917707 kubelet[3418]: I0903 23:27:51.917694 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:27:51.917779 kubelet[3418]: I0903 23:27:51.917766 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 3 23:27:51.918433 kubelet[3418]: I0903 23:27:51.918182 3418 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a6b4344-f248-4793-80ef-485c882efec3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2a6b4344-f248-4793-80ef-485c882efec3" (UID: "2a6b4344-f248-4793-80ef-485c882efec3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 3 23:27:51.925017 systemd[1]: Removed slice kubepods-besteffort-pod9aa681fc_0e7e_4aff_acb4_8782d04a93c2.slice - libcontainer container kubepods-besteffort-pod9aa681fc_0e7e_4aff_acb4_8782d04a93c2.slice. Sep 3 23:27:51.927265 systemd[1]: Removed slice kubepods-burstable-pod2a6b4344_f248_4793_80ef_485c882efec3.slice - libcontainer container kubepods-burstable-pod2a6b4344_f248_4793_80ef_485c882efec3.slice. Sep 3 23:27:51.927359 systemd[1]: kubepods-burstable-pod2a6b4344_f248_4793_80ef_485c882efec3.slice: Consumed 4.390s CPU time, 125.7M memory peak, 128K read from disk, 12.9M written to disk. 
Sep 3 23:27:52.008111 kubelet[3418]: E0903 23:27:52.008075 3418 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 3 23:27:52.013282 kubelet[3418]: I0903 23:27:52.013253 3418 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a6b4344-f248-4793-80ef-485c882efec3-cilium-config-path\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013282 kubelet[3418]: I0903 23:27:52.013282 3418 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-cilium-cgroup\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013346 kubelet[3418]: I0903 23:27:52.013291 3418 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a6b4344-f248-4793-80ef-485c882efec3-clustermesh-secrets\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013346 kubelet[3418]: I0903 23:27:52.013297 3418 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-bpf-maps\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013346 kubelet[3418]: I0903 23:27:52.013304 3418 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-lib-modules\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013346 kubelet[3418]: I0903 23:27:52.013310 3418 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a6b4344-f248-4793-80ef-485c882efec3-hubble-tls\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013346 kubelet[3418]: I0903 23:27:52.013316 3418 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-cni-path\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013346 kubelet[3418]: I0903 23:27:52.013321 3418 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-host-proc-sys-net\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013346 kubelet[3418]: I0903 23:27:52.013325 3418 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-hostproc\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013346 kubelet[3418]: I0903 23:27:52.013331 3418 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9aa681fc-0e7e-4aff-acb4-8782d04a93c2-cilium-config-path\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013469 kubelet[3418]: I0903 23:27:52.013336 3418 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-host-proc-sys-kernel\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013469 kubelet[3418]: I0903 23:27:52.013347 3418 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jq44n\" (UniqueName: 
\"kubernetes.io/projected/2a6b4344-f248-4793-80ef-485c882efec3-kube-api-access-jq44n\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013469 kubelet[3418]: I0903 23:27:52.013354 3418 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-etc-cni-netd\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013469 kubelet[3418]: I0903 23:27:52.013359 3418 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9v5b4\" (UniqueName: \"kubernetes.io/projected/9aa681fc-0e7e-4aff-acb4-8782d04a93c2-kube-api-access-9v5b4\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013469 kubelet[3418]: I0903 23:27:52.013364 3418 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-cilium-run\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.013469 kubelet[3418]: I0903 23:27:52.013369 3418 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a6b4344-f248-4793-80ef-485c882efec3-xtables-lock\") on node \"ci-4372.1.0-n-46801d0988\" DevicePath \"\"" Sep 3 23:27:52.253021 kubelet[3418]: I0903 23:27:52.252925 3418 scope.go:117] "RemoveContainer" containerID="fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e" Sep 3 23:27:52.257568 containerd[1881]: time="2025-09-03T23:27:52.257522307Z" level=info msg="RemoveContainer for \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\"" Sep 3 23:27:52.334915 containerd[1881]: time="2025-09-03T23:27:52.334876539Z" level=info msg="RemoveContainer for \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\" returns successfully" Sep 3 23:27:52.335166 kubelet[3418]: I0903 23:27:52.335133 3418 scope.go:117] "RemoveContainer" containerID="fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e" Sep 3 23:27:52.335438 containerd[1881]: time="2025-09-03T23:27:52.335408241Z" level=error msg="ContainerStatus for \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\": not found" Sep 3 23:27:52.335714 kubelet[3418]: E0903 23:27:52.335683 3418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\": not found" containerID="fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e" Sep 3 23:27:52.335924 kubelet[3418]: I0903 23:27:52.335803 3418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e"} err="failed to get container status \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdc5a50e964c51b418f58d013ae7b594df690461490c954eefd5941eade3026e\": not found" Sep 3 23:27:52.335924 kubelet[3418]: I0903 23:27:52.335846 3418 scope.go:117] "RemoveContainer" containerID="6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a" Sep 3 23:27:52.337325 containerd[1881]: time="2025-09-03T23:27:52.337300583Z" level=info msg="RemoveContainer for 
\"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\"" Sep 3 23:27:52.347340 containerd[1881]: time="2025-09-03T23:27:52.347310168Z" level=info msg="RemoveContainer for \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" returns successfully" Sep 3 23:27:52.347545 kubelet[3418]: I0903 23:27:52.347456 3418 scope.go:117] "RemoveContainer" containerID="47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6" Sep 3 23:27:52.348814 containerd[1881]: time="2025-09-03T23:27:52.348515522Z" level=info msg="RemoveContainer for \"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\"" Sep 3 23:27:52.367384 containerd[1881]: time="2025-09-03T23:27:52.367339299Z" level=info msg="RemoveContainer for \"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\" returns successfully" Sep 3 23:27:52.367669 kubelet[3418]: I0903 23:27:52.367501 3418 scope.go:117] "RemoveContainer" containerID="69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c" Sep 3 23:27:52.369498 containerd[1881]: time="2025-09-03T23:27:52.369458743Z" level=info msg="RemoveContainer for \"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\"" Sep 3 23:27:52.377190 containerd[1881]: time="2025-09-03T23:27:52.377163640Z" level=info msg="RemoveContainer for \"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\" returns successfully" Sep 3 23:27:52.377345 kubelet[3418]: I0903 23:27:52.377320 3418 scope.go:117] "RemoveContainer" containerID="03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb" Sep 3 23:27:52.378465 containerd[1881]: time="2025-09-03T23:27:52.378441508Z" level=info msg="RemoveContainer for \"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\"" Sep 3 23:27:52.386728 containerd[1881]: time="2025-09-03T23:27:52.386704884Z" level=info msg="RemoveContainer for \"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\" returns successfully" Sep 3 23:27:52.386864 kubelet[3418]: I0903 23:27:52.386843 3418 scope.go:117] "RemoveContainer" containerID="7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21" Sep 3 23:27:52.387845 containerd[1881]: time="2025-09-03T23:27:52.387818235Z" level=info msg="RemoveContainer for \"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\"" Sep 3 23:27:52.396161 containerd[1881]: time="2025-09-03T23:27:52.396111765Z" level=info msg="RemoveContainer for \"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\" returns successfully" Sep 3 23:27:52.396460 kubelet[3418]: I0903 23:27:52.396439 3418 scope.go:117] "RemoveContainer" containerID="6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a" Sep 3 23:27:52.397333 containerd[1881]: time="2025-09-03T23:27:52.397302366Z" level=error msg="ContainerStatus for \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\": not found" Sep 3 23:27:52.397499 kubelet[3418]: E0903 23:27:52.397479 3418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\": not found" containerID="6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a" Sep 3 23:27:52.397544 kubelet[3418]: I0903 23:27:52.397502 3418 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"containerd","ID":"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a"} err="failed to get container status \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c39d8bb40fb4918d6de92391510c71c36ff50bdfab776d4ee8cfe5b1130d31a\": not found" Sep 3 23:27:52.397544 kubelet[3418]: I0903 23:27:52.397517 3418 scope.go:117] "RemoveContainer" containerID="47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6" Sep 3 23:27:52.397768 containerd[1881]: time="2025-09-03T23:27:52.397736778Z" level=error msg="ContainerStatus for \"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\": not found" Sep 3 23:27:52.397971 kubelet[3418]: E0903 23:27:52.397863 3418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\": not found" containerID="47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6" Sep 3 23:27:52.397971 kubelet[3418]: I0903 23:27:52.397883 3418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6"} err="failed to get container status \"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\": rpc error: code = NotFound desc = an error occurred when try to find container \"47900264ac2b4aa762d5d0e0f78261cc278a96413bc55baaeb5fd58f9e603be6\": not found" Sep 3 23:27:52.397971 kubelet[3418]: I0903 23:27:52.397896 3418 scope.go:117] "RemoveContainer" containerID="69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c" Sep 3 23:27:52.398571 containerd[1881]: time="2025-09-03T23:27:52.398540297Z" level=error msg="ContainerStatus for \"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\": not found" Sep 3 23:27:52.398706 kubelet[3418]: E0903 23:27:52.398684 3418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\": not found" containerID="69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c" Sep 3 23:27:52.398902 kubelet[3418]: I0903 23:27:52.398876 3418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c"} err="failed to get container status \"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\": rpc error: code = NotFound desc = an error occurred when try to find container \"69b1814970e5df9da684c8aade2489fd504b946eab343f821a7ce72e219ae39c\": not found" Sep 3 23:27:52.398902 kubelet[3418]: I0903 23:27:52.398900 3418 scope.go:117] "RemoveContainer" containerID="03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb" Sep 3 23:27:52.399208 containerd[1881]: time="2025-09-03T23:27:52.399164090Z" level=error msg="ContainerStatus for \"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\": not found" Sep 3 23:27:52.399368 kubelet[3418]: E0903 23:27:52.399349 3418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\": not found" containerID="03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb" Sep 3 23:27:52.399415 kubelet[3418]: I0903 23:27:52.399369 3418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb"} err="failed to get container status \"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"03a8b26d55e879a4e53cf61b67c58f1fa58b800a75045f11ad94a0f6ca7f34fb\": not found" Sep 3 23:27:52.399415 kubelet[3418]: I0903 23:27:52.399383 3418 scope.go:117] "RemoveContainer" containerID="7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21" Sep 3 23:27:52.399534 containerd[1881]: time="2025-09-03T23:27:52.399510076Z" level=error msg="ContainerStatus for \"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\": not found" Sep 3 23:27:52.399626 kubelet[3418]: E0903 23:27:52.399606 3418 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\": not found" containerID="7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21" Sep 3 23:27:52.399698 kubelet[3418]: I0903 23:27:52.399627 3418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21"} err="failed to get container status \"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b6d05f611b1d67e85c2f9bb87459eeb462eeb5d23be2f16d3f71825a45a0d21\": not found" Sep 3 23:27:52.717378 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3-shm.mount: Deactivated successfully. Sep 3 23:27:52.717483 systemd[1]: var-lib-kubelet-pods-9aa681fc\x2d0e7e\x2d4aff\x2dacb4\x2d8782d04a93c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9v5b4.mount: Deactivated successfully. Sep 3 23:27:52.717531 systemd[1]: var-lib-kubelet-pods-2a6b4344\x2df248\x2d4793\x2d80ef\x2d485c882efec3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djq44n.mount: Deactivated successfully. Sep 3 23:27:52.717576 systemd[1]: var-lib-kubelet-pods-2a6b4344\x2df248\x2d4793\x2d80ef\x2d485c882efec3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 3 23:27:52.717610 systemd[1]: var-lib-kubelet-pods-2a6b4344\x2df248\x2d4793\x2d80ef\x2d485c882efec3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 3 23:27:53.686227 sshd[4921]: Connection closed by 10.200.16.10 port 50032 Sep 3 23:27:53.686526 sshd-session[4919]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:53.689935 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:50032.service: Deactivated successfully. Sep 3 23:27:53.691543 systemd[1]: session-22.scope: Deactivated successfully. Sep 3 23:27:53.692155 systemd-logind[1859]: Session 22 logged out. Waiting for processes to exit. Sep 3 23:27:53.693603 systemd-logind[1859]: Removed session 22. Sep 3 23:27:53.803769 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:35306.service - OpenSSH per-connection server daemon (10.200.16.10:35306). Sep 3 23:27:53.921236 kubelet[3418]: I0903 23:27:53.921047 3418 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6b4344-f248-4793-80ef-485c882efec3" path="/var/lib/kubelet/pods/2a6b4344-f248-4793-80ef-485c882efec3/volumes" Sep 3 23:27:53.921799 kubelet[3418]: I0903 23:27:53.921776 3418 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa681fc-0e7e-4aff-acb4-8782d04a93c2" path="/var/lib/kubelet/pods/9aa681fc-0e7e-4aff-acb4-8782d04a93c2/volumes" Sep 3 23:27:54.292247 sshd[5065]: Accepted publickey for core from 10.200.16.10 port 35306 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw Sep 3 23:27:54.293343 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:54.296943 systemd-logind[1859]: New session 23 of user core. Sep 3 23:27:54.303317 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 3 23:27:54.981528 systemd[1]: Created slice kubepods-burstable-pod1571d7ac_a2bb_4820_a64a_0d9449b1cfd8.slice - libcontainer container kubepods-burstable-pod1571d7ac_a2bb_4820_a64a_0d9449b1cfd8.slice. Sep 3 23:27:55.011900 sshd[5067]: Connection closed by 10.200.16.10 port 35306 Sep 3 23:27:55.011454 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:55.015460 systemd-logind[1859]: Session 23 logged out. Waiting for processes to exit. Sep 3 23:27:55.015776 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:35306.service: Deactivated successfully. Sep 3 23:27:55.017458 systemd[1]: session-23.scope: Deactivated successfully. Sep 3 23:27:55.019112 systemd-logind[1859]: Removed session 23. 
Sep 3 23:27:55.030331 kubelet[3418]: I0903 23:27:55.030097 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-cilium-run\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030331 kubelet[3418]: I0903 23:27:55.030127 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-etc-cni-netd\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030331 kubelet[3418]: I0903 23:27:55.030139 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnsq6\" (UniqueName: \"kubernetes.io/projected/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-kube-api-access-bnsq6\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030331 kubelet[3418]: I0903 23:27:55.030151 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-xtables-lock\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030331 kubelet[3418]: I0903 23:27:55.030160 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-cilium-config-path\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030331 kubelet[3418]: I0903 23:27:55.030168 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-hubble-tls\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030616 kubelet[3418]: I0903 23:27:55.030176 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-cni-path\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030616 kubelet[3418]: I0903 23:27:55.030184 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-lib-modules\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030616 kubelet[3418]: I0903 23:27:55.030193 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-host-proc-sys-net\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030616 kubelet[3418]: I0903 23:27:55.030211 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-host-proc-sys-kernel\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030616 kubelet[3418]: I0903 23:27:55.030223 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-bpf-maps\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030616 kubelet[3418]: I0903 23:27:55.030239 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-hostproc\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030732 kubelet[3418]: I0903 23:27:55.030249 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-clustermesh-secrets\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030732 kubelet[3418]: I0903 23:27:55.030257 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-cilium-ipsec-secrets\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.030732 kubelet[3418]: I0903 23:27:55.030270 3418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1571d7ac-a2bb-4820-a64a-0d9449b1cfd8-cilium-cgroup\") pod \"cilium-26krf\" (UID: \"1571d7ac-a2bb-4820-a64a-0d9449b1cfd8\") " pod="kube-system/cilium-26krf"
Sep 3 23:27:55.103551 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:35318.service - OpenSSH per-connection server daemon (10.200.16.10:35318).
Sep 3 23:27:55.216482 kubelet[3418]: I0903 23:27:55.216443 3418 setters.go:618] "Node became not ready" node="ci-4372.1.0-n-46801d0988" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-03T23:27:55Z","lastTransitionTime":"2025-09-03T23:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 3 23:27:55.285462 containerd[1881]: time="2025-09-03T23:27:55.285388339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-26krf,Uid:1571d7ac-a2bb-4820-a64a-0d9449b1cfd8,Namespace:kube-system,Attempt:0,}"
Sep 3 23:27:55.321670 containerd[1881]: time="2025-09-03T23:27:55.321606541Z" level=info msg="connecting to shim ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6" address="unix:///run/containerd/s/777f8cb806f6768ddba09309b4cc5979d32ad16c4abdf30c00dd5ff68b615d2d" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:27:55.341310 systemd[1]: Started cri-containerd-ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6.scope - libcontainer container ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6.
Sep 3 23:27:55.363386 containerd[1881]: time="2025-09-03T23:27:55.363315218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-26krf,Uid:1571d7ac-a2bb-4820-a64a-0d9449b1cfd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6\""
Sep 3 23:27:55.381981 containerd[1881]: time="2025-09-03T23:27:55.381951143Z" level=info msg="CreateContainer within sandbox \"ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 3 23:27:55.397801 containerd[1881]: time="2025-09-03T23:27:55.397777260Z" level=info msg="Container e59769054c2b931414fe6d7ee2ac5ca9c3e68cc94365c9881ffee9b56748bfcd: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:27:55.412040 containerd[1881]: time="2025-09-03T23:27:55.412013036Z" level=info msg="CreateContainer within sandbox \"ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e59769054c2b931414fe6d7ee2ac5ca9c3e68cc94365c9881ffee9b56748bfcd\""
Sep 3 23:27:55.412493 containerd[1881]: time="2025-09-03T23:27:55.412465609Z" level=info msg="StartContainer for \"e59769054c2b931414fe6d7ee2ac5ca9c3e68cc94365c9881ffee9b56748bfcd\""
Sep 3 23:27:55.413796 containerd[1881]: time="2025-09-03T23:27:55.413777006Z" level=info msg="connecting to shim e59769054c2b931414fe6d7ee2ac5ca9c3e68cc94365c9881ffee9b56748bfcd" address="unix:///run/containerd/s/777f8cb806f6768ddba09309b4cc5979d32ad16c4abdf30c00dd5ff68b615d2d" protocol=ttrpc version=3
Sep 3 23:27:55.434259 systemd[1]: Started cri-containerd-e59769054c2b931414fe6d7ee2ac5ca9c3e68cc94365c9881ffee9b56748bfcd.scope - libcontainer container e59769054c2b931414fe6d7ee2ac5ca9c3e68cc94365c9881ffee9b56748bfcd.
Sep 3 23:27:55.460022 systemd[1]: cri-containerd-e59769054c2b931414fe6d7ee2ac5ca9c3e68cc94365c9881ffee9b56748bfcd.scope: Deactivated successfully.
Sep 3 23:27:55.462258 containerd[1881]: time="2025-09-03T23:27:55.461089823Z" level=info msg="StartContainer for \"e59769054c2b931414fe6d7ee2ac5ca9c3e68cc94365c9881ffee9b56748bfcd\" returns successfully"
Sep 3 23:27:55.462865 containerd[1881]: time="2025-09-03T23:27:55.462842536Z" level=info msg="received exit event container_id:\"e59769054c2b931414fe6d7ee2ac5ca9c3e68cc94365c9881ffee9b56748bfcd\" id:\"e59769054c2b931414fe6d7ee2ac5ca9c3e68cc94365c9881ffee9b56748bfcd\" pid:5140 exited_at:{seconds:1756942075 nanos:460802295}"
Sep 3 23:27:55.463092 containerd[1881]: time="2025-09-03T23:27:55.463070182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e59769054c2b931414fe6d7ee2ac5ca9c3e68cc94365c9881ffee9b56748bfcd\" id:\"e59769054c2b931414fe6d7ee2ac5ca9c3e68cc94365c9881ffee9b56748bfcd\" pid:5140 exited_at:{seconds:1756942075 nanos:460802295}"
Sep 3 23:27:55.595753 sshd[5078]: Accepted publickey for core from 10.200.16.10 port 35318 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw
Sep 3 23:27:55.596883 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:55.600371 systemd-logind[1859]: New session 24 of user core.
Sep 3 23:27:55.605406 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 3 23:27:55.956236 sshd[5173]: Connection closed by 10.200.16.10 port 35318
Sep 3 23:27:55.956758 sshd-session[5078]: pam_unix(sshd:session): session closed for user core
Sep 3 23:27:55.959650 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:35318.service: Deactivated successfully.
Sep 3 23:27:55.961390 systemd[1]: session-24.scope: Deactivated successfully.
Sep 3 23:27:55.962124 systemd-logind[1859]: Session 24 logged out. Waiting for processes to exit.
Sep 3 23:27:55.963536 systemd-logind[1859]: Removed session 24.
Sep 3 23:27:56.059700 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:35322.service - OpenSSH per-connection server daemon (10.200.16.10:35322).
Sep 3 23:27:56.284097 containerd[1881]: time="2025-09-03T23:27:56.284009913Z" level=info msg="CreateContainer within sandbox \"ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 3 23:27:56.302233 containerd[1881]: time="2025-09-03T23:27:56.301822992Z" level=info msg="Container 876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:27:56.305140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633230875.mount: Deactivated successfully.
Sep 3 23:27:56.318938 containerd[1881]: time="2025-09-03T23:27:56.318897569Z" level=info msg="CreateContainer within sandbox \"ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a\""
Sep 3 23:27:56.320230 containerd[1881]: time="2025-09-03T23:27:56.320123563Z" level=info msg="StartContainer for \"876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a\""
Sep 3 23:27:56.321503 containerd[1881]: time="2025-09-03T23:27:56.321475865Z" level=info msg="connecting to shim 876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a" address="unix:///run/containerd/s/777f8cb806f6768ddba09309b4cc5979d32ad16c4abdf30c00dd5ff68b615d2d" protocol=ttrpc version=3
Sep 3 23:27:56.339345 systemd[1]: Started cri-containerd-876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a.scope - libcontainer container 876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a.
Sep 3 23:27:56.368210 systemd[1]: cri-containerd-876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a.scope: Deactivated successfully.
Sep 3 23:27:56.369983 containerd[1881]: time="2025-09-03T23:27:56.369923290Z" level=info msg="received exit event container_id:\"876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a\" id:\"876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a\" pid:5194 exited_at:{seconds:1756942076 nanos:369522375}"
Sep 3 23:27:56.369983 containerd[1881]: time="2025-09-03T23:27:56.369955835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a\" id:\"876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a\" pid:5194 exited_at:{seconds:1756942076 nanos:369522375}"
Sep 3 23:27:56.370718 containerd[1881]: time="2025-09-03T23:27:56.370650646Z" level=info msg="StartContainer for \"876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a\" returns successfully"
Sep 3 23:27:56.386267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-876f126494dc03b0c1335254bee7564db3412d5c83b2b457ddcc5d28e90e550a-rootfs.mount: Deactivated successfully.
Sep 3 23:27:56.542275 sshd[5180]: Accepted publickey for core from 10.200.16.10 port 35322 ssh2: RSA SHA256:+LoyTczYPQZz35LneG7EaruCG6YAUVWd39QoXAwwCdw
Sep 3 23:27:56.543137 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:56.546504 systemd-logind[1859]: New session 25 of user core.
Sep 3 23:27:56.555307 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 3 23:27:57.009334 kubelet[3418]: E0903 23:27:57.009294 3418 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 3 23:27:57.294452 containerd[1881]: time="2025-09-03T23:27:57.294344966Z" level=info msg="CreateContainer within sandbox \"ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 3 23:27:57.321162 containerd[1881]: time="2025-09-03T23:27:57.319695622Z" level=info msg="Container 501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:27:57.341684 containerd[1881]: time="2025-09-03T23:27:57.341652447Z" level=info msg="CreateContainer within sandbox \"ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0\""
Sep 3 23:27:57.342317 containerd[1881]: time="2025-09-03T23:27:57.342298289Z" level=info msg="StartContainer for \"501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0\""
Sep 3 23:27:57.346213 containerd[1881]: time="2025-09-03T23:27:57.345267755Z" level=info msg="connecting to shim 501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0" address="unix:///run/containerd/s/777f8cb806f6768ddba09309b4cc5979d32ad16c4abdf30c00dd5ff68b615d2d" protocol=ttrpc version=3
Sep 3 23:27:57.371349 systemd[1]: Started cri-containerd-501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0.scope - libcontainer container 501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0.
Sep 3 23:27:57.407256 systemd[1]: cri-containerd-501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0.scope: Deactivated successfully.
Sep 3 23:27:57.408721 containerd[1881]: time="2025-09-03T23:27:57.408694740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0\" id:\"501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0\" pid:5245 exited_at:{seconds:1756942077 nanos:408453661}"
Sep 3 23:27:57.409940 containerd[1881]: time="2025-09-03T23:27:57.409886341Z" level=info msg="received exit event container_id:\"501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0\" id:\"501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0\" pid:5245 exited_at:{seconds:1756942077 nanos:408453661}"
Sep 3 23:27:57.412867 containerd[1881]: time="2025-09-03T23:27:57.412824183Z" level=info msg="StartContainer for \"501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0\" returns successfully"
Sep 3 23:27:57.426048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-501db405d475c567070b499023dbb551d6e7aed0c5ea08cab121e65772e5cfc0-rootfs.mount: Deactivated successfully.
Sep 3 23:27:57.919179 kubelet[3418]: E0903 23:27:57.919099 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-pwprk" podUID="44aca003-0418-417c-9663-85a01116981a"
Sep 3 23:27:58.294338 containerd[1881]: time="2025-09-03T23:27:58.294227352Z" level=info msg="CreateContainer within sandbox \"ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 3 23:27:58.317631 containerd[1881]: time="2025-09-03T23:27:58.317463517Z" level=info msg="Container 567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:27:58.332237 containerd[1881]: time="2025-09-03T23:27:58.332194574Z" level=info msg="CreateContainer within sandbox \"ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058\""
Sep 3 23:27:58.333290 containerd[1881]: time="2025-09-03T23:27:58.333048854Z" level=info msg="StartContainer for \"567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058\""
Sep 3 23:27:58.333778 containerd[1881]: time="2025-09-03T23:27:58.333719664Z" level=info msg="connecting to shim 567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058" address="unix:///run/containerd/s/777f8cb806f6768ddba09309b4cc5979d32ad16c4abdf30c00dd5ff68b615d2d" protocol=ttrpc version=3
Sep 3 23:27:58.359327 systemd[1]: Started cri-containerd-567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058.scope - libcontainer container 567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058.
Sep 3 23:27:58.378242 systemd[1]: cri-containerd-567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058.scope: Deactivated successfully.
Sep 3 23:27:58.379833 containerd[1881]: time="2025-09-03T23:27:58.379810336Z" level=info msg="TaskExit event in podsandbox handler container_id:\"567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058\" id:\"567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058\" pid:5284 exited_at:{seconds:1756942078 nanos:379512311}"
Sep 3 23:27:58.383324 containerd[1881]: time="2025-09-03T23:27:58.383296849Z" level=info msg="received exit event container_id:\"567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058\" id:\"567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058\" pid:5284 exited_at:{seconds:1756942078 nanos:379512311}"
Sep 3 23:27:58.388889 containerd[1881]: time="2025-09-03T23:27:58.388633917Z" level=info msg="StartContainer for \"567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058\" returns successfully"
Sep 3 23:27:58.397471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-567187ac4af39f924db433291ffae320e47796e0365f81dbfb7d4bc0877bb058-rootfs.mount: Deactivated successfully.
Sep 3 23:27:59.295734 containerd[1881]: time="2025-09-03T23:27:59.295694494Z" level=info msg="CreateContainer within sandbox \"ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 3 23:27:59.317417 containerd[1881]: time="2025-09-03T23:27:59.317386329Z" level=info msg="Container 9af68d13cd0d87b5ebc6c9dfd01207d6ca222ce4eae51340e09326c337c287df: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:27:59.333536 containerd[1881]: time="2025-09-03T23:27:59.333507536Z" level=info msg="CreateContainer within sandbox \"ffd818423bc2ef27b34f05e5a99002fc838ff93d4d02f28bf0b0d568ab7bfaf6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9af68d13cd0d87b5ebc6c9dfd01207d6ca222ce4eae51340e09326c337c287df\""
Sep 3 23:27:59.334274 containerd[1881]: time="2025-09-03T23:27:59.334250997Z" level=info msg="StartContainer for \"9af68d13cd0d87b5ebc6c9dfd01207d6ca222ce4eae51340e09326c337c287df\""
Sep 3 23:27:59.335070 containerd[1881]: time="2025-09-03T23:27:59.335026170Z" level=info msg="connecting to shim 9af68d13cd0d87b5ebc6c9dfd01207d6ca222ce4eae51340e09326c337c287df" address="unix:///run/containerd/s/777f8cb806f6768ddba09309b4cc5979d32ad16c4abdf30c00dd5ff68b615d2d" protocol=ttrpc version=3
Sep 3 23:27:59.354316 systemd[1]: Started cri-containerd-9af68d13cd0d87b5ebc6c9dfd01207d6ca222ce4eae51340e09326c337c287df.scope - libcontainer container 9af68d13cd0d87b5ebc6c9dfd01207d6ca222ce4eae51340e09326c337c287df.
Sep 3 23:27:59.381665 containerd[1881]: time="2025-09-03T23:27:59.381643608Z" level=info msg="StartContainer for \"9af68d13cd0d87b5ebc6c9dfd01207d6ca222ce4eae51340e09326c337c287df\" returns successfully"
Sep 3 23:27:59.432586 containerd[1881]: time="2025-09-03T23:27:59.432554005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9af68d13cd0d87b5ebc6c9dfd01207d6ca222ce4eae51340e09326c337c287df\" id:\"956b98812af878f14b77ca07f9e0ecc4927a874a1eaaa139f08947dcc2eb2471\" pid:5349 exited_at:{seconds:1756942079 nanos:432342719}"
Sep 3 23:27:59.759225 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 3 23:27:59.920652 kubelet[3418]: E0903 23:27:59.920334 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-pwprk" podUID="44aca003-0418-417c-9663-85a01116981a"
Sep 3 23:28:00.936130 containerd[1881]: time="2025-09-03T23:28:00.936090403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9af68d13cd0d87b5ebc6c9dfd01207d6ca222ce4eae51340e09326c337c287df\" id:\"6804f7a64a2c59896bb5d3d89a75589ff478b600a08ac637985ef6280ea71c27\" pid:5430 exit_status:1 exited_at:{seconds:1756942080 nanos:935905302}"
Sep 3 23:28:01.920842 kubelet[3418]: E0903 23:28:01.920787 3418 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-pwprk" podUID="44aca003-0418-417c-9663-85a01116981a"
Sep 3 23:28:01.929345 containerd[1881]: time="2025-09-03T23:28:01.929017381Z" level=info msg="StopPodSandbox for \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\""
Sep 3 23:28:01.929345 containerd[1881]: time="2025-09-03T23:28:01.929131280Z" level=info msg="TearDown network for sandbox \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\" successfully"
Sep 3 23:28:01.929345 containerd[1881]: time="2025-09-03T23:28:01.929140856Z" level=info msg="StopPodSandbox for \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\" returns successfully"
Sep 3 23:28:01.929478 containerd[1881]: time="2025-09-03T23:28:01.929458545Z" level=info msg="RemovePodSandbox for \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\""
Sep 3 23:28:01.929501 containerd[1881]: time="2025-09-03T23:28:01.929479593Z" level=info msg="Forcibly stopping sandbox \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\""
Sep 3 23:28:01.929550 containerd[1881]: time="2025-09-03T23:28:01.929543339Z" level=info msg="TearDown network for sandbox \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\" successfully"
Sep 3 23:28:01.930409 containerd[1881]: time="2025-09-03T23:28:01.930387515Z" level=info msg="Ensure that sandbox c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3 in task-service has been cleanup successfully"
Sep 3 23:28:01.942577 containerd[1881]: time="2025-09-03T23:28:01.942550460Z" level=info msg="RemovePodSandbox \"c3d8d3bf68de2d801df01e42916de6900858cba09a79a7518911ca133c80f6d3\" returns successfully"
Sep 3 23:28:01.943068 containerd[1881]: time="2025-09-03T23:28:01.943049938Z" level=info msg="StopPodSandbox for \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\""
Sep 3 23:28:01.943510 containerd[1881]: time="2025-09-03T23:28:01.943311545Z" level=info msg="TearDown network for sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" successfully"
Sep 3 23:28:01.943510 containerd[1881]: time="2025-09-03T23:28:01.943327050Z" level=info msg="StopPodSandbox for \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" returns successfully"
Sep 3 23:28:01.944247 containerd[1881]: time="2025-09-03T23:28:01.943813639Z" level=info msg="RemovePodSandbox for \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\""
Sep 3 23:28:01.944247 containerd[1881]: time="2025-09-03T23:28:01.943835056Z" level=info msg="Forcibly stopping sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\""
Sep 3 23:28:01.944247 containerd[1881]: time="2025-09-03T23:28:01.943884721Z" level=info msg="TearDown network for sandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" successfully"
Sep 3 23:28:01.945463 containerd[1881]: time="2025-09-03T23:28:01.945442821Z" level=info msg="Ensure that sandbox b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b in task-service has been cleanup successfully"
Sep 3 23:28:01.957891 containerd[1881]: time="2025-09-03T23:28:01.957865613Z" level=info msg="RemovePodSandbox \"b67748348db0a0c0968ea6c366540e233234296f938e0a6a590fdd6deedd544b\" returns successfully"
Sep 3 23:28:02.067296 systemd-networkd[1706]: lxc_health: Link UP
Sep 3 23:28:02.082026 systemd-networkd[1706]: lxc_health: Gained carrier
Sep 3 23:28:03.034658 containerd[1881]: time="2025-09-03T23:28:03.034616709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9af68d13cd0d87b5ebc6c9dfd01207d6ca222ce4eae51340e09326c337c287df\" id:\"1f01949e931b3f46686f2270fd6723a1ae4d3665205ac17f14938367b8d8fa6d\" pid:5886 exited_at:{seconds:1756942083 nanos:34294157}"
Sep 3 23:28:03.037348 kubelet[3418]: E0903 23:28:03.037315 3418 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:39594->127.0.0.1:43335: read tcp 127.0.0.1:39594->127.0.0.1:43335: read: connection reset by peer
Sep 3 23:28:03.037735 kubelet[3418]: E0903 23:28:03.037415 3418 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39594->127.0.0.1:43335: write tcp 127.0.0.1:39594->127.0.0.1:43335: write: broken pipe
Sep 3 23:28:03.273386 systemd-networkd[1706]: lxc_health: Gained IPv6LL
Sep 3 23:28:03.325214 kubelet[3418]: I0903 23:28:03.324901 3418 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-26krf" podStartSLOduration=9.324889111 podStartE2EDuration="9.324889111s" podCreationTimestamp="2025-09-03 23:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:28:00.31206109 +0000 UTC m=+178.460555002" watchObservedRunningTime="2025-09-03 23:28:03.324889111 +0000 UTC m=+181.473383031"
Sep 3 23:28:05.113503 containerd[1881]: time="2025-09-03T23:28:05.113439083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9af68d13cd0d87b5ebc6c9dfd01207d6ca222ce4eae51340e09326c337c287df\" id:\"1bda7758f07352a7c6af496e5978ecf4b43afb24ff4a78bde570a8dc2352d587\" pid:5918 exited_at:{seconds:1756942085 nanos:113045776}"
Sep 3 23:28:07.188582 containerd[1881]: time="2025-09-03T23:28:07.188525004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9af68d13cd0d87b5ebc6c9dfd01207d6ca222ce4eae51340e09326c337c287df\" id:\"6d38c1ddd5457e518492583992d1769711ed51e905911fe5e7b8e6d9f1922f88\" pid:5940 exited_at:{seconds:1756942087 nanos:188153225}"
Sep 3 23:28:07.265185 sshd[5226]: Connection closed by 10.200.16.10 port 35322
Sep 3 23:28:07.265046 sshd-session[5180]: pam_unix(sshd:session): session closed for user core
Sep 3 23:28:07.268410 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:35322.service: Deactivated successfully.
Sep 3 23:28:07.269707 systemd[1]: session-25.scope: Deactivated successfully.
Sep 3 23:28:07.270480 systemd-logind[1859]: Session 25 logged out. Waiting for processes to exit.
Sep 3 23:28:07.271886 systemd-logind[1859]: Removed session 25.