Sep 11 00:26:13.937408 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:25:29 -00 2025 Sep 11 00:26:13.937434 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:26:13.937443 kernel: BIOS-provided physical RAM map: Sep 11 00:26:13.937450 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 11 00:26:13.937456 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Sep 11 00:26:13.937462 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Sep 11 00:26:13.937470 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Sep 11 00:26:13.937477 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Sep 11 00:26:13.937483 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Sep 11 00:26:13.937489 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Sep 11 00:26:13.937495 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Sep 11 00:26:13.937501 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Sep 11 00:26:13.937507 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Sep 11 00:26:13.937513 kernel: printk: legacy bootconsole [earlyser0] enabled Sep 11 00:26:13.937522 kernel: NX (Execute Disable) protection: active Sep 11 00:26:13.937529 kernel: APIC: Static calls initialized Sep 11 00:26:13.937535 kernel: efi: EFI v2.7 by Microsoft Sep 11 00:26:13.937542 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eab0018 RNG=0x3ffd2018 Sep 11 00:26:13.937549 kernel: random: crng init done Sep 11 00:26:13.937555 kernel: secureboot: Secure boot disabled Sep 11 00:26:13.937562 kernel: SMBIOS 3.1.0 present. 
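[Editor's note] The BIOS-e820 entries above describe the usable physical RAM ranges of this VM. A minimal sketch (illustrative, not part of the log) that parses such `BIOS-e820:` lines and totals the usable memory; for the ranges shown it comes to roughly 8 GiB, consistent with the later `Memory: 8079080K/8383228K` entry:

```python
import re

# Example "BIOS-e820" usable entries, copied from the log above.
E820_LINES = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
"""

PATTERN = re.compile(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

def usable_bytes(text: str) -> int:
    """Sum the sizes of all ranges marked 'usable' (end addresses are inclusive)."""
    total = 0
    for start, end, kind in PATTERN.findall(text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1
    return total

print(f"usable RAM: {usable_bytes(E820_LINES) / 2**30:.2f} GiB")  # ~8.00 GiB
```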
Sep 11 00:26:13.937569 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Sep 11 00:26:13.937577 kernel: DMI: Memory slots populated: 2/2 Sep 11 00:26:13.937583 kernel: Hypervisor detected: Microsoft Hyper-V Sep 11 00:26:13.937590 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Sep 11 00:26:13.937596 kernel: Hyper-V: Nested features: 0x3e0101 Sep 11 00:26:13.937603 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Sep 11 00:26:13.937609 kernel: Hyper-V: Using hypercall for remote TLB flush Sep 11 00:26:13.937615 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 11 00:26:13.937622 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 11 00:26:13.937629 kernel: tsc: Detected 2300.001 MHz processor Sep 11 00:26:13.937635 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 11 00:26:13.937642 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 11 00:26:13.937651 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Sep 11 00:26:13.937658 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 11 00:26:13.937665 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 11 00:26:13.937672 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Sep 11 00:26:13.937678 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Sep 11 00:26:13.937685 kernel: Using GB pages for direct mapping Sep 11 00:26:13.937692 kernel: ACPI: Early table checksum verification disabled Sep 11 00:26:13.937701 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Sep 11 00:26:13.937710 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 11 00:26:13.937717 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 11 00:26:13.937724 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 11 00:26:13.937731 kernel: ACPI: FACS 0x000000003FFFE000 000040 Sep 11 00:26:13.937738 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 11 00:26:13.937745 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 11 00:26:13.937754 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 11 00:26:13.937761 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Sep 11 00:26:13.937768 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Sep 11 00:26:13.937775 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 11 00:26:13.937782 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Sep 11 00:26:13.937789 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Sep 11 00:26:13.937796 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Sep 11 00:26:13.937803 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Sep 11 00:26:13.937812 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Sep 11 00:26:13.937819 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Sep 11 00:26:13.937826 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5051] Sep 11 00:26:13.937833 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Sep 11 00:26:13.937840 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Sep 11 00:26:13.937846 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Sep 11 00:26:13.937854 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Sep 11 00:26:13.937861 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Sep 11 00:26:13.937868 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Sep 11 00:26:13.937876 kernel: Zone ranges: Sep 11 00:26:13.937884 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 11 00:26:13.937890 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 11 00:26:13.937897 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Sep 11 00:26:13.937904 kernel: Device empty Sep 11 00:26:13.937911 kernel: Movable zone start for each node Sep 11 00:26:13.937918 kernel: Early memory node ranges Sep 11 00:26:13.937925 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 11 00:26:13.937932 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Sep 11 00:26:13.937939 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Sep 11 00:26:13.937948 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Sep 11 00:26:13.937954 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Sep 11 00:26:13.937961 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Sep 11 00:26:13.937968 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 11 00:26:13.937975 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 11 00:26:13.937982 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Sep 11 00:26:13.937989 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Sep 11 00:26:13.937996 kernel: ACPI: PM-Timer IO Port: 0x408 Sep 11 00:26:13.938003 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 11 00:26:13.938012 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 11 00:26:13.938019 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 11 00:26:13.938026 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Sep 11 00:26:13.938033 kernel: TSC deadline timer available Sep 11 00:26:13.938039 kernel: CPU topo: Max. logical packages: 1 Sep 11 00:26:13.938046 kernel: CPU topo: Max. logical dies: 1 Sep 11 00:26:13.938053 kernel: CPU topo: Max. dies per package: 1 Sep 11 00:26:13.938060 kernel: CPU topo: Max. threads per core: 2 Sep 11 00:26:13.938067 kernel: CPU topo: Num. cores per package: 1 Sep 11 00:26:13.938075 kernel: CPU topo: Num. 
threads per package: 2 Sep 11 00:26:13.938082 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 11 00:26:13.938089 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Sep 11 00:26:13.938096 kernel: Booting paravirtualized kernel on Hyper-V Sep 11 00:26:13.938104 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 11 00:26:13.938111 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 11 00:26:13.938118 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 11 00:26:13.938125 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 11 00:26:13.938132 kernel: pcpu-alloc: [0] 0 1 Sep 11 00:26:13.938141 kernel: Hyper-V: PV spinlocks enabled Sep 11 00:26:13.938148 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 11 00:26:13.938156 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:26:13.938164 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 11 00:26:13.938171 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 11 00:26:13.938178 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 11 00:26:13.938185 kernel: Fallback order for Node 0: 0 Sep 11 00:26:13.938518 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Sep 11 00:26:13.938528 kernel: Policy zone: Normal Sep 11 00:26:13.938535 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 11 00:26:13.938542 kernel: software IO TLB: area num 2. Sep 11 00:26:13.938549 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 11 00:26:13.938556 kernel: ftrace: allocating 40103 entries in 157 pages Sep 11 00:26:13.938564 kernel: ftrace: allocated 157 pages with 5 groups Sep 11 00:26:13.938570 kernel: Dynamic Preempt: voluntary Sep 11 00:26:13.938577 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 11 00:26:13.938585 kernel: rcu: RCU event tracing is enabled. Sep 11 00:26:13.938599 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 11 00:26:13.938607 kernel: Trampoline variant of Tasks RCU enabled. Sep 11 00:26:13.938614 kernel: Rude variant of Tasks RCU enabled. Sep 11 00:26:13.938623 kernel: Tracing variant of Tasks RCU enabled. Sep 11 00:26:13.938630 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 11 00:26:13.938637 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 11 00:26:13.938645 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 11 00:26:13.938653 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 11 00:26:13.938660 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
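[Editor's note] The "Kernel command line:" entry above carries `rootflags=rw mount.usrflags=ro` twice, once prepended and once inside the `BOOT_IMAGE=...` arguments. A small sketch (illustrative; the command line below is abridged from the log) that splits such a line into parameters and reports repeated keys:

```python
from collections import Counter

# Abridged copy of the kernel command line printed above.
CMDLINE = (
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 "
    "flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin"
)

def parse(cmdline: str) -> list[tuple[str, str | None]]:
    """Split a kernel command line into (key, value) pairs; bare flags get None."""
    pairs = []
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        pairs.append((key, value if sep else None))
    return pairs

counts = Counter(key for key, _ in parse(CMDLINE))
print({k: n for k, n in counts.items() if n > 1})  # rootflags, mount.usrflags, console
```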
Sep 11 00:26:13.938668 kernel: Using NULL legacy PIC Sep 11 00:26:13.938676 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 11 00:26:13.938683 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 11 00:26:13.938691 kernel: Console: colour dummy device 80x25 Sep 11 00:26:13.938698 kernel: printk: legacy console [tty1] enabled Sep 11 00:26:13.938705 kernel: printk: legacy console [ttyS0] enabled Sep 11 00:26:13.938713 kernel: printk: legacy bootconsole [earlyser0] disabled Sep 11 00:26:13.938721 kernel: ACPI: Core revision 20240827 Sep 11 00:26:13.938729 kernel: Failed to register legacy timer interrupt Sep 11 00:26:13.938736 kernel: APIC: Switch to symmetric I/O mode setup Sep 11 00:26:13.938743 kernel: x2apic enabled Sep 11 00:26:13.938751 kernel: APIC: Switched APIC routing to: physical x2apic Sep 11 00:26:13.938758 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0 Sep 11 00:26:13.938765 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 11 00:26:13.938772 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Sep 11 00:26:13.938780 kernel: Hyper-V: Using IPI hypercalls Sep 11 00:26:13.938788 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 11 00:26:13.938796 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 11 00:26:13.938804 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 11 00:26:13.938811 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 11 00:26:13.938819 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 11 00:26:13.938826 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 11 00:26:13.938834 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735f0517, max_idle_ns: 440795237604 ns Sep 11 00:26:13.938841 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300001) Sep 11 00:26:13.938849 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 11 00:26:13.938858 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 11 00:26:13.938866 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 11 00:26:13.938873 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 11 00:26:13.938880 kernel: Spectre V2 : Mitigation: Retpolines Sep 11 00:26:13.938887 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 11 00:26:13.938895 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
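[Editor's note] The calibration entry above reports 4600.00 BogoMIPS with lpj=2300001 on a 2300.001 MHz TSC. A worked check of that arithmetic (assuming CONFIG_HZ=1000, which is the value that makes the printed numbers agree; this is an illustration, not taken from the log):

```python
# The kernel prints BogoMIPS as loops_per_jiffy scaled by HZ:
#   bogomips = lpj / (500000 / HZ)
# Assumption: CONFIG_HZ = 1000 (consistent with lpj = 2300001 -> 4600.00).
LPJ = 2300001
HZ = 1000

bogomips = LPJ / (500000 / HZ)
print(f"{bogomips:.2f} BogoMIPS per CPU")    # 4600.00
print(f"{2 * bogomips:.2f} BogoMIPS total")  # 9200.00, matches "Total of 2 processors activated"
```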
Sep 11 00:26:13.938902 kernel: RETBleed: Vulnerable Sep 11 00:26:13.938909 kernel: Speculative Store Bypass: Vulnerable Sep 11 00:26:13.938916 kernel: active return thunk: its_return_thunk Sep 11 00:26:13.938923 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 11 00:26:13.938930 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 11 00:26:13.938939 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 11 00:26:13.938946 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 11 00:26:13.938953 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 11 00:26:13.938960 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 11 00:26:13.938967 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 11 00:26:13.938974 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Sep 11 00:26:13.938982 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Sep 11 00:26:13.938989 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Sep 11 00:26:13.938996 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 11 00:26:13.939003 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 11 00:26:13.939010 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 11 00:26:13.939019 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 11 00:26:13.939025 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Sep 11 00:26:13.939033 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Sep 11 00:26:13.939040 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Sep 11 00:26:13.939047 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Sep 11 00:26:13.939055 kernel: Freeing SMP alternatives memory: 32K Sep 11 00:26:13.939062 kernel: pid_max: default: 32768 minimum: 301 Sep 11 00:26:13.939070 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 11 00:26:13.939077 kernel: landlock: Up and running. Sep 11 00:26:13.939084 kernel: SELinux: Initializing. Sep 11 00:26:13.939091 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 11 00:26:13.939100 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 11 00:26:13.939107 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Sep 11 00:26:13.939114 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Sep 11 00:26:13.939121 kernel: signal: max sigframe size: 11952 Sep 11 00:26:13.939129 kernel: rcu: Hierarchical SRCU implementation. Sep 11 00:26:13.939136 kernel: rcu: Max phase no-delay instances is 400. Sep 11 00:26:13.939144 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 11 00:26:13.939151 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 11 00:26:13.939159 kernel: smp: Bringing up secondary CPUs ... Sep 11 00:26:13.939166 kernel: smpboot: x86: Booting SMP configuration: Sep 11 00:26:13.939175 kernel: .... 
node #0, CPUs: #1 Sep 11 00:26:13.939182 kernel: smp: Brought up 1 node, 2 CPUs Sep 11 00:26:13.939727 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Sep 11 00:26:13.939737 kernel: Memory: 8079080K/8383228K available (14336K kernel code, 2429K rwdata, 9960K rodata, 53832K init, 1088K bss, 297940K reserved, 0K cma-reserved) Sep 11 00:26:13.939745 kernel: devtmpfs: initialized Sep 11 00:26:13.939753 kernel: x86/mm: Memory block size: 128MB Sep 11 00:26:13.939761 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 11 00:26:13.939768 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 11 00:26:13.939776 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 11 00:26:13.939785 kernel: pinctrl core: initialized pinctrl subsystem Sep 11 00:26:13.939793 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 11 00:26:13.939800 kernel: audit: initializing netlink subsys (disabled) Sep 11 00:26:13.939807 kernel: audit: type=2000 audit(1757550371.058:1): state=initialized audit_enabled=0 res=1 Sep 11 00:26:13.939814 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 11 00:26:13.939822 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 11 00:26:13.939829 kernel: cpuidle: using governor menu Sep 11 00:26:13.939837 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 11 00:26:13.939844 kernel: dca service started, version 1.12.1 Sep 11 00:26:13.939853 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Sep 11 00:26:13.939860 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Sep 11 00:26:13.939867 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 11 00:26:13.939875 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 11 00:26:13.939882 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 11 00:26:13.939889 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 11 00:26:13.939897 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 11 00:26:13.939904 kernel: ACPI: Added _OSI(Module Device) Sep 11 00:26:13.939913 kernel: ACPI: Added _OSI(Processor Device) Sep 11 00:26:13.939936 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 11 00:26:13.939944 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 11 00:26:13.939951 kernel: ACPI: Interpreter enabled Sep 11 00:26:13.939958 kernel: ACPI: PM: (supports S0 S5) Sep 11 00:26:13.939966 kernel: ACPI: Using IOAPIC for interrupt routing Sep 11 00:26:13.939973 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 11 00:26:13.939981 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 11 00:26:13.939989 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Sep 11 00:26:13.939996 kernel: iommu: Default domain type: Translated Sep 11 00:26:13.940006 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 11 00:26:13.940013 kernel: efivars: Registered efivars operations Sep 11 00:26:13.940020 kernel: PCI: Using ACPI for IRQ routing Sep 11 00:26:13.940027 kernel: PCI: System does not support PCI Sep 11 00:26:13.940034 kernel: vgaarb: loaded Sep 11 00:26:13.940042 kernel: clocksource: Switched to clocksource tsc-early Sep 11 00:26:13.940050 kernel: VFS: Disk quotas dquot_6.6.0 Sep 11 00:26:13.940057 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 11 00:26:13.940065 kernel: pnp: PnP ACPI init Sep 11 00:26:13.940074 kernel: pnp: PnP ACPI: found 3 devices Sep 11 00:26:13.940082 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 11 00:26:13.940089 kernel: NET: Registered PF_INET protocol family Sep 11 00:26:13.940097 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 11 00:26:13.940105 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 11 00:26:13.940112 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 11 00:26:13.940120 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 11 00:26:13.940128 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 11 00:26:13.940137 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 11 00:26:13.940145 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 11 00:26:13.940153 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 11 00:26:13.940161 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 11 00:26:13.940169 kernel: NET: Registered PF_XDP protocol family Sep 11 00:26:13.940176 kernel: PCI: CLS 0 bytes, default 64 Sep 11 00:26:13.940184 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 11 00:26:13.940205 kernel: software IO TLB: mapped [mem 0x000000003a9c6000-0x000000003e9c6000] (64MB) Sep 11 00:26:13.940213 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Sep 11 00:26:13.940223 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Sep 11 00:26:13.940231 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735f0517, 
max_idle_ns: 440795237604 ns Sep 11 00:26:13.940239 kernel: clocksource: Switched to clocksource tsc Sep 11 00:26:13.940246 kernel: Initialise system trusted keyrings Sep 11 00:26:13.940254 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 11 00:26:13.940261 kernel: Key type asymmetric registered Sep 11 00:26:13.940269 kernel: Asymmetric key parser 'x509' registered Sep 11 00:26:13.940277 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 11 00:26:13.940284 kernel: io scheduler mq-deadline registered Sep 11 00:26:13.940293 kernel: io scheduler kyber registered Sep 11 00:26:13.940301 kernel: io scheduler bfq registered Sep 11 00:26:13.940309 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 11 00:26:13.940316 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 11 00:26:13.940324 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 11 00:26:13.940331 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 11 00:26:13.940339 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Sep 11 00:26:13.940347 kernel: i8042: PNP: No PS/2 controller found. Sep 11 00:26:13.940461 kernel: rtc_cmos 00:02: registered as rtc0 Sep 11 00:26:13.940530 kernel: rtc_cmos 00:02: setting system clock to 2025-09-11T00:26:13 UTC (1757550373) Sep 11 00:26:13.940591 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Sep 11 00:26:13.940601 kernel: intel_pstate: Intel P-state driver initializing Sep 11 00:26:13.940608 kernel: efifb: probing for efifb Sep 11 00:26:13.940616 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 11 00:26:13.940624 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 11 00:26:13.940631 kernel: efifb: scrolling: redraw Sep 11 00:26:13.940639 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 11 00:26:13.940648 kernel: Console: switching to colour frame buffer device 128x48 Sep 11 00:26:13.940656 kernel: fb0: EFI VGA frame buffer device Sep 11 00:26:13.940663 kernel: pstore: Using crash dump compression: deflate Sep 11 00:26:13.940671 kernel: pstore: Registered efi_pstore as persistent store backend Sep 11 00:26:13.940679 kernel: NET: Registered PF_INET6 protocol family Sep 11 00:26:13.940686 kernel: Segment Routing with IPv6 Sep 11 00:26:13.940694 kernel: In-situ OAM (IOAM) with IPv6 Sep 11 00:26:13.940701 kernel: NET: Registered PF_PACKET protocol family Sep 11 00:26:13.940709 kernel: Key type dns_resolver registered Sep 11 00:26:13.940718 kernel: IPI shorthand broadcast: enabled Sep 11 00:26:13.940725 kernel: sched_clock: Marking stable (2730003225, 89023455)->(3114437205, -295410525) Sep 11 00:26:13.940733 kernel: registered taskstats version 1 Sep 11 00:26:13.940740 kernel: Loading compiled-in X.509 certificates Sep 11 00:26:13.940748 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 8138ce5002a1b572fd22b23ac238f29bab3f249f' Sep 11 00:26:13.940755 kernel: Demotion targets for Node 0: null Sep 11 00:26:13.940763 kernel: Key type .fscrypt registered Sep 11 00:26:13.940770 kernel: Key type fscrypt-provisioning registered Sep 11 00:26:13.940778 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 11 00:26:13.940787 kernel: ima: Allocated hash algorithm: sha1 Sep 11 00:26:13.940794 kernel: ima: No architecture policies found Sep 11 00:26:13.940802 kernel: clk: Disabling unused clocks Sep 11 00:26:13.940809 kernel: Warning: unable to open an initial console. Sep 11 00:26:13.940817 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 11 00:26:13.940824 kernel: Write protecting the kernel read-only data: 24576k Sep 11 00:26:13.940832 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 11 00:26:13.940839 kernel: Run /init as init process Sep 11 00:26:13.940847 kernel: with arguments: Sep 11 00:26:13.940855 kernel: /init Sep 11 00:26:13.940863 kernel: with environment: Sep 11 00:26:13.940870 kernel: HOME=/ Sep 11 00:26:13.940877 kernel: TERM=linux Sep 11 00:26:13.940885 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 11 00:26:13.940893 systemd[1]: Successfully made /usr/ read-only. Sep 11 00:26:13.940904 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:26:13.940915 systemd[1]: Detected virtualization microsoft. Sep 11 00:26:13.940922 systemd[1]: Detected architecture x86-64. Sep 11 00:26:13.940930 systemd[1]: Running in initrd. Sep 11 00:26:13.940938 systemd[1]: No hostname configured, using default hostname. Sep 11 00:26:13.940946 systemd[1]: Hostname set to . Sep 11 00:26:13.940954 systemd[1]: Initializing machine ID from random generator. Sep 11 00:26:13.940962 systemd[1]: Queued start job for default target initrd.target. Sep 11 00:26:13.940970 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:26:13.940978 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:26:13.940989 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 11 00:26:13.940997 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:26:13.941005 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 11 00:26:13.941014 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 11 00:26:13.941022 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 11 00:26:13.941030 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 11 00:26:13.941040 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:26:13.941048 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:26:13.941056 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:26:13.941064 systemd[1]: Reached target slices.target - Slice Units. Sep 11 00:26:13.941073 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:26:13.941081 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:26:13.941089 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:26:13.941097 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
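[Editor's note] The device units above (e.g. `dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device`) use systemd's path escaping, where `/` becomes `-` and unsafe characters such as `-` itself are hex-escaped. A simplified sketch of that mangling (real `systemd-escape` handles more edge cases; this is only an illustration):

```python
import string

SAFE = set(string.ascii_letters + string.digits + ":_.")

def escape_path(path: str, suffix: str = ".device") -> str:
    """Roughly mimic systemd path escaping: strip leading '/', turn '/' into '-',
    and hex-escape other unsafe characters (notably '-') as \\xNN."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch in SAFE:
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")
    return "".join(out) + suffix

print(escape_path("/dev/disk/by-label/EFI-SYSTEM"))
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```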
Sep 11 00:26:13.941105 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 11 00:26:13.941114 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 11 00:26:13.941123 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:26:13.941131 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:26:13.941139 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:26:13.941147 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:26:13.941156 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 11 00:26:13.941164 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:26:13.941172 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 11 00:26:13.941182 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 11 00:26:13.942050 systemd[1]: Starting systemd-fsck-usr.service... Sep 11 00:26:13.942064 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 00:26:13.942082 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:26:13.942109 systemd-journald[205]: Collecting audit messages is disabled. Sep 11 00:26:13.942131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:26:13.942140 systemd-journald[205]: Journal started Sep 11 00:26:13.942161 systemd-journald[205]: Runtime Journal (/run/log/journal/2f190048442f448ba8c1a722c96116d3) is 8M, max 158.9M, 150.9M free. Sep 11 00:26:13.944201 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 11 00:26:13.948201 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 00:26:13.949879 systemd-modules-load[207]: Inserted module 'overlay' Sep 11 00:26:13.952576 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:26:13.956092 systemd[1]: Finished systemd-fsck-usr.service. Sep 11 00:26:13.962291 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 11 00:26:13.964301 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:26:13.982803 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:26:13.985473 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 11 00:26:13.986577 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 11 00:26:13.993401 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 11 00:26:14.004273 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 11 00:26:14.004293 kernel: Bridge firewalling registered Sep 11 00:26:13.997314 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:26:14.002989 systemd-modules-load[207]: Inserted module 'br_netfilter' Sep 11 00:26:14.004867 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:26:14.007559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 11 00:26:14.009278 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:26:14.026483 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:26:14.031325 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:26:14.034845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:26:14.040423 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:26:14.041489 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 11 00:26:14.065422 systemd-resolved[236]: Positive Trust Anchors: Sep 11 00:26:14.065435 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:26:14.065461 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:26:14.087427 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:26:14.071200 systemd-resolved[236]: Defaulting to hostname 'linux'. Sep 11 00:26:14.073877 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:26:14.092279 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:26:14.140211 kernel: SCSI subsystem initialized Sep 11 00:26:14.147203 kernel: Loading iSCSI transport class v2.0-870. Sep 11 00:26:14.155202 kernel: iscsi: registered transport (tcp) Sep 11 00:26:14.170553 kernel: iscsi: registered transport (qla4xxx) Sep 11 00:26:14.170588 kernel: QLogic iSCSI HBA Driver Sep 11 00:26:14.181374 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 11 00:26:14.189743 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:26:14.191669 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:26:14.220154 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 11 00:26:14.223562 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Sep 11 00:26:14.262213 kernel: raid6: avx512x4 gen() 47871 MB/s Sep 11 00:26:14.279198 kernel: raid6: avx512x2 gen() 46475 MB/s Sep 11 00:26:14.296197 kernel: raid6: avx512x1 gen() 30341 MB/s Sep 11 00:26:14.314203 kernel: raid6: avx2x4 gen() 42005 MB/s Sep 11 00:26:14.331197 kernel: raid6: avx2x2 gen() 43993 MB/s Sep 11 00:26:14.348707 kernel: raid6: avx2x1 gen() 30670 MB/s Sep 11 00:26:14.348731 kernel: raid6: using algorithm avx512x4 gen() 47871 MB/s Sep 11 00:26:14.367492 kernel: raid6: .... xor() 7600 MB/s, rmw enabled Sep 11 00:26:14.367509 kernel: raid6: using avx512x2 recovery algorithm Sep 11 00:26:14.383204 kernel: xor: automatically using best checksumming function avx Sep 11 00:26:14.487207 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 11 00:26:14.491220 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:26:14.494306 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:26:14.509091 systemd-udevd[454]: Using default interface naming scheme 'v255'. Sep 11 00:26:14.512616 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:26:14.518700 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 11 00:26:14.534332 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Sep 11 00:26:14.549832 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:26:14.553289 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:26:14.585933 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:26:14.592460 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 11 00:26:14.627223 kernel: cryptd: max_cpu_qlen set to 1000 Sep 11 00:26:14.635211 kernel: AES CTR mode by8 optimization enabled Sep 11 00:26:14.658292 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:26:14.666284 kernel: hv_vmbus: Vmbus version:5.3 Sep 11 00:26:14.658342 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:26:14.664219 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:26:14.677326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:26:14.686134 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 11 00:26:14.686152 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 11 00:26:14.696241 kernel: hv_vmbus: registering driver hv_netvsc Sep 11 00:26:14.701212 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 11 00:26:14.706365 kernel: hv_vmbus: registering driver hv_pci Sep 11 00:26:14.706544 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 11 00:26:14.715551 kernel: PTP clock support registered Sep 11 00:26:14.715580 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 11 00:26:14.722026 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d76d977 (unnamed net_device) (uninitialized): VF slot 1 added Sep 11 00:26:14.722244 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Sep 11 00:26:14.731227 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Sep 11 00:26:14.740385 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 11 00:26:14.740418 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Sep 11 00:26:14.743326 kernel: hv_utils: Registering HyperV Utility Driver Sep 11 00:26:14.743359 kernel: hv_vmbus: registering driver hv_utils Sep 11 00:26:14.749314 kernel: hv_vmbus: registering driver hv_storvsc Sep 11 00:26:14.749402 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Sep 11 00:26:14.751290 kernel: hv_utils: Shutdown IC version 3.2 Sep 11 00:26:14.753786 kernel: hv_utils: Heartbeat IC version 3.0 Sep 11 00:26:14.753815 kernel: hv_utils: TimeSync IC version 4.0 Sep 11 00:26:14.753825 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Sep 11 00:26:14.930388 kernel: hv_vmbus: registering driver hid_hyperv Sep 11 00:26:14.930416 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Sep 11 00:26:14.930432 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 11 00:26:14.930439 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 11 00:26:14.930341 systemd-resolved[236]: Clock change detected. Flushing caches. Sep 11 00:26:14.935280 kernel: scsi host0: storvsc_host_t Sep 11 00:26:14.935427 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 11 00:26:14.938988 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Sep 11 00:26:14.939190 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Sep 11 00:26:14.947828 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 11 00:26:14.947989 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 11 00:26:14.949640 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 11 00:26:14.954630 kernel: nvme nvme0: pci function c05b:00:00.0 Sep 11 00:26:14.954766 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Sep 11 00:26:14.967653 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#149 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 11 00:26:14.980633 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#183 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 11 00:26:15.108631 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 11 00:26:15.113633 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 11 00:26:15.341631 kernel: nvme nvme0: using unchecked data buffer Sep 11 00:26:15.534811 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Sep 11 00:26:15.560378 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 11 00:26:15.607298 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. 
Sep 11 00:26:15.616295 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Sep 11 00:26:15.616544 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Sep 11 00:26:15.623710 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 11 00:26:15.626714 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:26:15.626886 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:26:15.626910 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:26:15.628711 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 11 00:26:15.636594 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 11 00:26:15.652794 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:26:15.653868 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 11 00:26:15.909668 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Sep 11 00:26:15.913400 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Sep 11 00:26:15.913544 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Sep 11 00:26:15.914934 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Sep 11 00:26:15.919736 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Sep 11 00:26:15.922647 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Sep 11 00:26:15.926662 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Sep 11 00:26:15.926690 kernel: pci 7870:00:00.0: enabling Extended Tags Sep 11 00:26:15.941353 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Sep 11 00:26:15.941514 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Sep 11 00:26:15.945934 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Sep 11 00:26:15.949212 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Sep 11 00:26:15.957624 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Sep 11 00:26:15.959629 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d76d977 eth0: VF registering: eth1 Sep 11 00:26:15.959752 kernel: mana 7870:00:00.0 eth1: joined to eth0 Sep 11 00:26:15.964631 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Sep 11 00:26:16.666444 disk-uuid[674]: The operation has completed successfully. Sep 11 00:26:16.668101 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 11 00:26:16.716441 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 11 00:26:16.716526 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 11 00:26:16.741330 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 11 00:26:16.754411 sh[712]: Success Sep 11 00:26:16.783484 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 11 00:26:16.783523 kernel: device-mapper: uevent: version 1.0.3 Sep 11 00:26:16.783536 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 11 00:26:16.790632 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 11 00:26:16.983045 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 11 00:26:16.986695 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 11 00:26:17.003417 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 11 00:26:17.015325 kernel: BTRFS: device fsid f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (725) Sep 11 00:26:17.015359 kernel: BTRFS info (device dm-0): first mount of filesystem f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 Sep 11 00:26:17.016636 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:26:17.287949 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 11 00:26:17.288026 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 11 00:26:17.288871 kernel: BTRFS info (device dm-0): enabling free space tree Sep 11 00:26:17.318569 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 11 00:26:17.321946 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:26:17.324767 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 11 00:26:17.327444 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 11 00:26:17.339792 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 11 00:26:17.357643 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (748) Sep 11 00:26:17.360452 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:26:17.360483 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:26:17.379218 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 11 00:26:17.379266 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 11 00:26:17.379279 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 11 00:26:17.383758 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:26:17.384470 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 11 00:26:17.387366 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 11 00:26:17.419520 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:26:17.422719 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:26:17.449409 systemd-networkd[894]: lo: Link UP Sep 11 00:26:17.453674 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 11 00:26:17.449416 systemd-networkd[894]: lo: Gained carrier Sep 11 00:26:17.450888 systemd-networkd[894]: Enumeration completed Sep 11 00:26:17.451198 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:26:17.451202 systemd-networkd[894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 11 00:26:17.464690 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 11 00:26:17.464839 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d76d977 eth0: Data path switched to VF: enP30832s1 Sep 11 00:26:17.451400 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:26:17.461022 systemd-networkd[894]: enP30832s1: Link UP Sep 11 00:26:17.461085 systemd-networkd[894]: eth0: Link UP Sep 11 00:26:17.461212 systemd-networkd[894]: eth0: Gained carrier Sep 11 00:26:17.461221 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:26:17.463305 systemd[1]: Reached target network.target - Network. Sep 11 00:26:17.463887 systemd-networkd[894]: enP30832s1: Gained carrier Sep 11 00:26:17.473644 systemd-networkd[894]: eth0: DHCPv4 address 10.200.8.50/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 11 00:26:18.138306 ignition[829]: Ignition 2.21.0 Sep 11 00:26:18.138315 ignition[829]: Stage: fetch-offline Sep 11 00:26:18.140122 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:26:18.138383 ignition[829]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:18.144809 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 11 00:26:18.138389 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:18.138463 ignition[829]: parsed url from cmdline: "" Sep 11 00:26:18.138465 ignition[829]: no config URL provided Sep 11 00:26:18.138469 ignition[829]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 00:26:18.138474 ignition[829]: no config at "/usr/lib/ignition/user.ign" Sep 11 00:26:18.138478 ignition[829]: failed to fetch config: resource requires networking Sep 11 00:26:18.138606 ignition[829]: Ignition finished successfully Sep 11 00:26:18.170546 ignition[904]: Ignition 2.21.0 Sep 11 00:26:18.170556 ignition[904]: Stage: fetch Sep 11 00:26:18.170932 ignition[904]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:18.170941 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:18.171036 ignition[904]: parsed url from cmdline: "" Sep 11 00:26:18.171038 ignition[904]: no config URL provided Sep 11 00:26:18.171046 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 00:26:18.171051 ignition[904]: no config at "/usr/lib/ignition/user.ign" Sep 11 00:26:18.171073 ignition[904]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 11 00:26:18.233230 ignition[904]: GET result: OK Sep 11 00:26:18.233304 ignition[904]: config has been read from IMDS userdata Sep 11 00:26:18.233324 ignition[904]: parsing config with SHA512: bd278a36dbb7c152ac4cb28bc73427ecfb130b0a8c68e9e47ab6dd0dc54c6e7bc539e338f93191f6dbb3d05cbf04720479a8b95afb300a9eb99221a1d6a99615 Sep 11 00:26:18.236821 unknown[904]: fetched base config from "system" Sep 11 00:26:18.237148 ignition[904]: fetch: fetch complete Sep 11 00:26:18.236827 unknown[904]: fetched base config from "system" Sep 11 00:26:18.237152 ignition[904]: fetch: fetch passed Sep 11 00:26:18.236830 unknown[904]: fetched user config from "azure" Sep 11 00:26:18.237183 ignition[904]: Ignition finished successfully Sep 11 00:26:18.238946 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 11 00:26:18.241731 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
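[Editor's note] The Ignition fetch stage above reads its config from the Azure Instance Metadata Service userData endpoint shown in the log. A hedged sketch of that request, only meaningful from inside an Azure VM (the `Metadata: true` header is required by IMDS; the base64 decode reflects how Azure delivers userData and is an assumption here, not something visible in the log):

```python
import base64
import urllib.request

# Endpoint exactly as it appears in the Ignition log entry above.
URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

def fetch_userdata() -> bytes:
    # IMDS only answers requests that carry the Metadata: true header.
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        raw = resp.read()
    # Assumption: userData arrives base64-encoded and is decoded before
    # Ignition reports "config has been read from IMDS userdata".
    return base64.b64decode(raw)

if __name__ == "__main__":
    print(fetch_userdata()[:200])
```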
Sep 11 00:26:18.265166 ignition[911]: Ignition 2.21.0 Sep 11 00:26:18.265174 ignition[911]: Stage: kargs Sep 11 00:26:18.267242 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 11 00:26:18.265336 ignition[911]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:18.273703 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 11 00:26:18.265342 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:18.266133 ignition[911]: kargs: kargs passed Sep 11 00:26:18.266162 ignition[911]: Ignition finished successfully Sep 11 00:26:18.292212 ignition[918]: Ignition 2.21.0 Sep 11 00:26:18.292221 ignition[918]: Stage: disks Sep 11 00:26:18.292353 ignition[918]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:18.294636 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 11 00:26:18.292359 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:18.293071 ignition[918]: disks: disks passed Sep 11 00:26:18.293100 ignition[918]: Ignition finished successfully Sep 11 00:26:18.302791 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 11 00:26:18.303058 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 11 00:26:18.309653 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:26:18.309789 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:26:18.316115 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:26:18.317532 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 11 00:26:18.379740 systemd-fsck[926]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Sep 11 00:26:18.383231 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 11 00:26:18.388248 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 11 00:26:18.633455 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 11 00:26:18.637051 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6a9ce0af-81d0-4628-9791-e47488ed2744 r/w with ordered data mode. Quota mode: none. Sep 11 00:26:18.634898 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 11 00:26:18.649191 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:26:18.653794 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 11 00:26:18.659327 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 11 00:26:18.664701 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 11 00:26:18.670290 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (935) Sep 11 00:26:18.670309 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:26:18.670470 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:26:18.664793 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:26:18.677522 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Sep 11 00:26:18.679713 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 11 00:26:18.679736 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 11 00:26:18.679750 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 11 00:26:18.681474 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:26:18.684710 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 11 00:26:19.066721 systemd-networkd[894]: eth0: Gained IPv6LL Sep 11 00:26:19.145900 coreos-metadata[937]: Sep 11 00:26:19.145 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 11 00:26:19.152407 coreos-metadata[937]: Sep 11 00:26:19.152 INFO Fetch successful Sep 11 00:26:19.153696 coreos-metadata[937]: Sep 11 00:26:19.152 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 11 00:26:19.159660 coreos-metadata[937]: Sep 11 00:26:19.159 INFO Fetch successful Sep 11 00:26:19.174453 coreos-metadata[937]: Sep 11 00:26:19.174 INFO wrote hostname ci-4372.1.0-n-1c5282f4e4 to /sysroot/etc/hostname Sep 11 00:26:19.175960 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 11 00:26:19.251079 initrd-setup-root[967]: cut: /sysroot/etc/passwd: No such file or directory Sep 11 00:26:19.295506 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory Sep 11 00:26:19.299788 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory Sep 11 00:26:19.303572 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory Sep 11 00:26:20.062494 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 11 00:26:20.066066 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 11 00:26:20.072727 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 11 00:26:20.080546 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 11 00:26:20.083678 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:26:20.102534 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 11 00:26:20.106878 ignition[1056]: INFO : Ignition 2.21.0 Sep 11 00:26:20.106878 ignition[1056]: INFO : Stage: mount Sep 11 00:26:20.109924 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:20.109924 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:20.119696 ignition[1056]: INFO : mount: mount passed Sep 11 00:26:20.119696 ignition[1056]: INFO : Ignition finished successfully Sep 11 00:26:20.112485 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 11 00:26:20.117304 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 11 00:26:20.136939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
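[Editor's note] flatcar-metadata-hostname.service above fetches the instance name from IMDS and writes it to /sysroot/etc/hostname. A sketch of those two steps (illustrative; paths and error handling are simplified):

```python
import urllib.request

# Endpoint as logged by coreos-metadata above.
NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")

def write_hostname(dest: str = "/sysroot/etc/hostname") -> str:
    req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        name = resp.read().decode().strip()  # e.g. ci-4372.1.0-n-1c5282f4e4
    with open(dest, "w") as f:
        f.write(name + "\n")
    return name
```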
Sep 11 00:26:20.147636 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (1067) Sep 11 00:26:20.149644 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:26:20.149778 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:26:20.153949 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 11 00:26:20.153979 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 11 00:26:20.153989 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 11 00:26:20.155986 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:26:20.179832 ignition[1084]: INFO : Ignition 2.21.0 Sep 11 00:26:20.179832 ignition[1084]: INFO : Stage: files Sep 11 00:26:20.183666 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:20.183666 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:20.183666 ignition[1084]: DEBUG : files: compiled without relabeling support, skipping Sep 11 00:26:20.226248 ignition[1084]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 11 00:26:20.226248 ignition[1084]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 11 00:26:20.327628 ignition[1084]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 11 00:26:20.331673 ignition[1084]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 11 00:26:20.331673 ignition[1084]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 11 00:26:20.331673 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 11 00:26:20.331673 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 11 00:26:20.327904 unknown[1084]: wrote ssh authorized keys file for user: core Sep 11 00:26:20.398112 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 11 00:26:20.440341 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 11 00:26:20.443683 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 11 00:26:20.443683 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 11 00:26:20.668607 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 11 00:26:20.859207 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 11 00:26:20.862707 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 11 00:26:20.862707 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 11 00:26:20.862707 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 11 00:26:20.862707 ignition[1084]: INFO : files: createFilesystemsFiles: 
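[Editor's note] The files stage above downloads the Helm and cilium-cli release tarballs over HTTPS and stages them under /sysroot before the root switch. The sketch below shows the same download-to-a-staged-root pattern in Python; URLs and destination paths are copied from the log, while checksum verification and the retry loop behind the "attempt #1" counter are left out.

    #!/usr/bin/env python3
    """Sketch: stage release artifacts under /sysroot like the files stage above."""
    import os
    import urllib.request

    # URL -> destination, taken from the log lines above.
    ARTIFACTS = {
        "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz":
            "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz",
        "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz":
            "/sysroot/opt/bin/cilium.tar.gz",
    }


    def stage_artifacts(artifacts: dict[str, str]) -> None:
        for url, dest in artifacts.items():
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            print(f"GET {url}: attempt #1")
            urllib.request.urlretrieve(url, dest)  # no retries or checksums in this sketch
            print(f"finished writing file {dest}")


    if __name__ == "__main__":
        stage_artifacts(ARTIFACTS)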
createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 11 00:26:20.862707 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 00:26:20.862707 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 00:26:20.862707 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 00:26:20.862707 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 00:26:20.929752 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 00:26:20.932131 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 00:26:20.932131 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 11 00:26:20.988351 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 11 00:26:20.988351 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 11 00:26:20.994489 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 11 00:26:21.490798 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 11 00:26:22.076611 ignition[1084]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 11 00:26:22.076611 ignition[1084]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 11 00:26:22.107258 ignition[1084]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 00:26:22.116120 ignition[1084]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 00:26:22.116120 ignition[1084]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 11 00:26:22.116120 ignition[1084]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 11 00:26:22.116120 ignition[1084]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 11 00:26:22.116120 ignition[1084]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 11 00:26:22.116120 ignition[1084]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 11 00:26:22.116120 ignition[1084]: INFO : files: files passed Sep 11 00:26:22.116120 ignition[1084]: INFO : Ignition finished successfully Sep 11 00:26:22.118182 systemd[1]: Finished ignition-files.service - Ignition (files). 
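[Editor's note] Writing the link /sysroot/etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw is what later lets systemd-sysext pick the Kubernetes image up as an extension (see the "Merged extensions" line further down). A small Python sketch of that link step, using the paths from the log; the real work is a plain symlink, created here against /sysroot during the initrd.

    #!/usr/bin/env python3
    """Sketch: register a sysext image by symlinking it into /etc/extensions.

    Paths are the ones written by the files stage above; the link target is
    resolved in the final root, not in the initrd.
    """
    import os

    SYSROOT = "/sysroot"
    LINK = os.path.join(SYSROOT, "etc/extensions/kubernetes.raw")
    TARGET = "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"


    def register_extension(link: str, target: str) -> None:
        os.makedirs(os.path.dirname(link), exist_ok=True)
        if os.path.lexists(link):
            os.unlink(link)           # replace a stale link if present
        os.symlink(target, link)      # create link -> target
        print(f'writing link "{link}" -> "{target}"')


    if __name__ == "__main__":
        register_extension(LINK, TARGET)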
Sep 11 00:26:22.123177 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 11 00:26:22.141186 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 11 00:26:22.147479 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 11 00:26:22.147676 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 11 00:26:22.153938 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:26:22.156067 initrd-setup-root-after-ignition[1114]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:26:22.158923 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:26:22.160155 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 00:26:22.165863 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 11 00:26:22.168964 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 11 00:26:22.193943 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 11 00:26:22.194018 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 11 00:26:22.195568 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 11 00:26:22.195779 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 11 00:26:22.195839 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 11 00:26:22.197487 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 11 00:26:22.214756 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 00:26:22.216719 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 11 00:26:22.237324 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:26:22.237466 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:26:22.237691 systemd[1]: Stopped target timers.target - Timer Units. Sep 11 00:26:22.237935 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 11 00:26:22.238039 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 00:26:22.238480 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 11 00:26:22.246773 systemd[1]: Stopped target basic.target - Basic System. Sep 11 00:26:22.249409 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 11 00:26:22.252750 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:26:22.256738 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 11 00:26:22.261503 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:26:22.264326 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 11 00:26:22.270223 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:26:22.273233 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 11 00:26:22.277060 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 11 00:26:22.281438 systemd[1]: Stopped target swap.target - Swaps. 
Sep 11 00:26:22.286871 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 11 00:26:22.286962 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:26:22.294803 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:26:22.296932 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:26:22.301402 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 11 00:26:22.302593 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:26:22.303131 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 11 00:26:22.303221 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 11 00:26:22.308706 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 11 00:26:22.308827 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 00:26:22.311343 systemd[1]: ignition-files.service: Deactivated successfully. Sep 11 00:26:22.311443 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 11 00:26:22.315761 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 11 00:26:22.315858 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 11 00:26:22.321793 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 11 00:26:22.324673 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 11 00:26:22.324816 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:26:22.327718 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 11 00:26:22.327825 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 11 00:26:22.327952 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:26:22.328195 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 11 00:26:22.328291 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:26:22.330754 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 11 00:26:22.333720 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 11 00:26:22.353987 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 11 00:26:22.369938 ignition[1138]: INFO : Ignition 2.21.0 Sep 11 00:26:22.369938 ignition[1138]: INFO : Stage: umount Sep 11 00:26:22.369938 ignition[1138]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:26:22.369938 ignition[1138]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 11 00:26:22.369938 ignition[1138]: INFO : umount: umount passed Sep 11 00:26:22.369938 ignition[1138]: INFO : Ignition finished successfully Sep 11 00:26:22.366657 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 11 00:26:22.366723 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 11 00:26:22.370910 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 11 00:26:22.370942 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 11 00:26:22.375591 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 11 00:26:22.375643 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 11 00:26:22.378918 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 11 00:26:22.378949 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Sep 11 00:26:22.384717 systemd[1]: Stopped target network.target - Network. Sep 11 00:26:22.387487 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 11 00:26:22.387554 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:26:22.391192 systemd[1]: Stopped target paths.target - Path Units. Sep 11 00:26:22.393706 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 11 00:26:22.394676 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:26:22.398647 systemd[1]: Stopped target slices.target - Slice Units. Sep 11 00:26:22.400470 systemd[1]: Stopped target sockets.target - Socket Units. Sep 11 00:26:22.401593 systemd[1]: iscsid.socket: Deactivated successfully. Sep 11 00:26:22.401634 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:26:22.403509 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 11 00:26:22.403531 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:26:22.407665 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 11 00:26:22.407703 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 11 00:26:22.411672 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 11 00:26:22.411703 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 11 00:26:22.415736 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 11 00:26:22.480675 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d76d977 eth0: Data path switched from VF: enP30832s1 Sep 11 00:26:22.481205 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 11 00:26:22.419703 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 11 00:26:22.427798 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 11 00:26:22.427900 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 11 00:26:22.436859 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 11 00:26:22.436981 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 11 00:26:22.437043 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 11 00:26:22.441387 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 11 00:26:22.443599 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 11 00:26:22.449241 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 11 00:26:22.449266 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:26:22.453830 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 11 00:26:22.458685 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 11 00:26:22.458732 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:26:22.459217 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 11 00:26:22.459245 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:26:22.461375 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 11 00:26:22.461409 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 11 00:26:22.461627 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 11 00:26:22.461656 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:26:22.463689 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:26:22.470854 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 11 00:26:22.470907 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:26:22.485553 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 11 00:26:22.485646 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 11 00:26:22.489368 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 11 00:26:22.489458 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:26:22.493886 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 11 00:26:22.493933 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 11 00:26:22.497138 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 11 00:26:22.497157 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:26:22.504385 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 11 00:26:22.504425 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:26:22.535669 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 11 00:26:22.536708 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 11 00:26:22.541417 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 11 00:26:22.542608 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:26:22.546312 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 11 00:26:22.548688 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 11 00:26:22.548741 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:26:22.554165 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 11 00:26:22.554208 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:26:22.555320 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:26:22.555353 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:26:22.557077 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 11 00:26:22.557124 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 11 00:26:22.557153 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:26:22.579322 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 11 00:26:22.579396 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 11 00:26:23.278304 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 11 00:26:23.278407 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 11 00:26:23.282925 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 11 00:26:23.286657 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 11 00:26:23.286705 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Sep 11 00:26:23.287752 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 11 00:26:23.301369 systemd[1]: Switching root. Sep 11 00:26:23.388965 systemd-journald[205]: Journal stopped Sep 11 00:26:33.503343 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). Sep 11 00:26:33.503375 kernel: SELinux: policy capability network_peer_controls=1 Sep 11 00:26:33.503387 kernel: SELinux: policy capability open_perms=1 Sep 11 00:26:33.503395 kernel: SELinux: policy capability extended_socket_class=1 Sep 11 00:26:33.503402 kernel: SELinux: policy capability always_check_network=0 Sep 11 00:26:33.503410 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 11 00:26:33.503421 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 11 00:26:33.503428 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 11 00:26:33.503436 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 11 00:26:33.503443 kernel: SELinux: policy capability userspace_initial_context=0 Sep 11 00:26:33.503451 kernel: audit: type=1403 audit(1757550390.972:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 11 00:26:33.503460 systemd[1]: Successfully loaded SELinux policy in 117.163ms. Sep 11 00:26:33.503469 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.894ms. Sep 11 00:26:33.503481 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:26:33.503490 systemd[1]: Detected virtualization microsoft. Sep 11 00:26:33.503499 systemd[1]: Detected architecture x86-64. Sep 11 00:26:33.503507 systemd[1]: Detected first boot. Sep 11 00:26:33.503517 systemd[1]: Hostname set to <ci-4372.1.0-n-1c5282f4e4>. Sep 11 00:26:33.503526 systemd[1]: Initializing machine ID from random generator. Sep 11 00:26:33.503534 zram_generator::config[1181]: No configuration found. Sep 11 00:26:33.503543 kernel: Guest personality initialized and is inactive Sep 11 00:26:33.503551 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Sep 11 00:26:33.503559 kernel: Initialized host personality Sep 11 00:26:33.503567 kernel: NET: Registered PF_VSOCK protocol family Sep 11 00:26:33.503575 systemd[1]: Populated /etc with preset unit settings. Sep 11 00:26:33.503587 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 11 00:26:33.503601 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 11 00:26:33.505925 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 11 00:26:33.505950 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 11 00:26:33.505960 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 11 00:26:33.505970 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 11 00:26:33.505979 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 11 00:26:33.505991 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 11 00:26:33.505999 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 11 00:26:33.506008 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
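[Editor's note] On this first boot the machine ID is initialized from the random generator rather than read from disk. The sketch below shows the general shape of that step (a random 128-bit ID formatted as 32 lowercase hex digits and persisted); the real work is done inside systemd via sd_id128, so this is illustrative only, and the example path in main avoids touching /etc.

    #!/usr/bin/env python3
    """Sketch: initialize a machine ID from the random generator on first boot.

    Illustrative only -- systemd does this internally; the format
    (32 lowercase hex characters plus newline) matches /etc/machine-id.
    """
    import os
    import secrets

    MACHINE_ID_PATH = "/etc/machine-id"


    def ensure_machine_id(path: str = MACHINE_ID_PATH) -> str:
        if os.path.exists(path) and os.path.getsize(path) > 1:
            with open(path) as f:
                return f.read().strip()
        machine_id = secrets.token_hex(16)   # 128 random bits, hex-encoded
        with open(path, "w") as f:
            f.write(machine_id + "\n")
        return machine_id


    if __name__ == "__main__":
        print("Initialized machine ID:", ensure_machine_id("./machine-id.example"))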
Sep 11 00:26:33.506017 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 11 00:26:33.506025 systemd[1]: Created slice user.slice - User and Session Slice. Sep 11 00:26:33.506034 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:26:33.506052 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:26:33.506062 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 11 00:26:33.506076 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 11 00:26:33.506088 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 11 00:26:33.506097 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:26:33.506107 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 11 00:26:33.506116 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:26:33.506125 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:26:33.506134 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 11 00:26:33.506143 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 11 00:26:33.506154 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 11 00:26:33.506164 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 11 00:26:33.506173 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:26:33.506182 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:26:33.506191 systemd[1]: Reached target slices.target - Slice Units. Sep 11 00:26:33.506201 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:26:33.506210 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 11 00:26:33.506219 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 11 00:26:33.506231 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 11 00:26:33.506240 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:26:33.506249 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:26:33.506259 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:26:33.506268 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 11 00:26:33.506279 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 11 00:26:33.506288 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 11 00:26:33.506297 systemd[1]: Mounting media.mount - External Media Directory... Sep 11 00:26:33.506306 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:26:33.506315 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 11 00:26:33.506325 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 11 00:26:33.506334 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 11 00:26:33.506344 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 11 00:26:33.506355 systemd[1]: Reached target machines.target - Containers. Sep 11 00:26:33.506364 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 11 00:26:33.506374 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:26:33.506383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:26:33.506392 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 11 00:26:33.506402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:26:33.506411 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:26:33.506420 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:26:33.506431 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 11 00:26:33.506440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:26:33.506449 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 11 00:26:33.506459 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 11 00:26:33.506468 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 11 00:26:33.506478 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 11 00:26:33.506487 systemd[1]: Stopped systemd-fsck-usr.service. Sep 11 00:26:33.506497 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:26:33.506506 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 00:26:33.506517 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:26:33.506526 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 11 00:26:33.506535 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 11 00:26:33.506544 kernel: loop: module loaded Sep 11 00:26:33.506554 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 11 00:26:33.506563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:26:33.506572 systemd[1]: verity-setup.service: Deactivated successfully. Sep 11 00:26:33.506581 systemd[1]: Stopped verity-setup.service. Sep 11 00:26:33.506592 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:26:33.506602 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 11 00:26:33.506611 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 11 00:26:33.506634 systemd[1]: Mounted media.mount - External Media Directory. Sep 11 00:26:33.506643 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 11 00:26:33.506652 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Sep 11 00:26:33.506661 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 11 00:26:33.506692 systemd-journald[1267]: Collecting audit messages is disabled. Sep 11 00:26:33.506716 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:26:33.506726 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 11 00:26:33.506735 systemd-journald[1267]: Journal started Sep 11 00:26:33.506758 systemd-journald[1267]: Runtime Journal (/run/log/journal/539e8122cbf2448a9e37e70d009b014c) is 8M, max 158.9M, 150.9M free. Sep 11 00:26:33.069029 systemd[1]: Queued start job for default target multi-user.target. Sep 11 00:26:33.080977 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 11 00:26:33.081273 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 11 00:26:33.509671 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 11 00:26:33.509714 kernel: fuse: init (API version 7.41) Sep 11 00:26:33.515690 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 00:26:33.518063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:26:33.518174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:26:33.520812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:26:33.520934 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:26:33.522432 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 11 00:26:33.522552 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 11 00:26:33.523932 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:26:33.524046 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:26:33.525379 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:26:33.532262 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 11 00:26:33.535707 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 11 00:26:33.538732 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:26:33.539865 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:26:33.543518 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 11 00:26:33.546046 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 11 00:26:33.796133 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 11 00:26:33.798213 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 11 00:26:33.798239 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:26:33.800702 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 11 00:26:33.802947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 11 00:26:33.805045 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:26:33.907480 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 11 00:26:33.911742 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
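[Editor's note] The journald line above reports the runtime journal in /run as 8M used with a 158.9M cap and 150.9M free; used plus free should add up to roughly the cap. A small Python sketch that parses such a line and checks that arithmetic; the regex targets exactly the phrasing shown in the log.

    #!/usr/bin/env python3
    """Sketch: parse journald's "Runtime Journal ... is XM, max YM, ZM free" line."""
    import re

    LINE = (
        "Runtime Journal (/run/log/journal/539e8122cbf2448a9e37e70d009b014c) "
        "is 8M, max 158.9M, 150.9M free."
    )

    PATTERN = re.compile(
        r"is (?P<used>[\d.]+)M, max (?P<max>[\d.]+)M, (?P<free>[\d.]+)M free"
    )


    def check(line: str) -> None:
        m = PATTERN.search(line)
        if not m:
            raise ValueError("not a journald size line")
        used, cap, free = (float(m.group(k)) for k in ("used", "max", "free"))
        # 8M used + 150.9M free == 158.9M cap in the log above.
        print(f"used={used}M free={free}M cap={cap}M, used+free={used + free:.1f}M")


    if __name__ == "__main__":
        check(LINE)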
Sep 11 00:26:33.914079 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:26:33.917455 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 11 00:26:33.921764 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 11 00:26:33.925722 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:26:33.928447 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 11 00:26:33.934297 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:26:33.937589 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:26:33.941960 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 11 00:26:33.947734 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 11 00:26:33.961201 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:26:33.970019 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 11 00:26:33.972813 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 11 00:26:33.973896 systemd-journald[1267]: Time spent on flushing to /var/log/journal/539e8122cbf2448a9e37e70d009b014c is 14.832ms for 992 entries. Sep 11 00:26:33.973896 systemd-journald[1267]: System Journal (/var/log/journal/539e8122cbf2448a9e37e70d009b014c) is 8M, max 2.6G, 2.6G free. Sep 11 00:26:34.024046 systemd-journald[1267]: Received client request to flush runtime journal. Sep 11 00:26:34.024081 kernel: loop0: detected capacity change from 0 to 113872 Sep 11 00:26:34.024095 kernel: ACPI: bus type drm_connector registered Sep 11 00:26:33.977768 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 11 00:26:34.012216 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:26:34.012347 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:26:34.025072 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 11 00:26:34.029000 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 11 00:26:34.032333 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:26:34.070040 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Sep 11 00:26:34.070211 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Sep 11 00:26:34.073925 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:26:34.106988 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 11 00:26:34.107382 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 11 00:26:34.353628 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 11 00:26:34.385630 kernel: loop1: detected capacity change from 0 to 146240 Sep 11 00:26:34.699762 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 11 00:26:34.705741 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:26:34.718631 kernel: loop2: detected capacity change from 0 to 229808 Sep 11 00:26:34.733658 systemd-udevd[1344]: Using default interface naming scheme 'v255'. 
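[Editor's note] The flush message above gives both a total time and an entry count (14.832 ms for 992 entries, roughly 15 microseconds per entry). A short Python sketch that pulls those two numbers out of such a line and derives the per-entry average; the regex is tied to the exact wording in the log.

    #!/usr/bin/env python3
    """Sketch: derive per-entry flush cost from journald's flush-timing message."""
    import re

    LINE = (
        "Time spent on flushing to /var/log/journal/539e8122cbf2448a9e37e70d009b014c "
        "is 14.832ms for 992 entries."
    )

    m = re.search(r"is (?P<ms>[\d.]+)ms for (?P<entries>\d+) entries", LINE)
    assert m is not None
    total_ms = float(m.group("ms"))
    entries = int(m.group("entries"))
    # 14.832 ms / 992 entries is about 0.015 ms (~15 microseconds) per entry.
    print(f"{total_ms / entries * 1000:.1f} us per entry across {entries} entries")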
Sep 11 00:26:34.755631 kernel: loop3: detected capacity change from 0 to 28504 Sep 11 00:26:34.858298 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:26:34.863065 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:26:34.907767 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 11 00:26:34.929020 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 11 00:26:34.951628 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 11 00:26:35.002629 kernel: mousedev: PS/2 mouse device common for all mice Sep 11 00:26:35.016633 kernel: hv_vmbus: registering driver hyperv_fb Sep 11 00:26:35.020166 kernel: hv_vmbus: registering driver hv_balloon Sep 11 00:26:35.021862 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 11 00:26:35.021889 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 11 00:26:35.024783 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 11 00:26:35.028589 kernel: Console: switching to colour dummy device 80x25 Sep 11 00:26:35.036579 kernel: Console: switching to colour frame buffer device 128x48 Sep 11 00:26:35.057164 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 11 00:26:35.109845 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:26:35.118432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:26:35.119184 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:26:35.124764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:26:35.130843 kernel: loop4: detected capacity change from 0 to 113872 Sep 11 00:26:35.147629 kernel: loop5: detected capacity change from 0 to 146240 Sep 11 00:26:35.154077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:26:35.154238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:26:35.157831 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:26:35.166088 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:26:35.174628 kernel: loop6: detected capacity change from 0 to 229808 Sep 11 00:26:35.204657 kernel: loop7: detected capacity change from 0 to 28504 Sep 11 00:26:35.214681 (sd-merge)[1417]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 11 00:26:35.218799 (sd-merge)[1417]: Merged extensions into '/usr'. Sep 11 00:26:35.228280 systemd[1]: Reload requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)... Sep 11 00:26:35.228296 systemd[1]: Reloading... Sep 11 00:26:35.359213 systemd-networkd[1350]: lo: Link UP Sep 11 00:26:35.359224 systemd-networkd[1350]: lo: Gained carrier Sep 11 00:26:35.361406 systemd-networkd[1350]: Enumeration completed Sep 11 00:26:35.361873 systemd-networkd[1350]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:26:35.361876 systemd-networkd[1350]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:26:35.366629 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 11 00:26:35.366823 zram_generator::config[1469]: No configuration found. 
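[Editor's note] The (sd-merge) lines above are systemd-sysext discovering the staged extension images (including the kubernetes.raw link written earlier by the files stage) and overlaying them onto /usr. Below is a hedged Python sketch of the discovery half only, scanning the usual sysext search directories for images; the merge itself (an overlay mount) is not reproduced.

    #!/usr/bin/env python3
    """Sketch: list sysext extension images the way the (sd-merge) lines suggest.

    Discovery only -- the actual overlay of the images onto /usr is left out.
    """
    import os

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]


    def discover_extensions() -> list[str]:
        names = []
        for d in SEARCH_DIRS:
            if not os.path.isdir(d):
                continue
            for entry in sorted(os.listdir(d)):
                if entry.endswith(".raw"):
                    names.append(entry[: -len(".raw")])   # image file
                elif os.path.isdir(os.path.join(d, entry)):
                    names.append(entry)                   # directory tree extension
        return names


    if __name__ == "__main__":
        exts = discover_extensions()
        print("Using extensions " + ", ".join(f"'{e}'" for e in exts) if exts
              else "No extensions found.")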
Sep 11 00:26:35.375377 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 11 00:26:35.382671 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d76d977 eth0: Data path switched to VF: enP30832s1 Sep 11 00:26:35.383134 systemd-networkd[1350]: enP30832s1: Link UP Sep 11 00:26:35.383555 systemd-networkd[1350]: eth0: Link UP Sep 11 00:26:35.385095 systemd-networkd[1350]: eth0: Gained carrier Sep 11 00:26:35.385115 systemd-networkd[1350]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:26:35.390200 systemd-networkd[1350]: enP30832s1: Gained carrier Sep 11 00:26:35.391642 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 11 00:26:35.430662 systemd-networkd[1350]: eth0: DHCPv4 address 10.200.8.50/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 11 00:26:35.472952 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:26:35.556936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 11 00:26:35.557453 systemd[1]: Reloading finished in 328 ms. Sep 11 00:26:35.570908 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:26:35.573894 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 11 00:26:35.576944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:26:35.599281 systemd[1]: Starting ensure-sysext.service... Sep 11 00:26:35.602481 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 00:26:35.615792 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 00:26:35.620737 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 00:26:35.623807 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:26:35.643747 systemd[1]: Reload requested from client PID 1527 ('systemctl') (unit ensure-sysext.service)... Sep 11 00:26:35.643758 systemd[1]: Reloading... Sep 11 00:26:35.694168 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 11 00:26:35.694201 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 11 00:26:35.694412 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 11 00:26:35.694595 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 11 00:26:35.696637 zram_generator::config[1565]: No configuration found. Sep 11 00:26:35.697159 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 11 00:26:35.697482 systemd-tmpfiles[1531]: ACLs are not supported, ignoring. Sep 11 00:26:35.697560 systemd-tmpfiles[1531]: ACLs are not supported, ignoring. Sep 11 00:26:35.717150 systemd-tmpfiles[1531]: Detected autofs mount point /boot during canonicalization of boot. 
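[Editor's note] The DHCPv4 line above packs the acquired address, prefix length, gateway and DHCP server into one message; on Azure the server 168.63.129.16 is the platform's well-known wireserver address. A small Python sketch that splits such a line into structured fields, following networkd's wording as shown in the log.

    #!/usr/bin/env python3
    """Sketch: pull address/prefix/gateway/server out of a networkd DHCPv4 line."""
    import re

    LINE = ("eth0: DHCPv4 address 10.200.8.50/24, gateway 10.200.8.1 "
            "acquired from 168.63.129.16")

    PATTERN = re.compile(
        r"(?P<iface>\S+): DHCPv4 address (?P<addr>[\d.]+)/(?P<prefix>\d+), "
        r"gateway (?P<gw>[\d.]+) acquired from (?P<server>[\d.]+)"
    )

    m = PATTERN.search(LINE)
    assert m is not None
    lease = m.groupdict()
    # {'iface': 'eth0', 'addr': '10.200.8.50', 'prefix': '24', ...}
    print(lease)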
Sep 11 00:26:35.717244 systemd-tmpfiles[1531]: Skipping /boot Sep 11 00:26:35.724497 systemd-tmpfiles[1531]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:26:35.724511 systemd-tmpfiles[1531]: Skipping /boot Sep 11 00:26:35.775813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:26:35.854560 systemd[1]: Reloading finished in 210 ms. Sep 11 00:26:35.874778 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 00:26:35.880771 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 00:26:35.883888 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:26:35.890408 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:26:35.893816 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 11 00:26:35.897805 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 11 00:26:35.904333 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:26:35.907794 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 11 00:26:35.913593 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:26:35.914860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:26:35.919523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:26:35.923735 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:26:35.926121 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:26:35.926231 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:26:35.927086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:26:35.927929 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:26:35.931196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:26:35.931546 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:26:35.935213 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:26:35.935344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:26:35.950501 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:26:35.952798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:26:35.957046 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:26:35.962853 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:26:35.964016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 11 00:26:35.969412 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:26:35.969519 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:26:35.969693 systemd[1]: Reached target time-set.target - System Time Set. Sep 11 00:26:35.973818 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 11 00:26:35.978316 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 11 00:26:35.981039 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:26:35.981165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:26:35.986093 systemd[1]: Finished ensure-sysext.service. Sep 11 00:26:35.989014 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:26:35.992764 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:26:35.995995 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:26:35.996208 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:26:36.000918 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:26:36.001049 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:26:36.007875 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:26:36.007939 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:26:36.040777 systemd-resolved[1630]: Positive Trust Anchors: Sep 11 00:26:36.040785 systemd-resolved[1630]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:26:36.040812 systemd-resolved[1630]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:26:36.044104 systemd-resolved[1630]: Using system hostname 'ci-4372.1.0-n-1c5282f4e4'. Sep 11 00:26:36.045335 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:26:36.046649 systemd[1]: Reached target network.target - Network. Sep 11 00:26:36.049704 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:26:36.053238 augenrules[1669]: No rules Sep 11 00:26:36.054142 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:26:36.054377 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:26:36.113981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:26:36.113998 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
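[Editor's note] systemd-resolved's positive trust anchor above is the root zone's DS record, written as owner, class, type, key tag, algorithm, digest type and digest (".", IN, DS, 20326, 8, 2 and the SHA-256 digest shown; algorithm 8 is RSA/SHA-256, digest type 2 is SHA-256). A brief Python sketch that parses a line in that form into its fields, using the record from the log as the example input.

    #!/usr/bin/env python3
    """Sketch: split a ". IN DS" trust-anchor line into its DS record fields."""

    ANCHOR = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")


    def parse_ds(line: str) -> dict:
        owner, _class, rrtype, key_tag, algorithm, digest_type, digest = line.split()
        assert (_class, rrtype) == ("IN", "DS")
        return {
            "owner": owner,                  # "." is the DNS root zone
            "key_tag": int(key_tag),         # 20326
            "algorithm": int(algorithm),     # 8 = RSA/SHA-256
            "digest_type": int(digest_type), # 2 = SHA-256
            "digest": digest,
        }


    if __name__ == "__main__":
        print(parse_ds(ANCHOR))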
Sep 11 00:26:36.145773 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 11 00:26:36.147271 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 11 00:26:37.242744 systemd-networkd[1350]: eth0: Gained IPv6LL Sep 11 00:26:37.244647 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:26:37.247827 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:26:38.061894 ldconfig[1311]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 11 00:26:38.072243 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 11 00:26:38.074902 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 11 00:26:38.091812 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 11 00:26:38.093281 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:26:38.095771 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 11 00:26:38.098674 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 11 00:26:38.101658 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 11 00:26:38.103110 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 11 00:26:38.105693 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 11 00:26:38.108654 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 11 00:26:38.111661 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 11 00:26:38.111689 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:26:38.113662 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:26:38.116166 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 11 00:26:38.120552 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 00:26:38.123588 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 00:26:38.125199 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 00:26:38.126933 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 00:26:38.136048 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 11 00:26:38.138930 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 11 00:26:38.140858 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 00:26:38.142672 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:26:38.143761 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:26:38.144753 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:26:38.144782 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:26:38.146408 systemd[1]: Starting chronyd.service - NTP client/server... 
Sep 11 00:26:38.156692 systemd[1]: Starting containerd.service - containerd container runtime... Sep 11 00:26:38.159377 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 11 00:26:38.163833 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 00:26:38.168998 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 00:26:38.174734 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 00:26:38.177763 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 00:26:38.179870 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 00:26:38.181860 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 11 00:26:38.183737 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Sep 11 00:26:38.186787 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 11 00:26:38.189026 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 11 00:26:38.192513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:26:38.197785 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 00:26:38.200113 KVP[1693]: KVP starting; pid is:1693 Sep 11 00:26:38.200785 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:26:38.205743 KVP[1693]: KVP LIC Version: 3.1 Sep 11 00:26:38.206634 kernel: hv_utils: KVP IC version 4.0 Sep 11 00:26:38.206790 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 00:26:38.211784 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 00:26:38.213352 jq[1687]: false Sep 11 00:26:38.217166 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 11 00:26:38.224751 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 11 00:26:38.228396 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 00:26:38.231145 (chronyd)[1682]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 11 00:26:38.233219 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 11 00:26:38.233711 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 00:26:38.237134 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 00:26:38.238605 chronyd[1705]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 11 00:26:38.242570 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 00:26:38.242852 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 11 00:26:38.245685 chronyd[1705]: Timezone right/UTC failed leap second check, ignoring Sep 11 00:26:38.245819 chronyd[1705]: Loaded seccomp filter (level 2) Sep 11 00:26:38.246896 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 00:26:38.249869 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 11 00:26:38.252497 systemd[1]: Started chronyd.service - NTP client/server. Sep 11 00:26:38.254776 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 00:26:38.273940 jq[1703]: true Sep 11 00:26:38.287481 jq[1726]: true Sep 11 00:26:38.293671 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Refreshing passwd entry cache Sep 11 00:26:38.293683 oslogin_cache_refresh[1692]: Refreshing passwd entry cache Sep 11 00:26:38.310117 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Failure getting users, quitting Sep 11 00:26:38.310117 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:26:38.310117 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Refreshing group entry cache Sep 11 00:26:38.309424 oslogin_cache_refresh[1692]: Failure getting users, quitting Sep 11 00:26:38.309439 oslogin_cache_refresh[1692]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:26:38.309472 oslogin_cache_refresh[1692]: Refreshing group entry cache Sep 11 00:26:38.329091 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Failure getting groups, quitting Sep 11 00:26:38.329091 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:26:38.329068 oslogin_cache_refresh[1692]: Failure getting groups, quitting Sep 11 00:26:38.329076 oslogin_cache_refresh[1692]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:26:38.330108 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 11 00:26:38.330594 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 11 00:26:38.360523 (ntainerd)[1746]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 00:26:38.360899 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:26:38.365331 extend-filesystems[1691]: Found /dev/nvme0n1p6 Sep 11 00:26:38.370846 update_engine[1702]: I20250911 00:26:38.368796 1702 main.cc:92] Flatcar Update Engine starting Sep 11 00:26:38.366178 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 00:26:38.366683 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 00:26:38.377333 systemd-logind[1700]: New seat seat0. Sep 11 00:26:38.378835 systemd-logind[1700]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 11 00:26:38.378962 systemd[1]: Started systemd-logind.service - User Login Management. Sep 11 00:26:38.500400 extend-filesystems[1691]: Found /dev/nvme0n1p9 Sep 11 00:26:38.840515 extend-filesystems[1691]: Checking size of /dev/nvme0n1p9 Sep 11 00:26:38.842275 tar[1708]: linux-amd64/LICENSE Sep 11 00:26:38.842275 tar[1708]: linux-amd64/helm Sep 11 00:26:38.851213 dbus-daemon[1685]: [system] SELinux support is enabled Sep 11 00:26:38.851327 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 11 00:26:38.859474 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 00:26:38.859502 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 00:26:38.861956 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 00:26:38.861972 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 00:26:38.866834 dbus-daemon[1685]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 11 00:26:38.869204 systemd[1]: Started update-engine.service - Update Engine. Sep 11 00:26:38.872146 update_engine[1702]: I20250911 00:26:38.869469 1702 update_check_scheduler.cc:74] Next update check in 4m14s Sep 11 00:26:38.872790 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 11 00:26:38.884508 extend-filesystems[1691]: Old size kept for /dev/nvme0n1p9 Sep 11 00:26:38.886762 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 00:26:38.887229 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 11 00:26:38.896263 bash[1743]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:26:38.900835 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 00:26:38.903750 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 00:26:39.012472 coreos-metadata[1684]: Sep 11 00:26:39.011 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 11 00:26:39.019438 coreos-metadata[1684]: Sep 11 00:26:39.019 INFO Fetch successful Sep 11 00:26:39.019438 coreos-metadata[1684]: Sep 11 00:26:39.019 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 11 00:26:39.023018 coreos-metadata[1684]: Sep 11 00:26:39.022 INFO Fetch successful Sep 11 00:26:39.023550 coreos-metadata[1684]: Sep 11 00:26:39.023 INFO Fetching http://168.63.129.16/machine/57870a91-8dec-4a6e-b53c-e04ca3296bb3/b28b2fa6%2D0da7%2D463b%2Da471%2D9bc5a350a71e.%5Fci%2D4372.1.0%2Dn%2D1c5282f4e4?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 11 00:26:39.028327 coreos-metadata[1684]: Sep 11 00:26:39.028 INFO Fetch successful Sep 11 00:26:39.028327 coreos-metadata[1684]: Sep 11 00:26:39.028 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 11 00:26:39.047634 coreos-metadata[1684]: Sep 11 00:26:39.046 INFO Fetch successful Sep 11 00:26:39.112244 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 11 00:26:39.114517 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 00:26:39.534311 tar[1708]: linux-amd64/README.md Sep 11 00:26:39.546303 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:26:39.550583 locksmithd[1774]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 00:26:39.595709 sshd_keygen[1753]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 00:26:39.615850 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 00:26:39.622749 systemd[1]: Starting issuegen.service - Generate /run/issue... 
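
The coreos-metadata fetches above hit two different Azure endpoints: the WireServer at 168.63.129.16 (versions, goal state, shared config) and the instance metadata service at 169.254.169.254 (vmSize). A minimal Python sketch of the same two requests, assuming the documented IMDS requirement that requests carry a "Metadata: true" header (the WireServer calls need no special headers); the timeout value is illustrative only:

    import urllib.request

    def fetch(url, headers=None):
        # plain GET; the 5 s timeout is an assumption, not the agent's own value
        req = urllib.request.Request(url, headers=headers or {})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    # WireServer version probe (same URL as the first fetch in the log above)
    versions = fetch("http://168.63.129.16/?comp=versions")

    # IMDS vmSize query (same URL as the last fetch in the log above)
    vm_size = fetch(
        "http://169.254.169.254/metadata/instance/compute/vmSize"
        "?api-version=2017-08-01&format=text",
        headers={"Metadata": "true"},
    )
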
Sep 11 00:26:39.627154 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 11 00:26:39.631703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:26:39.641169 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 00:26:39.641529 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 00:26:39.643844 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:26:39.649143 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 00:26:39.665977 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 11 00:26:39.676735 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 00:26:39.681268 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 00:26:39.683787 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 11 00:26:39.685981 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 00:26:40.097703 kubelet[1823]: E0911 00:26:40.097654 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:26:40.099254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:26:40.099366 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:26:40.099607 systemd[1]: kubelet.service: Consumed 841ms CPU time, 266.4M memory peak. Sep 11 00:26:40.559862 containerd[1746]: time="2025-09-11T00:26:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 00:26:40.560559 containerd[1746]: time="2025-09-11T00:26:40.560523293Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 11 00:26:40.567621 containerd[1746]: time="2025-09-11T00:26:40.567578679Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.998µs" Sep 11 00:26:40.567621 containerd[1746]: time="2025-09-11T00:26:40.567601130Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 00:26:40.567699 containerd[1746]: time="2025-09-11T00:26:40.567626923Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 00:26:40.567772 containerd[1746]: time="2025-09-11T00:26:40.567756615Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 00:26:40.567772 containerd[1746]: time="2025-09-11T00:26:40.567769083Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 00:26:40.567811 containerd[1746]: time="2025-09-11T00:26:40.567788505Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:26:40.567856 containerd[1746]: time="2025-09-11T00:26:40.567830098Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:26:40.567856 
containerd[1746]: time="2025-09-11T00:26:40.567849940Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:26:40.568018 containerd[1746]: time="2025-09-11T00:26:40.568001752Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:26:40.568018 containerd[1746]: time="2025-09-11T00:26:40.568011779Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:26:40.568057 containerd[1746]: time="2025-09-11T00:26:40.568021160Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:26:40.568057 containerd[1746]: time="2025-09-11T00:26:40.568028148Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 00:26:40.568094 containerd[1746]: time="2025-09-11T00:26:40.568085201Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 00:26:40.568219 containerd[1746]: time="2025-09-11T00:26:40.568204194Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:26:40.568247 containerd[1746]: time="2025-09-11T00:26:40.568225184Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:26:40.568247 containerd[1746]: time="2025-09-11T00:26:40.568233574Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 00:26:40.568281 containerd[1746]: time="2025-09-11T00:26:40.568255337Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 00:26:40.568467 containerd[1746]: time="2025-09-11T00:26:40.568456712Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 00:26:40.568518 containerd[1746]: time="2025-09-11T00:26:40.568498172Z" level=info msg="metadata content store policy set" policy=shared Sep 11 00:26:40.853436 containerd[1746]: time="2025-09-11T00:26:40.853367973Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 00:26:40.853436 containerd[1746]: time="2025-09-11T00:26:40.853424833Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 00:26:40.853529 containerd[1746]: time="2025-09-11T00:26:40.853442139Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 00:26:40.853529 containerd[1746]: time="2025-09-11T00:26:40.853452759Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 00:26:40.853529 containerd[1746]: time="2025-09-11T00:26:40.853463004Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 00:26:40.853529 containerd[1746]: time="2025-09-11T00:26:40.853472384Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 00:26:40.853529 
containerd[1746]: time="2025-09-11T00:26:40.853484919Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 00:26:40.853529 containerd[1746]: time="2025-09-11T00:26:40.853494111Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 00:26:40.853529 containerd[1746]: time="2025-09-11T00:26:40.853504278Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 00:26:40.853529 containerd[1746]: time="2025-09-11T00:26:40.853512536Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 00:26:40.853529 containerd[1746]: time="2025-09-11T00:26:40.853524316Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 00:26:40.853734 containerd[1746]: time="2025-09-11T00:26:40.853538735Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 00:26:40.853734 containerd[1746]: time="2025-09-11T00:26:40.853648495Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:26:40.853734 containerd[1746]: time="2025-09-11T00:26:40.853664635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:26:40.853734 containerd[1746]: time="2025-09-11T00:26:40.853679469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 00:26:40.853734 containerd[1746]: time="2025-09-11T00:26:40.853688844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:26:40.853734 containerd[1746]: time="2025-09-11T00:26:40.853698095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:26:40.853734 containerd[1746]: time="2025-09-11T00:26:40.853706407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:26:40.853734 containerd[1746]: time="2025-09-11T00:26:40.853717979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 00:26:40.853734 containerd[1746]: time="2025-09-11T00:26:40.853730199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:26:40.853892 containerd[1746]: time="2025-09-11T00:26:40.853740632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:26:40.853892 containerd[1746]: time="2025-09-11T00:26:40.853749713Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:26:40.853892 containerd[1746]: time="2025-09-11T00:26:40.853759373Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:26:40.853892 containerd[1746]: time="2025-09-11T00:26:40.853812305Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:26:40.853892 containerd[1746]: time="2025-09-11T00:26:40.853822985Z" level=info msg="Start snapshots syncer" Sep 11 00:26:40.853892 containerd[1746]: time="2025-09-11T00:26:40.853844783Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:26:40.854096 containerd[1746]: 
time="2025-09-11T00:26:40.854060660Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:26:40.854202 containerd[1746]: time="2025-09-11T00:26:40.854102704Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:26:40.854202 containerd[1746]: time="2025-09-11T00:26:40.854176437Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:26:40.854267 containerd[1746]: time="2025-09-11T00:26:40.854254270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:26:40.854287 containerd[1746]: time="2025-09-11T00:26:40.854269268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:26:40.854287 containerd[1746]: time="2025-09-11T00:26:40.854279437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:26:40.854320 containerd[1746]: time="2025-09-11T00:26:40.854296437Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:26:40.854320 containerd[1746]: time="2025-09-11T00:26:40.854314278Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:26:40.854356 containerd[1746]: time="2025-09-11T00:26:40.854324307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:26:40.854356 containerd[1746]: time="2025-09-11T00:26:40.854333336Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:26:40.854356 containerd[1746]: time="2025-09-11T00:26:40.854352820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer 
type=io.containerd.grpc.v1 Sep 11 00:26:40.854420 containerd[1746]: time="2025-09-11T00:26:40.854361839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:26:40.854420 containerd[1746]: time="2025-09-11T00:26:40.854370359Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:26:40.854420 containerd[1746]: time="2025-09-11T00:26:40.854393190Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:26:40.854420 containerd[1746]: time="2025-09-11T00:26:40.854411043Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:26:40.854521 containerd[1746]: time="2025-09-11T00:26:40.854419582Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:26:40.854521 containerd[1746]: time="2025-09-11T00:26:40.854427803Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:26:40.854521 containerd[1746]: time="2025-09-11T00:26:40.854434811Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:26:40.854521 containerd[1746]: time="2025-09-11T00:26:40.854446050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:26:40.854521 containerd[1746]: time="2025-09-11T00:26:40.854454850Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:26:40.854521 containerd[1746]: time="2025-09-11T00:26:40.854467877Z" level=info msg="runtime interface created" Sep 11 00:26:40.854521 containerd[1746]: time="2025-09-11T00:26:40.854472183Z" level=info msg="created NRI interface" Sep 11 00:26:40.854521 containerd[1746]: time="2025-09-11T00:26:40.854478973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:26:40.854521 containerd[1746]: time="2025-09-11T00:26:40.854488348Z" level=info msg="Connect containerd service" Sep 11 00:26:40.854521 containerd[1746]: time="2025-09-11T00:26:40.854512043Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:26:40.855113 containerd[1746]: time="2025-09-11T00:26:40.855088381Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:26:41.995545 containerd[1746]: time="2025-09-11T00:26:41.995477614Z" level=info msg="Start subscribing containerd event" Sep 11 00:26:41.995898 containerd[1746]: time="2025-09-11T00:26:41.995525439Z" level=info msg="Start recovering state" Sep 11 00:26:41.995898 containerd[1746]: time="2025-09-11T00:26:41.995704531Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 00:26:41.995898 containerd[1746]: time="2025-09-11T00:26:41.995742210Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 11 00:26:41.996284 containerd[1746]: time="2025-09-11T00:26:41.996019222Z" level=info msg="Start event monitor" Sep 11 00:26:41.996284 containerd[1746]: time="2025-09-11T00:26:41.996036569Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:26:41.996284 containerd[1746]: time="2025-09-11T00:26:41.996058398Z" level=info msg="Start streaming server" Sep 11 00:26:41.996284 containerd[1746]: time="2025-09-11T00:26:41.996071940Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:26:41.996284 containerd[1746]: time="2025-09-11T00:26:41.996078783Z" level=info msg="runtime interface starting up..." Sep 11 00:26:41.996284 containerd[1746]: time="2025-09-11T00:26:41.996084519Z" level=info msg="starting plugins..." Sep 11 00:26:41.996284 containerd[1746]: time="2025-09-11T00:26:41.996095974Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:26:41.996284 containerd[1746]: time="2025-09-11T00:26:41.996214589Z" level=info msg="containerd successfully booted in 1.436668s" Sep 11 00:26:41.996278 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 00:26:41.998937 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 00:26:42.001150 systemd[1]: Startup finished in 2.861s (kernel) + 16.984s (initrd) + 11.144s (userspace) = 30.990s. Sep 11 00:26:42.424986 login[1834]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 11 00:26:42.426548 login[1835]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 11 00:26:42.431165 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:26:42.432758 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:26:42.439891 systemd-logind[1700]: New session 1 of user core. Sep 11 00:26:42.452723 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:26:42.455561 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 00:26:42.467231 (systemd)[1867]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:26:42.468775 systemd-logind[1700]: New session c1 of user core. 
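
The "starting cri plugin" entry above packs the whole CRI configuration into a single escaped JSON string (config="{\"containerd\":...}"). A minimal Python sketch for unpacking that value from a copied journal line; `entry` is an assumed variable holding that one line, and the example lookups correspond to the SystemdCgroup and CNI confDir values visible in the dump:

    import json
    import re

    def cri_config(entry: str) -> dict:
        # grab the quoted, backslash-escaped value of the config= field
        match = re.search(r'config="((?:[^"\\]|\\.)*)"', entry)
        raw = match.group(1).replace('\\"', '"')  # this entry only escapes double quotes
        return json.loads(raw)

    # usage, with `entry` holding the 'starting cri plugin' journal line above:
    # cfg = cri_config(entry)
    # cfg["containerd"]["runtimes"]["runc"]["options"]["SystemdCgroup"]  # True
    # cfg["cni"]["confDir"]                                              # "/etc/cni/net.d"
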
Sep 11 00:26:42.593031 waagent[1832]: 2025-09-11T00:26:42.592977Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 11 00:26:42.595271 waagent[1832]: 2025-09-11T00:26:42.595185Z INFO Daemon Daemon OS: flatcar 4372.1.0 Sep 11 00:26:42.596773 waagent[1832]: 2025-09-11T00:26:42.596218Z INFO Daemon Daemon Python: 3.11.12 Sep 11 00:26:42.598352 waagent[1832]: 2025-09-11T00:26:42.598318Z INFO Daemon Daemon Run daemon Sep 11 00:26:42.600224 waagent[1832]: 2025-09-11T00:26:42.600112Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.1.0' Sep 11 00:26:42.603058 waagent[1832]: 2025-09-11T00:26:42.603024Z INFO Daemon Daemon Using waagent for provisioning Sep 11 00:26:42.605162 waagent[1832]: 2025-09-11T00:26:42.605133Z INFO Daemon Daemon Activate resource disk Sep 11 00:26:42.605471 waagent[1832]: 2025-09-11T00:26:42.605447Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 11 00:26:42.610458 waagent[1832]: 2025-09-11T00:26:42.610426Z INFO Daemon Daemon Found device: None Sep 11 00:26:42.612150 waagent[1832]: 2025-09-11T00:26:42.612121Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 11 00:26:42.614908 waagent[1832]: 2025-09-11T00:26:42.614882Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 11 00:26:42.618812 waagent[1832]: 2025-09-11T00:26:42.618782Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 11 00:26:42.621051 waagent[1832]: 2025-09-11T00:26:42.621024Z INFO Daemon Daemon Running default provisioning handler Sep 11 00:26:42.628509 waagent[1832]: 2025-09-11T00:26:42.628466Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 11 00:26:42.631030 systemd[1867]: Queued start job for default target default.target. Sep 11 00:26:42.632750 waagent[1832]: 2025-09-11T00:26:42.632718Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 11 00:26:42.634336 systemd[1867]: Created slice app.slice - User Application Slice. Sep 11 00:26:42.634588 systemd[1867]: Reached target paths.target - Paths. Sep 11 00:26:42.634633 systemd[1867]: Reached target timers.target - Timers. Sep 11 00:26:42.637030 waagent[1832]: 2025-09-11T00:26:42.635402Z INFO Daemon Daemon cloud-init is enabled: False Sep 11 00:26:42.637030 waagent[1832]: 2025-09-11T00:26:42.635520Z INFO Daemon Daemon Copying ovf-env.xml Sep 11 00:26:42.637695 systemd[1867]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:26:42.646588 systemd[1867]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:26:42.646663 systemd[1867]: Reached target sockets.target - Sockets. Sep 11 00:26:42.646695 systemd[1867]: Reached target basic.target - Basic System. Sep 11 00:26:42.646750 systemd[1867]: Reached target default.target - Main User Target. Sep 11 00:26:42.646768 systemd[1867]: Startup finished in 173ms. Sep 11 00:26:42.646839 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:26:42.652754 systemd[1]: Started session-1.scope - Session 1 of User core. 
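
The agent's cloud-init probe above is just systemctl is-enabled against the cloud-init unit; the non-zero exit status (4 in the entry above) is what leads to the "cloud-init is enabled: False" conclusion. The same probe, sketched in Python:

    import subprocess

    probe = subprocess.run(
        ["systemctl", "is-enabled", "cloud-init-local.service"],
        capture_output=True, text=True,
    )
    # non-zero (4 in the log above) => the agent treats cloud-init as not enabled
    print(probe.returncode, probe.stdout.strip() or probe.stderr.strip())
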
Sep 11 00:26:42.708206 waagent[1832]: 2025-09-11T00:26:42.708143Z INFO Daemon Daemon Successfully mounted dvd Sep 11 00:26:42.730125 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 11 00:26:42.732114 waagent[1832]: 2025-09-11T00:26:42.732076Z INFO Daemon Daemon Detect protocol endpoint Sep 11 00:26:42.732319 waagent[1832]: 2025-09-11T00:26:42.732292Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 11 00:26:42.732445 waagent[1832]: 2025-09-11T00:26:42.732427Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 11 00:26:42.732583 waagent[1832]: 2025-09-11T00:26:42.732572Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 11 00:26:42.732751 waagent[1832]: 2025-09-11T00:26:42.732735Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 11 00:26:42.732851 waagent[1832]: 2025-09-11T00:26:42.732839Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 11 00:26:42.744288 waagent[1832]: 2025-09-11T00:26:42.744270Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 11 00:26:42.744555 waagent[1832]: 2025-09-11T00:26:42.744541Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 11 00:26:42.744735 waagent[1832]: 2025-09-11T00:26:42.744721Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 11 00:26:42.882696 waagent[1832]: 2025-09-11T00:26:42.882653Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 11 00:26:42.882974 waagent[1832]: 2025-09-11T00:26:42.882811Z INFO Daemon Daemon Forcing an update of the goal state. Sep 11 00:26:42.887560 waagent[1832]: 2025-09-11T00:26:42.887531Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 11 00:26:42.911020 waagent[1832]: 2025-09-11T00:26:42.910989Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 11 00:26:42.911919 waagent[1832]: 2025-09-11T00:26:42.911425Z INFO Daemon Sep 11 00:26:42.911919 waagent[1832]: 2025-09-11T00:26:42.911676Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 9c4fbf13-4b36-401e-acf0-c256550ba1a0 eTag: 16623287782926866144 source: Fabric] Sep 11 00:26:42.911919 waagent[1832]: 2025-09-11T00:26:42.912109Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 11 00:26:42.911919 waagent[1832]: 2025-09-11T00:26:42.912427Z INFO Daemon Sep 11 00:26:42.911919 waagent[1832]: 2025-09-11T00:26:42.912571Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 11 00:26:42.918938 waagent[1832]: 2025-09-11T00:26:42.918918Z INFO Daemon Daemon Downloading artifacts profile blob Sep 11 00:26:42.993193 waagent[1832]: 2025-09-11T00:26:42.993124Z INFO Daemon Downloaded certificate {'thumbprint': 'B962699D56E4ABC1F0A23A3E77ED6722A340E62E', 'hasPrivateKey': True} Sep 11 00:26:42.995337 waagent[1832]: 2025-09-11T00:26:42.995305Z INFO Daemon Fetch goal state completed Sep 11 00:26:43.016776 waagent[1832]: 2025-09-11T00:26:43.016750Z INFO Daemon Daemon Starting provisioning Sep 11 00:26:43.017287 waagent[1832]: 2025-09-11T00:26:43.017114Z INFO Daemon Daemon Handle ovf-env.xml. 
Sep 11 00:26:43.018040 waagent[1832]: 2025-09-11T00:26:43.017876Z INFO Daemon Daemon Set hostname [ci-4372.1.0-n-1c5282f4e4] Sep 11 00:26:43.020164 waagent[1832]: 2025-09-11T00:26:43.020129Z INFO Daemon Daemon Publish hostname [ci-4372.1.0-n-1c5282f4e4] Sep 11 00:26:43.021446 waagent[1832]: 2025-09-11T00:26:43.021414Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 11 00:26:43.022800 waagent[1832]: 2025-09-11T00:26:43.022777Z INFO Daemon Daemon Primary interface is [eth0] Sep 11 00:26:43.028664 systemd-networkd[1350]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:26:43.028903 systemd-networkd[1350]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:26:43.028926 systemd-networkd[1350]: eth0: DHCP lease lost Sep 11 00:26:43.029307 waagent[1832]: 2025-09-11T00:26:43.029256Z INFO Daemon Daemon Create user account if not exists Sep 11 00:26:43.030107 waagent[1832]: 2025-09-11T00:26:43.029439Z INFO Daemon Daemon User core already exists, skip useradd Sep 11 00:26:43.030107 waagent[1832]: 2025-09-11T00:26:43.029916Z INFO Daemon Daemon Configure sudoer Sep 11 00:26:43.041083 waagent[1832]: 2025-09-11T00:26:43.041047Z INFO Daemon Daemon Configure sshd Sep 11 00:26:43.046650 systemd-networkd[1350]: eth0: DHCPv4 address 10.200.8.50/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 11 00:26:43.050252 waagent[1832]: 2025-09-11T00:26:43.047398Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 11 00:26:43.050252 waagent[1832]: 2025-09-11T00:26:43.047524Z INFO Daemon Daemon Deploy ssh public key. Sep 11 00:26:43.425300 login[1834]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 11 00:26:43.429352 systemd-logind[1700]: New session 2 of user core. Sep 11 00:26:43.434724 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 11 00:26:44.137941 waagent[1832]: 2025-09-11T00:26:44.137895Z INFO Daemon Daemon Provisioning complete Sep 11 00:26:44.146134 waagent[1832]: 2025-09-11T00:26:44.146107Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 11 00:26:44.146936 waagent[1832]: 2025-09-11T00:26:44.146278Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Sep 11 00:26:44.146936 waagent[1832]: 2025-09-11T00:26:44.146560Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 11 00:26:44.235091 waagent[1917]: 2025-09-11T00:26:44.235038Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 11 00:26:44.235303 waagent[1917]: 2025-09-11T00:26:44.235119Z INFO ExtHandler ExtHandler OS: flatcar 4372.1.0 Sep 11 00:26:44.235303 waagent[1917]: 2025-09-11T00:26:44.235153Z INFO ExtHandler ExtHandler Python: 3.11.12 Sep 11 00:26:44.235303 waagent[1917]: 2025-09-11T00:26:44.235188Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Sep 11 00:26:44.337445 waagent[1917]: 2025-09-11T00:26:44.337401Z INFO ExtHandler ExtHandler Distro: flatcar-4372.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 11 00:26:44.337567 waagent[1917]: 2025-09-11T00:26:44.337544Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 11 00:26:44.337643 waagent[1917]: 2025-09-11T00:26:44.337595Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 11 00:26:44.342667 waagent[1917]: 2025-09-11T00:26:44.342600Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 11 00:26:44.351078 waagent[1917]: 2025-09-11T00:26:44.351049Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 11 00:26:44.351371 waagent[1917]: 2025-09-11T00:26:44.351344Z INFO ExtHandler Sep 11 00:26:44.351409 waagent[1917]: 2025-09-11T00:26:44.351390Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d9e52bc9-1eae-4863-a08b-8f1a1395d6f9 eTag: 16623287782926866144 source: Fabric] Sep 11 00:26:44.351579 waagent[1917]: 2025-09-11T00:26:44.351559Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 11 00:26:44.351880 waagent[1917]: 2025-09-11T00:26:44.351856Z INFO ExtHandler Sep 11 00:26:44.351910 waagent[1917]: 2025-09-11T00:26:44.351892Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 11 00:26:44.360538 waagent[1917]: 2025-09-11T00:26:44.360515Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 11 00:26:44.735643 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#143 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Sep 11 00:26:44.747404 waagent[1917]: 2025-09-11T00:26:44.746834Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B962699D56E4ABC1F0A23A3E77ED6722A340E62E', 'hasPrivateKey': True} Sep 11 00:26:44.747404 waagent[1917]: 2025-09-11T00:26:44.747220Z INFO ExtHandler Fetch goal state completed Sep 11 00:26:44.759928 waagent[1917]: 2025-09-11T00:26:44.759888Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Sep 11 00:26:44.763700 waagent[1917]: 2025-09-11T00:26:44.763646Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1917 Sep 11 00:26:44.763800 waagent[1917]: 2025-09-11T00:26:44.763779Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 11 00:26:44.764010 waagent[1917]: 2025-09-11T00:26:44.763990Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 11 00:26:44.764885 waagent[1917]: 2025-09-11T00:26:44.764860Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 11 00:26:44.765127 waagent[1917]: 2025-09-11T00:26:44.765106Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 11 00:26:44.765215 waagent[1917]: 2025-09-11T00:26:44.765198Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 11 00:26:44.765526 waagent[1917]: 2025-09-11T00:26:44.765507Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 11 00:26:45.350979 waagent[1917]: 2025-09-11T00:26:45.350954Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 11 00:26:45.351230 waagent[1917]: 2025-09-11T00:26:45.351078Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 11 00:26:45.355797 waagent[1917]: 2025-09-11T00:26:45.355696Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 11 00:26:45.360503 systemd[1]: Reload requested from client PID 1934 ('systemctl') (unit waagent.service)... Sep 11 00:26:45.360511 systemd[1]: Reloading... Sep 11 00:26:45.421694 zram_generator::config[1968]: No configuration found. Sep 11 00:26:45.492994 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:26:45.571503 systemd[1]: Reloading finished in 210 ms. 
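
The docker.socket warning during the reload above concerns the legacy /var/run prefix: on a typical systemd layout /var/run is a symlink to /run, so systemd rewrites the socket path instead of failing. A one-line check of that aliasing, assuming the standard symlink:

    import os
    # resolves to /run/docker.sock on systems where /var/run is a symlink to /run
    print(os.path.realpath("/var/run/docker.sock"))
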
Sep 11 00:26:45.588641 waagent[1917]: 2025-09-11T00:26:45.587720Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 11 00:26:45.588641 waagent[1917]: 2025-09-11T00:26:45.587800Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 11 00:26:46.586316 waagent[1917]: 2025-09-11T00:26:46.586266Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 11 00:26:46.586582 waagent[1917]: 2025-09-11T00:26:46.586539Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 11 00:26:46.587199 waagent[1917]: 2025-09-11T00:26:46.587170Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 11 00:26:46.587354 waagent[1917]: 2025-09-11T00:26:46.587316Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 11 00:26:46.587510 waagent[1917]: 2025-09-11T00:26:46.587386Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 11 00:26:46.587657 waagent[1917]: 2025-09-11T00:26:46.587608Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 11 00:26:46.587898 waagent[1917]: 2025-09-11T00:26:46.587867Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 11 00:26:46.587923 waagent[1917]: 2025-09-11T00:26:46.587907Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 11 00:26:46.587975 waagent[1917]: 2025-09-11T00:26:46.587946Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 11 00:26:46.588064 waagent[1917]: 2025-09-11T00:26:46.588049Z INFO EnvHandler ExtHandler Configure routes Sep 11 00:26:46.588096 waagent[1917]: 2025-09-11T00:26:46.588084Z INFO EnvHandler ExtHandler Gateway:None Sep 11 00:26:46.588118 waagent[1917]: 2025-09-11T00:26:46.588110Z INFO EnvHandler ExtHandler Routes:None Sep 11 00:26:46.588475 waagent[1917]: 2025-09-11T00:26:46.588450Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 11 00:26:46.588608 waagent[1917]: 2025-09-11T00:26:46.588578Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 11 00:26:46.589189 waagent[1917]: 2025-09-11T00:26:46.589084Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 11 00:26:46.589256 waagent[1917]: 2025-09-11T00:26:46.589189Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
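
The "Checking if log collection is allowed" entry above spells out its own boolean rule; with the bracketed values it reports, the conjunction evaluates to False, which is why log collection stays off. The same logic written out in Python for clarity (variable names are descriptive, not the agent's own):

    configuration_enabled = True           # condition 1 [True]
    cgroups_v1_enabled = False             # condition 2, first branch [False]
    cgroups_v2_limiting_enabled = False    # condition 2, second branch [False]
    python_supported = True                # condition 3 [True]

    allowed = (
        configuration_enabled
        and (cgroups_v1_enabled or cgroups_v2_limiting_enabled)
        and python_supported
    )
    print(allowed)  # False, matching the [False] verdict in the log entry
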
Sep 11 00:26:46.590681 waagent[1917]: 2025-09-11T00:26:46.589870Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 11 00:26:46.590681 waagent[1917]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 11 00:26:46.590681 waagent[1917]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 11 00:26:46.590681 waagent[1917]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 11 00:26:46.590681 waagent[1917]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 11 00:26:46.590681 waagent[1917]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 11 00:26:46.590681 waagent[1917]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 11 00:26:46.590954 waagent[1917]: 2025-09-11T00:26:46.590916Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 11 00:26:46.596430 waagent[1917]: 2025-09-11T00:26:46.596400Z INFO ExtHandler ExtHandler Sep 11 00:26:46.596479 waagent[1917]: 2025-09-11T00:26:46.596456Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 22585c03-2737-4ad7-8126-0d4860dd9560 correlation 341004bf-73a8-4b7c-b094-6657949c3a05 created: 2025-09-11T00:25:45.016937Z] Sep 11 00:26:46.596779 waagent[1917]: 2025-09-11T00:26:46.596755Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 11 00:26:46.597207 waagent[1917]: 2025-09-11T00:26:46.597183Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Sep 11 00:26:46.652640 waagent[1917]: 2025-09-11T00:26:46.652482Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 11 00:26:46.652640 waagent[1917]: Try `iptables -h' or 'iptables --help' for more information.) 
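
The routing table dump above uses /proc/net/route's raw format, where each IPv4 address is a little-endian hex word. A short Python sketch for decoding those columns; the example values come straight from the rows above and line up with the DHCP lease (gateway 10.200.8.1), the WireServer host route (168.63.129.16), and the instance metadata address:

    import socket
    import struct

    def decode(hexaddr: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian hex words
        return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

    print(decode("0108C80A"))  # 10.200.8.1      (default gateway)
    print(decode("10813FA8"))  # 168.63.129.16   (WireServer host route)
    print(decode("FEA9FEA9"))  # 169.254.169.254 (instance metadata host route)
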
Sep 11 00:26:46.652832 waagent[1917]: 2025-09-11T00:26:46.652803Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A03B320B-AF4F-4DF7-AC7C-541145FDEAD8;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 11 00:26:46.701602 waagent[1917]: 2025-09-11T00:26:46.701562Z INFO MonitorHandler ExtHandler Network interfaces: Sep 11 00:26:46.701602 waagent[1917]: Executing ['ip', '-a', '-o', 'link']: Sep 11 00:26:46.701602 waagent[1917]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 11 00:26:46.701602 waagent[1917]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:76:d9:77 brd ff:ff:ff:ff:ff:ff\ alias Network Device Sep 11 00:26:46.701602 waagent[1917]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:76:d9:77 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Sep 11 00:26:46.701602 waagent[1917]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 11 00:26:46.701602 waagent[1917]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 11 00:26:46.701602 waagent[1917]: 2: eth0 inet 10.200.8.50/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 11 00:26:46.701602 waagent[1917]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 11 00:26:46.701602 waagent[1917]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 11 00:26:46.701602 waagent[1917]: 2: eth0 inet6 fe80::7eed:8dff:fe76:d977/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 11 00:26:46.806746 waagent[1917]: 2025-09-11T00:26:46.806705Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 11 00:26:46.806746 waagent[1917]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 11 00:26:46.806746 waagent[1917]: pkts bytes target prot opt in out source destination Sep 11 00:26:46.806746 waagent[1917]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 11 00:26:46.806746 waagent[1917]: pkts bytes target prot opt in out source destination Sep 11 00:26:46.806746 waagent[1917]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 11 00:26:46.806746 waagent[1917]: pkts bytes target prot opt in out source destination Sep 11 00:26:46.806746 waagent[1917]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 11 00:26:46.806746 waagent[1917]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 11 00:26:46.806746 waagent[1917]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 11 00:26:46.809063 waagent[1917]: 2025-09-11T00:26:46.809024Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 11 00:26:46.809063 waagent[1917]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 11 00:26:46.809063 waagent[1917]: pkts bytes target prot opt in out source destination Sep 11 00:26:46.809063 waagent[1917]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 11 00:26:46.809063 waagent[1917]: pkts bytes target prot opt in out source destination Sep 11 00:26:46.809063 waagent[1917]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 11 00:26:46.809063 waagent[1917]: pkts bytes target prot opt in out source destination Sep 11 00:26:46.809063 waagent[1917]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 11 00:26:46.809063 waagent[1917]: 0 0 ACCEPT 
tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 11 00:26:46.809063 waagent[1917]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 11 00:26:50.189305 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 00:26:50.191036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:26:56.562361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:26:56.572837 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:26:56.605664 kubelet[2070]: E0911 00:26:56.605605 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:26:56.608233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:26:56.608340 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:26:56.608600 systemd[1]: kubelet.service: Consumed 122ms CPU time, 109.9M memory peak. Sep 11 00:27:01.949743 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 00:27:01.950755 systemd[1]: Started sshd@0-10.200.8.50:22-10.200.16.10:37532.service - OpenSSH per-connection server daemon (10.200.16.10:37532). Sep 11 00:27:02.031276 chronyd[1705]: Selected source PHC0 Sep 11 00:27:02.678924 sshd[2078]: Accepted publickey for core from 10.200.16.10 port 37532 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:27:02.679830 sshd-session[2078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:27:02.683532 systemd-logind[1700]: New session 3 of user core. Sep 11 00:27:02.688724 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:27:03.233927 systemd[1]: Started sshd@1-10.200.8.50:22-10.200.16.10:37546.service - OpenSSH per-connection server daemon (10.200.16.10:37546). Sep 11 00:27:03.872011 sshd[2083]: Accepted publickey for core from 10.200.16.10 port 37546 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:27:03.872895 sshd-session[2083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:27:03.876580 systemd-logind[1700]: New session 4 of user core. Sep 11 00:27:03.885734 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 00:27:04.318909 sshd[2085]: Connection closed by 10.200.16.10 port 37546 Sep 11 00:27:04.319288 sshd-session[2083]: pam_unix(sshd:session): session closed for user core Sep 11 00:27:04.321590 systemd[1]: sshd@1-10.200.8.50:22-10.200.16.10:37546.service: Deactivated successfully. Sep 11 00:27:04.322901 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:27:04.323422 systemd-logind[1700]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:27:04.324413 systemd-logind[1700]: Removed session 4. Sep 11 00:27:04.430572 systemd[1]: Started sshd@2-10.200.8.50:22-10.200.16.10:37562.service - OpenSSH per-connection server daemon (10.200.16.10:37562). 
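
The three OUTPUT rules listed above are the firewall rules the agent reports creating for the Azure fabric address: allow DNS to 168.63.129.16, allow root-owned traffic to it, and drop any other new connection. A hedged Python sketch of equivalent iptables invocations, assuming the security table that the agent's own (failed) listing command above was targeting; run as root, and note the rule order matters:

    import subprocess

    WIRESERVER = "168.63.129.16"
    RULES = [
        ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        # -w waits for the xtables lock, as in the agent's own invocation above
        subprocess.run(["iptables", "-w", "-t", "security", "-A", "OUTPUT", *rule], check=True)
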
Sep 11 00:27:05.067059 sshd[2091]: Accepted publickey for core from 10.200.16.10 port 37562 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:27:05.067926 sshd-session[2091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:27:05.071670 systemd-logind[1700]: New session 5 of user core. Sep 11 00:27:05.076750 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 11 00:27:05.518423 sshd[2093]: Connection closed by 10.200.16.10 port 37562 Sep 11 00:27:05.518807 sshd-session[2091]: pam_unix(sshd:session): session closed for user core Sep 11 00:27:05.520861 systemd[1]: sshd@2-10.200.8.50:22-10.200.16.10:37562.service: Deactivated successfully. Sep 11 00:27:05.522202 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:27:05.523844 systemd-logind[1700]: Session 5 logged out. Waiting for processes to exit. Sep 11 00:27:05.524477 systemd-logind[1700]: Removed session 5. Sep 11 00:27:05.629590 systemd[1]: Started sshd@3-10.200.8.50:22-10.200.16.10:37570.service - OpenSSH per-connection server daemon (10.200.16.10:37570). Sep 11 00:27:06.265460 sshd[2099]: Accepted publickey for core from 10.200.16.10 port 37570 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:27:06.266280 sshd-session[2099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:27:06.269952 systemd-logind[1700]: New session 6 of user core. Sep 11 00:27:06.275724 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 11 00:27:06.616000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 11 00:27:06.617469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:27:06.716502 sshd[2101]: Connection closed by 10.200.16.10 port 37570 Sep 11 00:27:06.716875 sshd-session[2099]: pam_unix(sshd:session): session closed for user core Sep 11 00:27:06.719207 systemd[1]: sshd@3-10.200.8.50:22-10.200.16.10:37570.service: Deactivated successfully. Sep 11 00:27:06.720404 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:27:06.721236 systemd-logind[1700]: Session 6 logged out. Waiting for processes to exit. Sep 11 00:27:06.722059 systemd-logind[1700]: Removed session 6. Sep 11 00:27:06.831507 systemd[1]: Started sshd@4-10.200.8.50:22-10.200.16.10:37584.service - OpenSSH per-connection server daemon (10.200.16.10:37584). Sep 11 00:27:07.119358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:27:07.121914 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:27:07.153133 kubelet[2117]: E0911 00:27:07.153104 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:27:07.154636 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:27:07.154740 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:27:07.154989 systemd[1]: kubelet.service: Consumed 110ms CPU time, 110.2M memory peak. 
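
kubelet keeps failing for the same reason each time: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits with status 1 and systemd schedules another restart (counter now at 2). A small Python sketch of the same precondition check plus the unit's restart counter, assuming the file will normally be written later by whatever bootstraps the node (e.g. kubeadm); NRestarts is a standard systemd unit property:

    import os
    import subprocess

    CONFIG = "/var/lib/kubelet/config.yaml"
    if not os.path.exists(CONFIG):
        print(f"{CONFIG} is missing; kubelet will keep crash-looping until it is written")

    # restart counter kept by systemd for the unit
    out = subprocess.run(
        ["systemctl", "show", "kubelet.service", "--property=NRestarts"],
        capture_output=True, text=True,
    )
    print(out.stdout.strip())  # e.g. NRestarts=2, matching the scheduled-restart message above
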
Sep 11 00:27:07.467348 sshd[2110]: Accepted publickey for core from 10.200.16.10 port 37584 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:27:07.468146 sshd-session[2110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:27:07.471920 systemd-logind[1700]: New session 7 of user core. Sep 11 00:27:07.474733 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 00:27:07.928421 sudo[2125]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:27:07.928628 sudo[2125]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:27:07.950373 sudo[2125]: pam_unix(sudo:session): session closed for user root Sep 11 00:27:08.053461 sshd[2124]: Connection closed by 10.200.16.10 port 37584 Sep 11 00:27:08.053931 sshd-session[2110]: pam_unix(sshd:session): session closed for user core Sep 11 00:27:08.056497 systemd[1]: sshd@4-10.200.8.50:22-10.200.16.10:37584.service: Deactivated successfully. Sep 11 00:27:08.057765 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 00:27:08.058336 systemd-logind[1700]: Session 7 logged out. Waiting for processes to exit. Sep 11 00:27:08.059265 systemd-logind[1700]: Removed session 7. Sep 11 00:27:08.185658 systemd[1]: Started sshd@5-10.200.8.50:22-10.200.16.10:37588.service - OpenSSH per-connection server daemon (10.200.16.10:37588). Sep 11 00:27:08.826486 sshd[2131]: Accepted publickey for core from 10.200.16.10 port 37588 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:27:08.827376 sshd-session[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:27:08.831225 systemd-logind[1700]: New session 8 of user core. Sep 11 00:27:08.836743 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 11 00:27:09.175111 sudo[2135]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:27:09.175311 sudo[2135]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:27:09.181092 sudo[2135]: pam_unix(sudo:session): session closed for user root Sep 11 00:27:09.184511 sudo[2134]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:27:09.184712 sudo[2134]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:27:09.191493 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:27:09.221468 augenrules[2157]: No rules Sep 11 00:27:09.222315 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:27:09.222451 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:27:09.223223 sudo[2134]: pam_unix(sudo:session): session closed for user root Sep 11 00:27:09.326888 sshd[2133]: Connection closed by 10.200.16.10 port 37588 Sep 11 00:27:09.327214 sshd-session[2131]: pam_unix(sshd:session): session closed for user core Sep 11 00:27:09.329108 systemd[1]: sshd@5-10.200.8.50:22-10.200.16.10:37588.service: Deactivated successfully. Sep 11 00:27:09.330346 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:27:09.331745 systemd-logind[1700]: Session 8 logged out. Waiting for processes to exit. Sep 11 00:27:09.332366 systemd-logind[1700]: Removed session 8. Sep 11 00:27:09.437555 systemd[1]: Started sshd@6-10.200.8.50:22-10.200.16.10:37594.service - OpenSSH per-connection server daemon (10.200.16.10:37594). 
Sep 11 00:27:10.083373 sshd[2166]: Accepted publickey for core from 10.200.16.10 port 37594 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:27:10.084222 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:27:10.087925 systemd-logind[1700]: New session 9 of user core. Sep 11 00:27:10.092754 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 00:27:10.430154 sudo[2169]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:27:10.430345 sudo[2169]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:27:11.429045 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:27:11.437935 (dockerd)[2188]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:27:11.939387 dockerd[2188]: time="2025-09-11T00:27:11.939342067Z" level=info msg="Starting up" Sep 11 00:27:11.940913 dockerd[2188]: time="2025-09-11T00:27:11.940889082Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:27:12.073206 dockerd[2188]: time="2025-09-11T00:27:12.073182067Z" level=info msg="Loading containers: start." Sep 11 00:27:12.098634 kernel: Initializing XFRM netlink socket Sep 11 00:27:12.343895 systemd-networkd[1350]: docker0: Link UP Sep 11 00:27:12.362476 dockerd[2188]: time="2025-09-11T00:27:12.362453629Z" level=info msg="Loading containers: done." Sep 11 00:27:12.373008 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4086401083-merged.mount: Deactivated successfully. Sep 11 00:27:12.379320 dockerd[2188]: time="2025-09-11T00:27:12.379293184Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:27:12.379385 dockerd[2188]: time="2025-09-11T00:27:12.379348489Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 11 00:27:12.379429 dockerd[2188]: time="2025-09-11T00:27:12.379417123Z" level=info msg="Initializing buildkit" Sep 11 00:27:12.424075 dockerd[2188]: time="2025-09-11T00:27:12.424052800Z" level=info msg="Completed buildkit initialization" Sep 11 00:27:12.430354 dockerd[2188]: time="2025-09-11T00:27:12.430313529Z" level=info msg="Daemon has completed initialization" Sep 11 00:27:12.430354 dockerd[2188]: time="2025-09-11T00:27:12.430395105Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:27:12.431180 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:27:13.462023 containerd[1746]: time="2025-09-11T00:27:13.461987534Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 11 00:27:14.242294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4245886525.mount: Deactivated successfully. 
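Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that unix socket. A minimal sketch using only the Go standard library; GET /_ping is the Engine API's liveness endpoint, and the socket path is the one from the log. Run it as root (or a member of the docker group) on the node.

    // dockerping.go - pings the Docker Engine API over the unix socket reported above.
    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
        "os"
    )

    func main() {
        tr := &http.Transport{
            // Route every request over the daemon's unix socket instead of TCP.
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
            },
        }
        client := &http.Client{Transport: tr}
        resp, err := client.Get("http://docker/_ping") // the host part is ignored for unix sockets
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s %s\n", resp.Status, body) // expect "200 OK OK"
    }
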
Sep 11 00:27:15.288810 containerd[1746]: time="2025-09-11T00:27:15.288770025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:15.291040 containerd[1746]: time="2025-09-11T00:27:15.291010029Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901" Sep 11 00:27:15.293707 containerd[1746]: time="2025-09-11T00:27:15.293673241Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:15.297133 containerd[1746]: time="2025-09-11T00:27:15.297096623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:15.297780 containerd[1746]: time="2025-09-11T00:27:15.297627126Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.835594347s" Sep 11 00:27:15.297780 containerd[1746]: time="2025-09-11T00:27:15.297655988Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 11 00:27:15.298206 containerd[1746]: time="2025-09-11T00:27:15.298180855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 11 00:27:16.563762 containerd[1746]: time="2025-09-11T00:27:16.563728984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:16.566145 containerd[1746]: time="2025-09-11T00:27:16.566122341Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020852" Sep 11 00:27:16.569200 containerd[1746]: time="2025-09-11T00:27:16.569166197Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:16.573234 containerd[1746]: time="2025-09-11T00:27:16.573191343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:16.573810 containerd[1746]: time="2025-09-11T00:27:16.573714459Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.27550751s" Sep 11 00:27:16.573810 containerd[1746]: time="2025-09-11T00:27:16.573740825Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 11 00:27:16.574212 
containerd[1746]: time="2025-09-11T00:27:16.574192726Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 11 00:27:17.189301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 11 00:27:17.191312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:27:17.815457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:27:17.818061 (kubelet)[2456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:27:17.846527 kubelet[2456]: E0911 00:27:17.846498 2456 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:27:17.847946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:27:17.848049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:27:17.848287 systemd[1]: kubelet.service: Consumed 115ms CPU time, 109.8M memory peak. Sep 11 00:27:18.247560 containerd[1746]: time="2025-09-11T00:27:18.247525947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:18.250425 containerd[1746]: time="2025-09-11T00:27:18.250254991Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576" Sep 11 00:27:18.254560 containerd[1746]: time="2025-09-11T00:27:18.254539870Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:18.262193 containerd[1746]: time="2025-09-11T00:27:18.262172063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:18.262744 containerd[1746]: time="2025-09-11T00:27:18.262726810Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.688461492s" Sep 11 00:27:18.262821 containerd[1746]: time="2025-09-11T00:27:18.262810526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 11 00:27:18.263518 containerd[1746]: time="2025-09-11T00:27:18.263342554Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 11 00:27:19.147438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289818106.mount: Deactivated successfully. 
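The ImageCreate/Pulled events above come from containerd's CRI plugin fetching the control-plane images into the k8s.io namespace. A hedged sketch of the same kind of pull through the containerd Go client; the module path and API surface are assumed from the 1.x client (containerd 2.x reorganizes the packages), while the socket path and image reference mirror the log:

    // pull.go - a sketch of a CRI-style image pull in the "k8s.io" namespace.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes images live in the "k8s.io" namespace, as the events above show.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/kube-scheduler:v1.33.5", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        size, _ := img.Size(ctx)
        fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
    }
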
Sep 11 00:27:19.468370 containerd[1746]: time="2025-09-11T00:27:19.468335603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:19.470750 containerd[1746]: time="2025-09-11T00:27:19.470709310Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477" Sep 11 00:27:19.473473 containerd[1746]: time="2025-09-11T00:27:19.473438380Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:19.476903 containerd[1746]: time="2025-09-11T00:27:19.476868473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:19.477295 containerd[1746]: time="2025-09-11T00:27:19.477106847Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.213738887s" Sep 11 00:27:19.477295 containerd[1746]: time="2025-09-11T00:27:19.477132056Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 11 00:27:19.477542 containerd[1746]: time="2025-09-11T00:27:19.477528359Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 11 00:27:20.114376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1690866895.mount: Deactivated successfully. 
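Each pull above ends with a "Pulled image ... in <duration>" line, so the journal itself can serve as a rough source of pull-latency data. A small standard-library sketch that extracts the image name and duration from such lines; the regular expression is written for the backslash-escaped quotes seen in this journal dump and is an illustration, not part of any tool in the log. Feed it journal text on stdin, e.g. journalctl --no-pager | go run pulltimes.go.

    // pulltimes.go - scrapes "Pulled image ... in <duration>" lines from journal text.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    var pulled = regexp.MustCompile(`Pulled image \\?"([^"\\]+)\\?".* in ([0-9.]+m?s)`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := pulled.FindStringSubmatch(sc.Text()); m != nil {
                d, err := time.ParseDuration(m[2])
                if err != nil {
                    continue
                }
                fmt.Printf("%-55s %v\n", m[1], d)
            }
        }
    }
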
Sep 11 00:27:20.951036 containerd[1746]: time="2025-09-11T00:27:20.950999723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:20.953175 containerd[1746]: time="2025-09-11T00:27:20.953146531Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Sep 11 00:27:20.955926 containerd[1746]: time="2025-09-11T00:27:20.955863976Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:20.959104 containerd[1746]: time="2025-09-11T00:27:20.959068987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:20.959826 containerd[1746]: time="2025-09-11T00:27:20.959628485Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.482043796s" Sep 11 00:27:20.959826 containerd[1746]: time="2025-09-11T00:27:20.959657549Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 11 00:27:20.960026 containerd[1746]: time="2025-09-11T00:27:20.960012542Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:27:21.502095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911349978.mount: Deactivated successfully. 
Sep 11 00:27:21.516283 containerd[1746]: time="2025-09-11T00:27:21.516252512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:27:21.518609 containerd[1746]: time="2025-09-11T00:27:21.518592410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 11 00:27:21.521382 containerd[1746]: time="2025-09-11T00:27:21.521349324Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:27:21.524677 containerd[1746]: time="2025-09-11T00:27:21.524640312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:27:21.525118 containerd[1746]: time="2025-09-11T00:27:21.524980720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 564.94396ms" Sep 11 00:27:21.525118 containerd[1746]: time="2025-09-11T00:27:21.525006206Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:27:21.525497 containerd[1746]: time="2025-09-11T00:27:21.525478171Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 11 00:27:22.142729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3581512417.mount: Deactivated successfully. Sep 11 00:27:23.107630 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Sep 11 00:27:23.723607 containerd[1746]: time="2025-09-11T00:27:23.723568630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:23.726206 containerd[1746]: time="2025-09-11T00:27:23.726175061Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441" Sep 11 00:27:23.729700 containerd[1746]: time="2025-09-11T00:27:23.729671064Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:23.734493 containerd[1746]: time="2025-09-11T00:27:23.733730682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:23.734493 containerd[1746]: time="2025-09-11T00:27:23.734384948Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.208885266s" Sep 11 00:27:23.734493 containerd[1746]: time="2025-09-11T00:27:23.734408886Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 11 00:27:24.010811 update_engine[1702]: I20250911 00:27:24.010679 1702 update_attempter.cc:509] Updating boot flags... Sep 11 00:27:26.352857 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:27:26.352985 systemd[1]: kubelet.service: Consumed 115ms CPU time, 109.8M memory peak. Sep 11 00:27:26.354917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:27:26.380394 systemd[1]: Reload requested from client PID 2659 ('systemctl') (unit session-9.scope)... Sep 11 00:27:26.380498 systemd[1]: Reloading... Sep 11 00:27:26.458639 zram_generator::config[2708]: No configuration found. Sep 11 00:27:26.573815 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:27:26.657223 systemd[1]: Reloading finished in 276 ms. Sep 11 00:27:26.696912 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:27:26.696990 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:27:26.697234 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:27:26.697281 systemd[1]: kubelet.service: Consumed 67ms CPU time, 78M memory peak. Sep 11 00:27:26.698484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:27:27.325609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:27:27.333841 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:27:27.367097 kubelet[2772]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:27:27.367293 kubelet[2772]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:27:27.367293 kubelet[2772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:27:27.367345 kubelet[2772]: I0911 00:27:27.367328 2772 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:27:28.031240 kubelet[2772]: I0911 00:27:28.031205 2772 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 11 00:27:28.031240 kubelet[2772]: I0911 00:27:28.031227 2772 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:27:28.031435 kubelet[2772]: I0911 00:27:28.031420 2772 server.go:956] "Client rotation is on, will bootstrap in background" Sep 11 00:27:28.063429 kubelet[2772]: E0911 00:27:28.063395 2772 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 11 00:27:28.068236 kubelet[2772]: I0911 00:27:28.068203 2772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:27:28.098742 kubelet[2772]: I0911 00:27:28.098725 2772 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:27:28.101420 kubelet[2772]: I0911 00:27:28.101402 2772 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:27:28.101587 kubelet[2772]: I0911 00:27:28.101566 2772 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:27:28.101768 kubelet[2772]: I0911 00:27:28.101584 2772 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-1c5282f4e4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:27:28.101872 kubelet[2772]: I0911 00:27:28.101776 2772 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:27:28.101872 kubelet[2772]: I0911 00:27:28.101785 2772 container_manager_linux.go:303] "Creating device plugin manager" Sep 11 00:27:28.101914 kubelet[2772]: I0911 00:27:28.101882 2772 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:27:28.104913 kubelet[2772]: I0911 00:27:28.104892 2772 kubelet.go:480] "Attempting to sync node with API server" Sep 11 00:27:28.104965 kubelet[2772]: I0911 00:27:28.104915 2772 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:27:28.104965 kubelet[2772]: I0911 00:27:28.104939 2772 kubelet.go:386] "Adding apiserver pod source" Sep 11 00:27:28.106830 kubelet[2772]: I0911 00:27:28.106610 2772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:27:28.114444 kubelet[2772]: E0911 00:27:28.114420 2772 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-1c5282f4e4&limit=500&resourceVersion=0\": dial tcp 10.200.8.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 11 00:27:28.114872 kubelet[2772]: E0911 00:27:28.114853 2772 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Sep 11 00:27:28.115002 kubelet[2772]: I0911 00:27:28.114993 2772 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:27:28.115438 kubelet[2772]: I0911 00:27:28.115427 2772 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 11 00:27:28.115993 kubelet[2772]: W0911 00:27:28.115982 2772 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 11 00:27:28.118494 kubelet[2772]: I0911 00:27:28.118338 2772 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:27:28.118494 kubelet[2772]: I0911 00:27:28.118380 2772 server.go:1289] "Started kubelet" Sep 11 00:27:28.121734 kubelet[2772]: I0911 00:27:28.121700 2772 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:27:28.122263 kubelet[2772]: I0911 00:27:28.122242 2772 server.go:317] "Adding debug handlers to kubelet server" Sep 11 00:27:28.125903 kubelet[2772]: I0911 00:27:28.125788 2772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:27:28.125903 kubelet[2772]: I0911 00:27:28.125779 2772 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:27:28.125984 kubelet[2772]: I0911 00:27:28.125939 2772 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:27:28.127791 kubelet[2772]: E0911 00:27:28.126558 2772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.50:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.50:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-n-1c5282f4e4.186412d216ec4f56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-n-1c5282f4e4,UID:ci-4372.1.0-n-1c5282f4e4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-n-1c5282f4e4,},FirstTimestamp:2025-09-11 00:27:28.11835375 +0000 UTC m=+0.781029420,LastTimestamp:2025-09-11 00:27:28.11835375 +0000 UTC m=+0.781029420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-n-1c5282f4e4,}" Sep 11 00:27:28.129162 kubelet[2772]: I0911 00:27:28.128848 2772 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:27:28.130741 kubelet[2772]: E0911 00:27:28.130721 2772 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:27:28.131191 kubelet[2772]: E0911 00:27:28.131176 2772 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-1c5282f4e4\" not found" Sep 11 00:27:28.131249 kubelet[2772]: I0911 00:27:28.131204 2772 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:27:28.131380 kubelet[2772]: I0911 00:27:28.131369 2772 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:27:28.131418 kubelet[2772]: I0911 00:27:28.131412 2772 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:27:28.131714 kubelet[2772]: E0911 00:27:28.131696 2772 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 11 00:27:28.131927 kubelet[2772]: E0911 00:27:28.131892 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-1c5282f4e4?timeout=10s\": dial tcp 10.200.8.50:6443: connect: connection refused" interval="200ms" Sep 11 00:27:28.132369 kubelet[2772]: I0911 00:27:28.132353 2772 factory.go:223] Registration of the systemd container factory successfully Sep 11 00:27:28.132429 kubelet[2772]: I0911 00:27:28.132417 2772 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:27:28.133605 kubelet[2772]: I0911 00:27:28.133216 2772 factory.go:223] Registration of the containerd container factory successfully Sep 11 00:27:28.152404 kubelet[2772]: I0911 00:27:28.152389 2772 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:27:28.152404 kubelet[2772]: I0911 00:27:28.152398 2772 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:27:28.152514 kubelet[2772]: I0911 00:27:28.152428 2772 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:27:28.158532 kubelet[2772]: I0911 00:27:28.158474 2772 policy_none.go:49] "None policy: Start" Sep 11 00:27:28.158532 kubelet[2772]: I0911 00:27:28.158491 2772 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:27:28.158532 kubelet[2772]: I0911 00:27:28.158500 2772 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:27:28.161135 kubelet[2772]: I0911 00:27:28.161113 2772 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 11 00:27:28.162176 kubelet[2772]: I0911 00:27:28.162148 2772 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 11 00:27:28.162176 kubelet[2772]: I0911 00:27:28.162164 2772 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 11 00:27:28.162176 kubelet[2772]: I0911 00:27:28.162177 2772 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
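Every "connection refused" error above points at the same condition: the kubelet itself is up (listening on 10250), but the API server it bootstraps against, https://10.200.8.50:6443, is not serving yet, so the certificate request, the reflectors, and the lease controller all retry. A standard-library sketch of that wait loop; the address and retry interval are taken from the log, and this is an illustration rather than kubelet code:

    // apiwait.go - waits until the API server endpoint accepts TCP connections.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const endpoint = "10.200.8.50:6443" // address taken from the log
        for {
            conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("API server is accepting connections")
                return
            }
            fmt.Println("not ready yet:", err)
            time.Sleep(200 * time.Millisecond) // the lease controller above retries on a similar interval
        }
    }
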
Sep 11 00:27:28.162268 kubelet[2772]: I0911 00:27:28.162183 2772 kubelet.go:2436] "Starting kubelet main sync loop" Sep 11 00:27:28.162268 kubelet[2772]: E0911 00:27:28.162210 2772 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:27:28.164986 kubelet[2772]: E0911 00:27:28.164925 2772 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 11 00:27:28.167752 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 00:27:28.179263 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 00:27:28.198427 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:27:28.199596 kubelet[2772]: E0911 00:27:28.199574 2772 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 11 00:27:28.199729 kubelet[2772]: I0911 00:27:28.199713 2772 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:27:28.199959 kubelet[2772]: I0911 00:27:28.199727 2772 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:27:28.199959 kubelet[2772]: I0911 00:27:28.199871 2772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:27:28.201058 kubelet[2772]: E0911 00:27:28.201041 2772 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 00:27:28.201115 kubelet[2772]: E0911 00:27:28.201083 2772 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.1.0-n-1c5282f4e4\" not found" Sep 11 00:27:28.273684 systemd[1]: Created slice kubepods-burstable-pod438950d9c79650aa1c385ff34e73b424.slice - libcontainer container kubepods-burstable-pod438950d9c79650aa1c385ff34e73b424.slice. Sep 11 00:27:28.279148 kubelet[2772]: E0911 00:27:28.279122 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-1c5282f4e4\" not found" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.282950 systemd[1]: Created slice kubepods-burstable-pod7a2d56f47b77285d9ed9059696fbf170.slice - libcontainer container kubepods-burstable-pod7a2d56f47b77285d9ed9059696fbf170.slice. Sep 11 00:27:28.285344 kubelet[2772]: E0911 00:27:28.285111 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-1c5282f4e4\" not found" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.287357 systemd[1]: Created slice kubepods-burstable-pod846e9997d7e90a019ff6d1c799a9735a.slice - libcontainer container kubepods-burstable-pod846e9997d7e90a019ff6d1c799a9735a.slice. 
Sep 11 00:27:28.288472 kubelet[2772]: E0911 00:27:28.288454 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-1c5282f4e4\" not found" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.301577 kubelet[2772]: I0911 00:27:28.301562 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.301827 kubelet[2772]: E0911 00:27:28.301806 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.50:6443/api/v1/nodes\": dial tcp 10.200.8.50:6443: connect: connection refused" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.332235 kubelet[2772]: I0911 00:27:28.332107 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/438950d9c79650aa1c385ff34e73b424-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-1c5282f4e4\" (UID: \"438950d9c79650aa1c385ff34e73b424\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.332235 kubelet[2772]: I0911 00:27:28.332134 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a2d56f47b77285d9ed9059696fbf170-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-1c5282f4e4\" (UID: \"7a2d56f47b77285d9ed9059696fbf170\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.332235 kubelet[2772]: I0911 00:27:28.332153 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a2d56f47b77285d9ed9059696fbf170-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-1c5282f4e4\" (UID: \"7a2d56f47b77285d9ed9059696fbf170\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.332235 kubelet[2772]: I0911 00:27:28.332168 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a2d56f47b77285d9ed9059696fbf170-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-1c5282f4e4\" (UID: \"7a2d56f47b77285d9ed9059696fbf170\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.332235 kubelet[2772]: I0911 00:27:28.332184 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/846e9997d7e90a019ff6d1c799a9735a-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-1c5282f4e4\" (UID: \"846e9997d7e90a019ff6d1c799a9735a\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.332398 kubelet[2772]: I0911 00:27:28.332217 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/438950d9c79650aa1c385ff34e73b424-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-1c5282f4e4\" (UID: \"438950d9c79650aa1c385ff34e73b424\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.332398 kubelet[2772]: I0911 00:27:28.332233 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a2d56f47b77285d9ed9059696fbf170-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4372.1.0-n-1c5282f4e4\" (UID: \"7a2d56f47b77285d9ed9059696fbf170\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.332398 kubelet[2772]: I0911 00:27:28.332248 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a2d56f47b77285d9ed9059696fbf170-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-1c5282f4e4\" (UID: \"7a2d56f47b77285d9ed9059696fbf170\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.332398 kubelet[2772]: I0911 00:27:28.332260 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/438950d9c79650aa1c385ff34e73b424-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-1c5282f4e4\" (UID: \"438950d9c79650aa1c385ff34e73b424\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.332398 kubelet[2772]: E0911 00:27:28.332355 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-1c5282f4e4?timeout=10s\": dial tcp 10.200.8.50:6443: connect: connection refused" interval="400ms" Sep 11 00:27:28.503532 kubelet[2772]: I0911 00:27:28.503515 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.503891 kubelet[2772]: E0911 00:27:28.503804 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.50:6443/api/v1/nodes\": dial tcp 10.200.8.50:6443: connect: connection refused" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.580895 containerd[1746]: time="2025-09-11T00:27:28.580824817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-1c5282f4e4,Uid:438950d9c79650aa1c385ff34e73b424,Namespace:kube-system,Attempt:0,}" Sep 11 00:27:28.586302 containerd[1746]: time="2025-09-11T00:27:28.586276158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-1c5282f4e4,Uid:7a2d56f47b77285d9ed9059696fbf170,Namespace:kube-system,Attempt:0,}" Sep 11 00:27:28.589377 containerd[1746]: time="2025-09-11T00:27:28.589222812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-1c5282f4e4,Uid:846e9997d7e90a019ff6d1c799a9735a,Namespace:kube-system,Attempt:0,}" Sep 11 00:27:28.670422 containerd[1746]: time="2025-09-11T00:27:28.670391413Z" level=info msg="connecting to shim 255b27bdb549db7c017c0f3b9f21072ee3a982c04f9c0fadb8804840a9d3bd88" address="unix:///run/containerd/s/39284ac0d26182eda52b1fb87486f8ea267cd1e5605aae8e33ada4535861f33b" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:27:28.677094 containerd[1746]: time="2025-09-11T00:27:28.677035004Z" level=info msg="connecting to shim 472b1509bab65caee70d9eb036744a4417d14d3e53a5196ef6370051a54467ff" address="unix:///run/containerd/s/5eb87439577c3e83141f433240cfacd416942891f30a6d122b08f0ff68636087" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:27:28.699021 containerd[1746]: time="2025-09-11T00:27:28.698990608Z" level=info msg="connecting to shim 3cc8ca2535b7e57f9d2504a377674ffcb1883fc1e6768f1cc23c4ecb149f84c6" address="unix:///run/containerd/s/1ae40f8441f9889abfd1a9726ebf7c174ed44cd6107f7863eb07df95a39a78ee" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:27:28.711771 systemd[1]: Started 
cri-containerd-255b27bdb549db7c017c0f3b9f21072ee3a982c04f9c0fadb8804840a9d3bd88.scope - libcontainer container 255b27bdb549db7c017c0f3b9f21072ee3a982c04f9c0fadb8804840a9d3bd88. Sep 11 00:27:28.715912 systemd[1]: Started cri-containerd-472b1509bab65caee70d9eb036744a4417d14d3e53a5196ef6370051a54467ff.scope - libcontainer container 472b1509bab65caee70d9eb036744a4417d14d3e53a5196ef6370051a54467ff. Sep 11 00:27:28.728746 systemd[1]: Started cri-containerd-3cc8ca2535b7e57f9d2504a377674ffcb1883fc1e6768f1cc23c4ecb149f84c6.scope - libcontainer container 3cc8ca2535b7e57f9d2504a377674ffcb1883fc1e6768f1cc23c4ecb149f84c6. Sep 11 00:27:28.733087 kubelet[2772]: E0911 00:27:28.732907 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-1c5282f4e4?timeout=10s\": dial tcp 10.200.8.50:6443: connect: connection refused" interval="800ms" Sep 11 00:27:28.773255 containerd[1746]: time="2025-09-11T00:27:28.773232913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-1c5282f4e4,Uid:846e9997d7e90a019ff6d1c799a9735a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cc8ca2535b7e57f9d2504a377674ffcb1883fc1e6768f1cc23c4ecb149f84c6\"" Sep 11 00:27:28.785772 containerd[1746]: time="2025-09-11T00:27:28.785752649Z" level=info msg="CreateContainer within sandbox \"3cc8ca2535b7e57f9d2504a377674ffcb1883fc1e6768f1cc23c4ecb149f84c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:27:28.788683 containerd[1746]: time="2025-09-11T00:27:28.788660222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-1c5282f4e4,Uid:7a2d56f47b77285d9ed9059696fbf170,Namespace:kube-system,Attempt:0,} returns sandbox id \"472b1509bab65caee70d9eb036744a4417d14d3e53a5196ef6370051a54467ff\"" Sep 11 00:27:28.795526 containerd[1746]: time="2025-09-11T00:27:28.795488786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-1c5282f4e4,Uid:438950d9c79650aa1c385ff34e73b424,Namespace:kube-system,Attempt:0,} returns sandbox id \"255b27bdb549db7c017c0f3b9f21072ee3a982c04f9c0fadb8804840a9d3bd88\"" Sep 11 00:27:28.796217 containerd[1746]: time="2025-09-11T00:27:28.796196203Z" level=info msg="CreateContainer within sandbox \"472b1509bab65caee70d9eb036744a4417d14d3e53a5196ef6370051a54467ff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:27:28.801867 containerd[1746]: time="2025-09-11T00:27:28.801851240Z" level=info msg="CreateContainer within sandbox \"255b27bdb549db7c017c0f3b9f21072ee3a982c04f9c0fadb8804840a9d3bd88\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:27:28.808977 containerd[1746]: time="2025-09-11T00:27:28.808954846Z" level=info msg="Container be64f96d1f0bc5a180b8f048edeb6bb6d3149d09a4f38c9691b8ce74f8d12acb: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:28.827188 containerd[1746]: time="2025-09-11T00:27:28.827106571Z" level=info msg="CreateContainer within sandbox \"3cc8ca2535b7e57f9d2504a377674ffcb1883fc1e6768f1cc23c4ecb149f84c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"be64f96d1f0bc5a180b8f048edeb6bb6d3149d09a4f38c9691b8ce74f8d12acb\"" Sep 11 00:27:28.828097 containerd[1746]: time="2025-09-11T00:27:28.828077857Z" level=info msg="StartContainer for \"be64f96d1f0bc5a180b8f048edeb6bb6d3149d09a4f38c9691b8ce74f8d12acb\"" Sep 11 00:27:28.828740 containerd[1746]: 
time="2025-09-11T00:27:28.828721233Z" level=info msg="connecting to shim be64f96d1f0bc5a180b8f048edeb6bb6d3149d09a4f38c9691b8ce74f8d12acb" address="unix:///run/containerd/s/1ae40f8441f9889abfd1a9726ebf7c174ed44cd6107f7863eb07df95a39a78ee" protocol=ttrpc version=3 Sep 11 00:27:28.834829 containerd[1746]: time="2025-09-11T00:27:28.834725115Z" level=info msg="Container d74ff0a34ee2cbab17bb85d7204022846a4962e4bac38f7648ce588aa0bb976c: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:28.846729 systemd[1]: Started cri-containerd-be64f96d1f0bc5a180b8f048edeb6bb6d3149d09a4f38c9691b8ce74f8d12acb.scope - libcontainer container be64f96d1f0bc5a180b8f048edeb6bb6d3149d09a4f38c9691b8ce74f8d12acb. Sep 11 00:27:28.853323 containerd[1746]: time="2025-09-11T00:27:28.853273424Z" level=info msg="Container a1fbead8d2ca803dcb75582f7f9be011ff6a533af34f8b05ce999ee96d54ad34: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:28.905838 kubelet[2772]: I0911 00:27:28.905823 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:28.906195 kubelet[2772]: E0911 00:27:28.906177 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.50:6443/api/v1/nodes\": dial tcp 10.200.8.50:6443: connect: connection refused" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:29.084320 containerd[1746]: time="2025-09-11T00:27:29.084287144Z" level=info msg="CreateContainer within sandbox \"255b27bdb549db7c017c0f3b9f21072ee3a982c04f9c0fadb8804840a9d3bd88\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a1fbead8d2ca803dcb75582f7f9be011ff6a533af34f8b05ce999ee96d54ad34\"" Sep 11 00:27:29.085556 containerd[1746]: time="2025-09-11T00:27:29.085487413Z" level=info msg="StartContainer for \"a1fbead8d2ca803dcb75582f7f9be011ff6a533af34f8b05ce999ee96d54ad34\"" Sep 11 00:27:29.085666 containerd[1746]: time="2025-09-11T00:27:29.085640083Z" level=info msg="StartContainer for \"be64f96d1f0bc5a180b8f048edeb6bb6d3149d09a4f38c9691b8ce74f8d12acb\" returns successfully" Sep 11 00:27:29.088207 containerd[1746]: time="2025-09-11T00:27:29.088151586Z" level=info msg="connecting to shim a1fbead8d2ca803dcb75582f7f9be011ff6a533af34f8b05ce999ee96d54ad34" address="unix:///run/containerd/s/39284ac0d26182eda52b1fb87486f8ea267cd1e5605aae8e33ada4535861f33b" protocol=ttrpc version=3 Sep 11 00:27:29.092273 kubelet[2772]: E0911 00:27:29.092214 2772 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 11 00:27:29.107762 systemd[1]: Started cri-containerd-a1fbead8d2ca803dcb75582f7f9be011ff6a533af34f8b05ce999ee96d54ad34.scope - libcontainer container a1fbead8d2ca803dcb75582f7f9be011ff6a533af34f8b05ce999ee96d54ad34. 
Sep 11 00:27:29.343652 containerd[1746]: time="2025-09-11T00:27:29.343570789Z" level=info msg="CreateContainer within sandbox \"472b1509bab65caee70d9eb036744a4417d14d3e53a5196ef6370051a54467ff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d74ff0a34ee2cbab17bb85d7204022846a4962e4bac38f7648ce588aa0bb976c\"" Sep 11 00:27:29.344711 containerd[1746]: time="2025-09-11T00:27:29.344152443Z" level=info msg="StartContainer for \"d74ff0a34ee2cbab17bb85d7204022846a4962e4bac38f7648ce588aa0bb976c\"" Sep 11 00:27:29.345241 containerd[1746]: time="2025-09-11T00:27:29.344962162Z" level=info msg="connecting to shim d74ff0a34ee2cbab17bb85d7204022846a4962e4bac38f7648ce588aa0bb976c" address="unix:///run/containerd/s/5eb87439577c3e83141f433240cfacd416942891f30a6d122b08f0ff68636087" protocol=ttrpc version=3 Sep 11 00:27:29.345847 containerd[1746]: time="2025-09-11T00:27:29.345789131Z" level=info msg="StartContainer for \"a1fbead8d2ca803dcb75582f7f9be011ff6a533af34f8b05ce999ee96d54ad34\" returns successfully" Sep 11 00:27:29.355574 kubelet[2772]: E0911 00:27:29.354304 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-1c5282f4e4\" not found" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:29.364774 kubelet[2772]: E0911 00:27:29.364663 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-1c5282f4e4\" not found" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:29.369746 systemd[1]: Started cri-containerd-d74ff0a34ee2cbab17bb85d7204022846a4962e4bac38f7648ce588aa0bb976c.scope - libcontainer container d74ff0a34ee2cbab17bb85d7204022846a4962e4bac38f7648ce588aa0bb976c. Sep 11 00:27:29.485853 containerd[1746]: time="2025-09-11T00:27:29.485817461Z" level=info msg="StartContainer for \"d74ff0a34ee2cbab17bb85d7204022846a4962e4bac38f7648ce588aa0bb976c\" returns successfully" Sep 11 00:27:29.708457 kubelet[2772]: I0911 00:27:29.708269 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:30.367209 kubelet[2772]: E0911 00:27:30.367183 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-1c5282f4e4\" not found" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:30.368307 kubelet[2772]: E0911 00:27:30.368285 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-1c5282f4e4\" not found" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:30.368671 kubelet[2772]: E0911 00:27:30.368660 2772 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-1c5282f4e4\" not found" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:30.379636 kubelet[2772]: I0911 00:27:30.378131 2772 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:30.379636 kubelet[2772]: E0911 00:27:30.378156 2772 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.1.0-n-1c5282f4e4\": node \"ci-4372.1.0-n-1c5282f4e4\" not found" Sep 11 00:27:30.432730 kubelet[2772]: I0911 00:27:30.432671 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:30.446628 kubelet[2772]: E0911 00:27:30.446549 2772 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4372.1.0-n-1c5282f4e4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:30.446628 kubelet[2772]: I0911 00:27:30.446568 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:30.453308 kubelet[2772]: E0911 00:27:30.453218 2772 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-1c5282f4e4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:30.453308 kubelet[2772]: I0911 00:27:30.453238 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:30.456689 kubelet[2772]: E0911 00:27:30.456664 2772 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-1c5282f4e4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:31.116192 kubelet[2772]: I0911 00:27:31.116166 2772 apiserver.go:52] "Watching apiserver" Sep 11 00:27:31.132207 kubelet[2772]: I0911 00:27:31.132184 2772 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:27:31.366957 kubelet[2772]: I0911 00:27:31.366783 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:31.366957 kubelet[2772]: I0911 00:27:31.366821 2772 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:31.372999 kubelet[2772]: I0911 00:27:31.372977 2772 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 11 00:27:31.375763 kubelet[2772]: I0911 00:27:31.375741 2772 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 11 00:27:32.271892 systemd[1]: Reload requested from client PID 3049 ('systemctl') (unit session-9.scope)... Sep 11 00:27:32.271905 systemd[1]: Reloading... Sep 11 00:27:32.348645 zram_generator::config[3098]: No configuration found. Sep 11 00:27:32.415103 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:27:32.504500 systemd[1]: Reloading finished in 232 ms. Sep 11 00:27:32.533026 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:27:32.552333 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 00:27:32.552532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:27:32.552575 systemd[1]: kubelet.service: Consumed 1.019s CPU time, 129.7M memory peak. Sep 11 00:27:32.553901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:27:33.120396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:27:33.131840 (kubelet)[3162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:27:33.167407 kubelet[3162]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:27:33.167407 kubelet[3162]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:27:33.167407 kubelet[3162]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:27:33.167407 kubelet[3162]: I0911 00:27:33.166133 3162 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:27:33.172316 kubelet[3162]: I0911 00:27:33.172291 3162 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 11 00:27:33.172316 kubelet[3162]: I0911 00:27:33.172311 3162 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:27:33.172494 kubelet[3162]: I0911 00:27:33.172480 3162 server.go:956] "Client rotation is on, will bootstrap in background" Sep 11 00:27:33.173842 kubelet[3162]: I0911 00:27:33.173828 3162 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 11 00:27:33.175433 kubelet[3162]: I0911 00:27:33.175357 3162 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:27:33.179399 kubelet[3162]: I0911 00:27:33.179384 3162 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:27:33.182639 kubelet[3162]: I0911 00:27:33.181735 3162 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:27:33.182639 kubelet[3162]: I0911 00:27:33.181892 3162 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:27:33.182639 kubelet[3162]: I0911 00:27:33.181907 3162 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-1c5282f4e4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:27:33.182639 kubelet[3162]: I0911 00:27:33.182111 3162 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:27:33.182823 kubelet[3162]: I0911 00:27:33.182120 3162 container_manager_linux.go:303] "Creating device plugin manager" Sep 11 00:27:33.182823 kubelet[3162]: I0911 00:27:33.182155 3162 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:27:33.182823 kubelet[3162]: I0911 00:27:33.182274 3162 kubelet.go:480] "Attempting to sync node with API server" Sep 11 00:27:33.182823 kubelet[3162]: I0911 00:27:33.182290 3162 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:27:33.182823 kubelet[3162]: I0911 00:27:33.182309 3162 kubelet.go:386] "Adding apiserver pod source" Sep 11 00:27:33.182823 kubelet[3162]: I0911 00:27:33.182320 3162 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:27:33.186464 kubelet[3162]: I0911 00:27:33.186451 3162 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:27:33.187134 kubelet[3162]: I0911 00:27:33.186934 3162 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 11 00:27:33.190337 kubelet[3162]: I0911 00:27:33.190327 3162 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:27:33.190425 kubelet[3162]: I0911 00:27:33.190420 3162 server.go:1289] "Started kubelet" Sep 11 00:27:33.192758 kubelet[3162]: I0911 00:27:33.192196 3162 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:27:33.196114 kubelet[3162]: I0911 
00:27:33.196080 3162 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:27:33.197892 kubelet[3162]: I0911 00:27:33.197876 3162 server.go:317] "Adding debug handlers to kubelet server" Sep 11 00:27:33.201030 kubelet[3162]: I0911 00:27:33.200929 3162 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:27:33.201214 kubelet[3162]: I0911 00:27:33.201201 3162 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:27:33.201384 kubelet[3162]: I0911 00:27:33.201372 3162 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:27:33.202880 kubelet[3162]: I0911 00:27:33.202866 3162 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:27:33.203017 kubelet[3162]: I0911 00:27:33.202934 3162 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:27:33.203017 kubelet[3162]: I0911 00:27:33.203010 3162 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:27:33.204203 kubelet[3162]: I0911 00:27:33.204187 3162 factory.go:223] Registration of the systemd container factory successfully Sep 11 00:27:33.204295 kubelet[3162]: I0911 00:27:33.204280 3162 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:27:33.207221 kubelet[3162]: I0911 00:27:33.206778 3162 factory.go:223] Registration of the containerd container factory successfully Sep 11 00:27:33.211844 kubelet[3162]: E0911 00:27:33.211828 3162 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:27:33.212040 kubelet[3162]: I0911 00:27:33.212026 3162 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 11 00:27:33.213529 kubelet[3162]: I0911 00:27:33.213512 3162 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 11 00:27:33.213529 kubelet[3162]: I0911 00:27:33.213531 3162 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 11 00:27:33.213643 kubelet[3162]: I0911 00:27:33.213635 3162 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
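The container-manager NodeConfig dump above lists the kubelet's default hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A minimal Go sketch of how such a threshold table is evaluated; the threshold values are copied from the log, while the "observed" numbers in main are made up purely for illustration:

```go
package main

import "fmt"

// threshold mirrors the HardEvictionThresholds entries in the kubelet log above:
// a signal's available value is compared (LessThan) against either an absolute
// quantity or a percentage of capacity.
type threshold struct {
	signal   string
	quantity int64   // absolute bytes/inodes; 0 if percentage-based
	percent  float64 // fraction of capacity; 0 if quantity-based
}

var hardEviction = []threshold{
	{signal: "memory.available", quantity: 100 << 20}, // 100Mi
	{signal: "nodefs.available", percent: 0.10},
	{signal: "nodefs.inodesFree", percent: 0.05},
	{signal: "imagefs.available", percent: 0.15},
	{signal: "imagefs.inodesFree", percent: 0.05},
}

// exceeded reports whether an observed "available" value is below the threshold.
func (t threshold) exceeded(available, capacity int64) bool {
	limit := t.quantity
	if t.percent > 0 {
		limit = int64(t.percent * float64(capacity))
	}
	return available < limit
}

func main() {
	// Hypothetical observations (available, capacity) -- not taken from the log.
	obs := map[string][2]int64{
		"memory.available":  {80 << 20, 8 << 30},
		"nodefs.available":  {40 << 30, 120 << 30},
		"nodefs.inodesFree": {500000, 7800000},
	}
	for _, t := range hardEviction {
		if v, ok := obs[t.signal]; ok && t.exceeded(v[0], v[1]) {
			fmt.Printf("eviction signal %s would fire\n", t.signal)
		}
	}
}
```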
Sep 11 00:27:33.213666 kubelet[3162]: I0911 00:27:33.213646 3162 kubelet.go:2436] "Starting kubelet main sync loop" Sep 11 00:27:33.213689 kubelet[3162]: E0911 00:27:33.213673 3162 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:27:33.238021 kubelet[3162]: I0911 00:27:33.238005 3162 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:27:33.238021 kubelet[3162]: I0911 00:27:33.238015 3162 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:27:33.238021 kubelet[3162]: I0911 00:27:33.238029 3162 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:27:33.238127 kubelet[3162]: I0911 00:27:33.238118 3162 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 00:27:33.238149 kubelet[3162]: I0911 00:27:33.238125 3162 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 00:27:33.238149 kubelet[3162]: I0911 00:27:33.238139 3162 policy_none.go:49] "None policy: Start" Sep 11 00:27:33.238192 kubelet[3162]: I0911 00:27:33.238152 3162 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:27:33.238192 kubelet[3162]: I0911 00:27:33.238160 3162 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:27:33.238285 kubelet[3162]: I0911 00:27:33.238279 3162 state_mem.go:75] "Updated machine memory state" Sep 11 00:27:33.240975 kubelet[3162]: E0911 00:27:33.240960 3162 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 11 00:27:33.241071 kubelet[3162]: I0911 00:27:33.241062 3162 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:27:33.241095 kubelet[3162]: I0911 00:27:33.241074 3162 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:27:33.243527 kubelet[3162]: I0911 00:27:33.243177 3162 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:27:33.245272 kubelet[3162]: E0911 00:27:33.245254 3162 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 11 00:27:33.283223 sudo[3199]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 11 00:27:33.283381 sudo[3199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 11 00:27:33.314519 kubelet[3162]: I0911 00:27:33.314248 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.314519 kubelet[3162]: I0911 00:27:33.314294 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.314519 kubelet[3162]: I0911 00:27:33.314469 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.327102 kubelet[3162]: I0911 00:27:33.327089 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 11 00:27:33.327217 kubelet[3162]: E0911 00:27:33.327203 3162 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-1c5282f4e4\" already exists" pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.327476 kubelet[3162]: I0911 00:27:33.327468 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 11 00:27:33.327744 kubelet[3162]: I0911 00:27:33.327736 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 11 00:27:33.327833 kubelet[3162]: E0911 00:27:33.327810 3162 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.1.0-n-1c5282f4e4\" already exists" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.346267 kubelet[3162]: I0911 00:27:33.346257 3162 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.357978 kubelet[3162]: I0911 00:27:33.357883 3162 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.358199 kubelet[3162]: I0911 00:27:33.358055 3162 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.404772 kubelet[3162]: I0911 00:27:33.404716 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a2d56f47b77285d9ed9059696fbf170-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-1c5282f4e4\" (UID: \"7a2d56f47b77285d9ed9059696fbf170\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.404772 kubelet[3162]: I0911 00:27:33.404744 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/846e9997d7e90a019ff6d1c799a9735a-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-1c5282f4e4\" (UID: \"846e9997d7e90a019ff6d1c799a9735a\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.404772 kubelet[3162]: I0911 00:27:33.404764 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/438950d9c79650aa1c385ff34e73b424-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-1c5282f4e4\" (UID: \"438950d9c79650aa1c385ff34e73b424\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.404873 kubelet[3162]: I0911 00:27:33.404780 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/438950d9c79650aa1c385ff34e73b424-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-1c5282f4e4\" (UID: \"438950d9c79650aa1c385ff34e73b424\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.404873 kubelet[3162]: I0911 00:27:33.404795 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a2d56f47b77285d9ed9059696fbf170-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-1c5282f4e4\" (UID: \"7a2d56f47b77285d9ed9059696fbf170\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.404873 kubelet[3162]: I0911 00:27:33.404810 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a2d56f47b77285d9ed9059696fbf170-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-1c5282f4e4\" (UID: \"7a2d56f47b77285d9ed9059696fbf170\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.404873 kubelet[3162]: I0911 00:27:33.404823 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a2d56f47b77285d9ed9059696fbf170-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-1c5282f4e4\" (UID: \"7a2d56f47b77285d9ed9059696fbf170\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.404873 kubelet[3162]: I0911 00:27:33.404841 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/438950d9c79650aa1c385ff34e73b424-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-1c5282f4e4\" (UID: \"438950d9c79650aa1c385ff34e73b424\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.404957 kubelet[3162]: I0911 00:27:33.404857 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a2d56f47b77285d9ed9059696fbf170-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-1c5282f4e4\" (UID: \"7a2d56f47b77285d9ed9059696fbf170\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:33.732933 sudo[3199]: pam_unix(sudo:session): session closed for user root Sep 11 00:27:34.184718 kubelet[3162]: I0911 00:27:34.184654 3162 apiserver.go:52] "Watching apiserver" Sep 11 00:27:34.203441 kubelet[3162]: I0911 00:27:34.203424 3162 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:27:34.232021 kubelet[3162]: I0911 00:27:34.231954 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:34.232625 kubelet[3162]: I0911 00:27:34.232517 3162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-1c5282f4e4" Sep 11 
00:27:34.252005 kubelet[3162]: I0911 00:27:34.251973 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 11 00:27:34.252070 kubelet[3162]: E0911 00:27:34.252025 3162 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-1c5282f4e4\" already exists" pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:34.253570 kubelet[3162]: I0911 00:27:34.253301 3162 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 11 00:27:34.253570 kubelet[3162]: E0911 00:27:34.253336 3162 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-1c5282f4e4\" already exists" pod="kube-system/kube-scheduler-ci-4372.1.0-n-1c5282f4e4" Sep 11 00:27:34.261184 kubelet[3162]: I0911 00:27:34.261046 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.1.0-n-1c5282f4e4" podStartSLOduration=3.261034945 podStartE2EDuration="3.261034945s" podCreationTimestamp="2025-09-11 00:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:27:34.253556073 +0000 UTC m=+1.118116632" watchObservedRunningTime="2025-09-11 00:27:34.261034945 +0000 UTC m=+1.125595503" Sep 11 00:27:34.269650 kubelet[3162]: I0911 00:27:34.269439 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-1c5282f4e4" podStartSLOduration=3.269430183 podStartE2EDuration="3.269430183s" podCreationTimestamp="2025-09-11 00:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:27:34.269404008 +0000 UTC m=+1.133964562" watchObservedRunningTime="2025-09-11 00:27:34.269430183 +0000 UTC m=+1.133990742" Sep 11 00:27:34.269650 kubelet[3162]: I0911 00:27:34.269513 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.1.0-n-1c5282f4e4" podStartSLOduration=1.26950874 podStartE2EDuration="1.26950874s" podCreationTimestamp="2025-09-11 00:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:27:34.261434544 +0000 UTC m=+1.125995101" watchObservedRunningTime="2025-09-11 00:27:34.26950874 +0000 UTC m=+1.134069295" Sep 11 00:27:34.844030 sudo[2169]: pam_unix(sudo:session): session closed for user root Sep 11 00:27:34.950079 sshd[2168]: Connection closed by 10.200.16.10 port 37594 Sep 11 00:27:34.950007 sshd-session[2166]: pam_unix(sshd:session): session closed for user core Sep 11 00:27:34.952984 systemd[1]: sshd@6-10.200.8.50:22-10.200.16.10:37594.service: Deactivated successfully. Sep 11 00:27:34.955356 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:27:34.955509 systemd[1]: session-9.scope: Consumed 3.518s CPU time, 272.7M memory peak. Sep 11 00:27:34.956497 systemd-logind[1700]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:27:34.957973 systemd-logind[1700]: Removed session 9. 
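The pod_startup_latency_tracker entries above report, for example, podStartSLOduration=3.261034945s for the kube-apiserver static pod. For these pods the pull timestamps are zero-valued, so the SLO duration is simply observedRunningTime minus podCreationTimestamp. A short Go check of that arithmetic using the two timestamps from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default Time.String() output used in the kubelet log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-09-11 00:27:31 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-09-11 00:27:34.261034945 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Matches podStartSLOduration=3.261034945s reported by the kubelet.
	fmt.Println(running.Sub(created)) // 3.261034945s
}
```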
Sep 11 00:27:37.856417 kubelet[3162]: I0911 00:27:37.856389 3162 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 00:27:37.856756 containerd[1746]: time="2025-09-11T00:27:37.856730872Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 00:27:37.856950 kubelet[3162]: I0911 00:27:37.856915 3162 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 00:27:38.791668 systemd[1]: Created slice kubepods-besteffort-pod8ca0ff00_bb2f_4596_a4b7_e4726fae0c0b.slice - libcontainer container kubepods-besteffort-pod8ca0ff00_bb2f_4596_a4b7_e4726fae0c0b.slice. Sep 11 00:27:38.803814 systemd[1]: Created slice kubepods-burstable-podb4e6fdec_7896_4428_b005_af26ddb5d9cb.slice - libcontainer container kubepods-burstable-podb4e6fdec_7896_4428_b005_af26ddb5d9cb.slice. Sep 11 00:27:38.840507 kubelet[3162]: I0911 00:27:38.840484 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-etc-cni-netd\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840602 kubelet[3162]: I0911 00:27:38.840514 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-host-proc-sys-net\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840602 kubelet[3162]: I0911 00:27:38.840530 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnx9h\" (UniqueName: \"kubernetes.io/projected/b4e6fdec-7896-4428-b005-af26ddb5d9cb-kube-api-access-rnx9h\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840602 kubelet[3162]: I0911 00:27:38.840556 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cni-path\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840602 kubelet[3162]: I0911 00:27:38.840571 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cilium-config-path\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840602 kubelet[3162]: I0911 00:27:38.840586 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ca0ff00-bb2f-4596-a4b7-e4726fae0c0b-lib-modules\") pod \"kube-proxy-29hbj\" (UID: \"8ca0ff00-bb2f-4596-a4b7-e4726fae0c0b\") " pod="kube-system/kube-proxy-29hbj" Sep 11 00:27:38.840602 kubelet[3162]: I0911 00:27:38.840599 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-bpf-maps\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 
00:27:38.840746 kubelet[3162]: I0911 00:27:38.840626 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-xtables-lock\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840746 kubelet[3162]: I0911 00:27:38.840643 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-host-proc-sys-kernel\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840746 kubelet[3162]: I0911 00:27:38.840658 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ca0ff00-bb2f-4596-a4b7-e4726fae0c0b-kube-proxy\") pod \"kube-proxy-29hbj\" (UID: \"8ca0ff00-bb2f-4596-a4b7-e4726fae0c0b\") " pod="kube-system/kube-proxy-29hbj" Sep 11 00:27:38.840746 kubelet[3162]: I0911 00:27:38.840673 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cilium-run\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840746 kubelet[3162]: I0911 00:27:38.840686 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-hostproc\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840746 kubelet[3162]: I0911 00:27:38.840699 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cilium-cgroup\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840864 kubelet[3162]: I0911 00:27:38.840713 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-lib-modules\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840864 kubelet[3162]: I0911 00:27:38.840727 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4e6fdec-7896-4428-b005-af26ddb5d9cb-clustermesh-secrets\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840864 kubelet[3162]: I0911 00:27:38.840741 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4e6fdec-7896-4428-b005-af26ddb5d9cb-hubble-tls\") pod \"cilium-kkv8d\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " pod="kube-system/cilium-kkv8d" Sep 11 00:27:38.840864 kubelet[3162]: I0911 00:27:38.840757 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/8ca0ff00-bb2f-4596-a4b7-e4726fae0c0b-xtables-lock\") pod \"kube-proxy-29hbj\" (UID: \"8ca0ff00-bb2f-4596-a4b7-e4726fae0c0b\") " pod="kube-system/kube-proxy-29hbj" Sep 11 00:27:38.840864 kubelet[3162]: I0911 00:27:38.840772 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxvzk\" (UniqueName: \"kubernetes.io/projected/8ca0ff00-bb2f-4596-a4b7-e4726fae0c0b-kube-api-access-qxvzk\") pod \"kube-proxy-29hbj\" (UID: \"8ca0ff00-bb2f-4596-a4b7-e4726fae0c0b\") " pod="kube-system/kube-proxy-29hbj" Sep 11 00:27:39.029770 systemd[1]: Created slice kubepods-besteffort-podad318e34_f862_4f81_86ef_c4cfd183567e.slice - libcontainer container kubepods-besteffort-podad318e34_f862_4f81_86ef_c4cfd183567e.slice. Sep 11 00:27:39.042412 kubelet[3162]: I0911 00:27:39.042340 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad318e34-f862-4f81-86ef-c4cfd183567e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zxthr\" (UID: \"ad318e34-f862-4f81-86ef-c4cfd183567e\") " pod="kube-system/cilium-operator-6c4d7847fc-zxthr" Sep 11 00:27:39.042412 kubelet[3162]: I0911 00:27:39.042378 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lxhj\" (UniqueName: \"kubernetes.io/projected/ad318e34-f862-4f81-86ef-c4cfd183567e-kube-api-access-9lxhj\") pod \"cilium-operator-6c4d7847fc-zxthr\" (UID: \"ad318e34-f862-4f81-86ef-c4cfd183567e\") " pod="kube-system/cilium-operator-6c4d7847fc-zxthr" Sep 11 00:27:39.102308 containerd[1746]: time="2025-09-11T00:27:39.102273295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29hbj,Uid:8ca0ff00-bb2f-4596-a4b7-e4726fae0c0b,Namespace:kube-system,Attempt:0,}" Sep 11 00:27:39.108776 containerd[1746]: time="2025-09-11T00:27:39.108749832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kkv8d,Uid:b4e6fdec-7896-4428-b005-af26ddb5d9cb,Namespace:kube-system,Attempt:0,}" Sep 11 00:27:39.149327 containerd[1746]: time="2025-09-11T00:27:39.149174611Z" level=info msg="connecting to shim b171517d9c02f23117a2895527ae367f41e1bcaf2fa4eaa2ca8afac62bac9c7b" address="unix:///run/containerd/s/be7ba480e3bc8e8a2fcbdb09293784b908eda5538e9f7a166343e30769362c3c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:27:39.171018 containerd[1746]: time="2025-09-11T00:27:39.170129824Z" level=info msg="connecting to shim 8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899" address="unix:///run/containerd/s/a38abb5ce71b5795e38297a7fc5e3df4f4725484f1a16283a87ff41f85b997fe" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:27:39.179850 systemd[1]: Started cri-containerd-b171517d9c02f23117a2895527ae367f41e1bcaf2fa4eaa2ca8afac62bac9c7b.scope - libcontainer container b171517d9c02f23117a2895527ae367f41e1bcaf2fa4eaa2ca8afac62bac9c7b. Sep 11 00:27:39.201946 systemd[1]: Started cri-containerd-8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899.scope - libcontainer container 8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899. 
Sep 11 00:27:39.212311 containerd[1746]: time="2025-09-11T00:27:39.212288879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29hbj,Uid:8ca0ff00-bb2f-4596-a4b7-e4726fae0c0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b171517d9c02f23117a2895527ae367f41e1bcaf2fa4eaa2ca8afac62bac9c7b\"" Sep 11 00:27:39.223681 containerd[1746]: time="2025-09-11T00:27:39.223664144Z" level=info msg="CreateContainer within sandbox \"b171517d9c02f23117a2895527ae367f41e1bcaf2fa4eaa2ca8afac62bac9c7b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 00:27:39.228210 containerd[1746]: time="2025-09-11T00:27:39.228181787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kkv8d,Uid:b4e6fdec-7896-4428-b005-af26ddb5d9cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\"" Sep 11 00:27:39.229272 containerd[1746]: time="2025-09-11T00:27:39.229245368Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 11 00:27:39.247864 containerd[1746]: time="2025-09-11T00:27:39.247843193Z" level=info msg="Container 4225013c25f24dc6c2ae81b52a10ebd51d0ebfd6886d7c64b91bcd49a75ad7e1: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:39.261716 containerd[1746]: time="2025-09-11T00:27:39.261694753Z" level=info msg="CreateContainer within sandbox \"b171517d9c02f23117a2895527ae367f41e1bcaf2fa4eaa2ca8afac62bac9c7b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4225013c25f24dc6c2ae81b52a10ebd51d0ebfd6886d7c64b91bcd49a75ad7e1\"" Sep 11 00:27:39.262029 containerd[1746]: time="2025-09-11T00:27:39.262011390Z" level=info msg="StartContainer for \"4225013c25f24dc6c2ae81b52a10ebd51d0ebfd6886d7c64b91bcd49a75ad7e1\"" Sep 11 00:27:39.262899 containerd[1746]: time="2025-09-11T00:27:39.262878015Z" level=info msg="connecting to shim 4225013c25f24dc6c2ae81b52a10ebd51d0ebfd6886d7c64b91bcd49a75ad7e1" address="unix:///run/containerd/s/be7ba480e3bc8e8a2fcbdb09293784b908eda5538e9f7a166343e30769362c3c" protocol=ttrpc version=3 Sep 11 00:27:39.278729 systemd[1]: Started cri-containerd-4225013c25f24dc6c2ae81b52a10ebd51d0ebfd6886d7c64b91bcd49a75ad7e1.scope - libcontainer container 4225013c25f24dc6c2ae81b52a10ebd51d0ebfd6886d7c64b91bcd49a75ad7e1. Sep 11 00:27:39.305486 containerd[1746]: time="2025-09-11T00:27:39.305424545Z" level=info msg="StartContainer for \"4225013c25f24dc6c2ae81b52a10ebd51d0ebfd6886d7c64b91bcd49a75ad7e1\" returns successfully" Sep 11 00:27:39.333654 containerd[1746]: time="2025-09-11T00:27:39.333624686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zxthr,Uid:ad318e34-f862-4f81-86ef-c4cfd183567e,Namespace:kube-system,Attempt:0,}" Sep 11 00:27:39.373022 containerd[1746]: time="2025-09-11T00:27:39.372976575Z" level=info msg="connecting to shim 2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54" address="unix:///run/containerd/s/52e17925a4781dccd28b8a1b8a24c0f42e76bf14a775fde7a4d4e79c99a128a6" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:27:39.393838 systemd[1]: Started cri-containerd-2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54.scope - libcontainer container 2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54. 
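The cilium image above is pulled by a reference that carries both a tag and a digest: quay.io/cilium/cilium:v1.12.5@sha256:06ce…. A minimal sketch that splits such a reference into repository, tag, and digest; this is deliberately simplified string handling, not a full OCI reference parser (containerd ships a proper one):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef breaks "repo:tag@algo:digest" into its parts. A real parser also
// handles registry ports, missing tags, and validation of the digest.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	if i := strings.LastIndex(ref, ":"); i >= 0 && !strings.Contains(ref[i:], "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef(
		"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println("repo:  ", repo)
	fmt.Println("tag:   ", tag)
	fmt.Println("digest:", digest)
}
```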
Sep 11 00:27:39.438877 containerd[1746]: time="2025-09-11T00:27:39.438828274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zxthr,Uid:ad318e34-f862-4f81-86ef-c4cfd183567e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\"" Sep 11 00:27:40.256435 kubelet[3162]: I0911 00:27:40.256396 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-29hbj" podStartSLOduration=2.25638262 podStartE2EDuration="2.25638262s" podCreationTimestamp="2025-09-11 00:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:27:40.256284134 +0000 UTC m=+7.120844692" watchObservedRunningTime="2025-09-11 00:27:40.25638262 +0000 UTC m=+7.120943176" Sep 11 00:27:43.012437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount521770762.mount: Deactivated successfully. Sep 11 00:27:44.358722 containerd[1746]: time="2025-09-11T00:27:44.358686776Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:44.361312 containerd[1746]: time="2025-09-11T00:27:44.361158012Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 11 00:27:44.364510 containerd[1746]: time="2025-09-11T00:27:44.364490580Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:44.365444 containerd[1746]: time="2025-09-11T00:27:44.365418997Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.136148852s" Sep 11 00:27:44.365496 containerd[1746]: time="2025-09-11T00:27:44.365443739Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 11 00:27:44.367095 containerd[1746]: time="2025-09-11T00:27:44.366760358Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 11 00:27:44.371980 containerd[1746]: time="2025-09-11T00:27:44.371956641Z" level=info msg="CreateContainer within sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 11 00:27:44.395136 containerd[1746]: time="2025-09-11T00:27:44.395113847Z" level=info msg="Container b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:44.395824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707747774.mount: Deactivated successfully. 
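The pull above reports 166730503 bytes read in 5.136148852s for the cilium image. A quick back-of-the-envelope throughput calculation from those two logged numbers (rough, since it ignores layers that may already be present locally and decompression overhead):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 166730503                   // "bytes read" from the containerd log
	const pullTime = 5136148852 * time.Nanosecond // 5.136148852s from the same entry

	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("pulled %.1f MiB in %s (~%.1f MiB/s)\n",
		mib, pullTime, mib/pullTime.Seconds())
}
```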
Sep 11 00:27:44.415109 containerd[1746]: time="2025-09-11T00:27:44.415089982Z" level=info msg="CreateContainer within sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\"" Sep 11 00:27:44.415432 containerd[1746]: time="2025-09-11T00:27:44.415416727Z" level=info msg="StartContainer for \"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\"" Sep 11 00:27:44.416427 containerd[1746]: time="2025-09-11T00:27:44.416190949Z" level=info msg="connecting to shim b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec" address="unix:///run/containerd/s/a38abb5ce71b5795e38297a7fc5e3df4f4725484f1a16283a87ff41f85b997fe" protocol=ttrpc version=3 Sep 11 00:27:44.435748 systemd[1]: Started cri-containerd-b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec.scope - libcontainer container b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec. Sep 11 00:27:44.460028 containerd[1746]: time="2025-09-11T00:27:44.460011174Z" level=info msg="StartContainer for \"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\" returns successfully" Sep 11 00:27:44.465868 systemd[1]: cri-containerd-b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec.scope: Deactivated successfully. Sep 11 00:27:44.468401 containerd[1746]: time="2025-09-11T00:27:44.468377412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\" id:\"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\" pid:3580 exited_at:{seconds:1757550464 nanos:467934915}" Sep 11 00:27:44.468538 containerd[1746]: time="2025-09-11T00:27:44.468423851Z" level=info msg="received exit event container_id:\"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\" id:\"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\" pid:3580 exited_at:{seconds:1757550464 nanos:467934915}" Sep 11 00:27:44.481023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec-rootfs.mount: Deactivated successfully. Sep 11 00:27:48.655110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386343341.mount: Deactivated successfully. 
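The TaskExit events above carry exited_at as a seconds-plus-nanoseconds pair relative to the Unix epoch. Converting the value from the mount-cgroup exit shows it lines up with the surrounding 00:27:44 entries:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the b50656033d78… TaskExit event in the log.
	exitedAt := time.Unix(1757550464, 467934915).UTC()
	fmt.Println(exitedAt) // 2025-09-11 00:27:44.467934915 +0000 UTC
}
```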
Sep 11 00:27:49.049212 containerd[1746]: time="2025-09-11T00:27:49.049175673Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:49.051962 containerd[1746]: time="2025-09-11T00:27:49.051881992Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 11 00:27:49.054915 containerd[1746]: time="2025-09-11T00:27:49.054891673Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:49.055770 containerd[1746]: time="2025-09-11T00:27:49.055606410Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.688819712s" Sep 11 00:27:49.055770 containerd[1746]: time="2025-09-11T00:27:49.055646343Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 11 00:27:49.062086 containerd[1746]: time="2025-09-11T00:27:49.062046886Z" level=info msg="CreateContainer within sandbox \"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 11 00:27:49.080638 containerd[1746]: time="2025-09-11T00:27:49.080284856Z" level=info msg="Container 0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:49.093027 containerd[1746]: time="2025-09-11T00:27:49.093004408Z" level=info msg="CreateContainer within sandbox \"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\"" Sep 11 00:27:49.093629 containerd[1746]: time="2025-09-11T00:27:49.093589596Z" level=info msg="StartContainer for \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\"" Sep 11 00:27:49.094244 containerd[1746]: time="2025-09-11T00:27:49.094224485Z" level=info msg="connecting to shim 0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a" address="unix:///run/containerd/s/52e17925a4781dccd28b8a1b8a24c0f42e76bf14a775fde7a4d4e79c99a128a6" protocol=ttrpc version=3 Sep 11 00:27:49.116803 systemd[1]: Started cri-containerd-0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a.scope - libcontainer container 0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a. 
Sep 11 00:27:49.141306 containerd[1746]: time="2025-09-11T00:27:49.141248067Z" level=info msg="StartContainer for \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\" returns successfully" Sep 11 00:27:49.269344 containerd[1746]: time="2025-09-11T00:27:49.269004679Z" level=info msg="CreateContainer within sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 11 00:27:49.297336 containerd[1746]: time="2025-09-11T00:27:49.297307350Z" level=info msg="Container 8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:49.316344 containerd[1746]: time="2025-09-11T00:27:49.315599961Z" level=info msg="CreateContainer within sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\"" Sep 11 00:27:49.317109 containerd[1746]: time="2025-09-11T00:27:49.316739894Z" level=info msg="StartContainer for \"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\"" Sep 11 00:27:49.317411 containerd[1746]: time="2025-09-11T00:27:49.317384132Z" level=info msg="connecting to shim 8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221" address="unix:///run/containerd/s/a38abb5ce71b5795e38297a7fc5e3df4f4725484f1a16283a87ff41f85b997fe" protocol=ttrpc version=3 Sep 11 00:27:49.342783 systemd[1]: Started cri-containerd-8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221.scope - libcontainer container 8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221. Sep 11 00:27:49.381298 containerd[1746]: time="2025-09-11T00:27:49.381264439Z" level=info msg="StartContainer for \"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\" returns successfully" Sep 11 00:27:49.390580 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 11 00:27:49.391099 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:27:49.391376 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:27:49.394694 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:27:49.398951 systemd[1]: cri-containerd-8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221.scope: Deactivated successfully. Sep 11 00:27:49.400360 containerd[1746]: time="2025-09-11T00:27:49.400202702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\" id:\"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\" pid:3675 exited_at:{seconds:1757550469 nanos:399969859}" Sep 11 00:27:49.400360 containerd[1746]: time="2025-09-11T00:27:49.400275599Z" level=info msg="received exit event container_id:\"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\" id:\"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\" pid:3675 exited_at:{seconds:1757550469 nanos:399969859}" Sep 11 00:27:49.419528 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
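The apply-sysctl-overwrites init step (and the systemd-sysctl restart that follows it) both come down to writing values under /proc/sys. A minimal sketch of that mechanism; the key used here (net.ipv4.ip_forward) is only an illustration, since the log does not say which sysctls Cilium adjusts:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// writeSysctl writes a value under /proc/sys, translating the dotted key into
// the corresponding path (net.ipv4.ip_forward -> net/ipv4/ip_forward).
func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Illustrative key only; requires root and a Linux /proc.
	if err := writeSysctl("net.ipv4.ip_forward", "1"); err != nil {
		fmt.Fprintln(os.Stderr, "sysctl write failed:", err)
	}
}
```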
Sep 11 00:27:50.273971 containerd[1746]: time="2025-09-11T00:27:50.273908072Z" level=info msg="CreateContainer within sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 11 00:27:50.282094 kubelet[3162]: I0911 00:27:50.281994 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zxthr" podStartSLOduration=1.665402463 podStartE2EDuration="11.281977866s" podCreationTimestamp="2025-09-11 00:27:39 +0000 UTC" firstStartedPulling="2025-09-11 00:27:39.439644159 +0000 UTC m=+6.304204711" lastFinishedPulling="2025-09-11 00:27:49.056219561 +0000 UTC m=+15.920780114" observedRunningTime="2025-09-11 00:27:49.349510219 +0000 UTC m=+16.214070777" watchObservedRunningTime="2025-09-11 00:27:50.281977866 +0000 UTC m=+17.146538425" Sep 11 00:27:50.297208 containerd[1746]: time="2025-09-11T00:27:50.297182784Z" level=info msg="Container d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:50.314742 containerd[1746]: time="2025-09-11T00:27:50.314718522Z" level=info msg="CreateContainer within sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\"" Sep 11 00:27:50.315197 containerd[1746]: time="2025-09-11T00:27:50.315175354Z" level=info msg="StartContainer for \"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\"" Sep 11 00:27:50.316417 containerd[1746]: time="2025-09-11T00:27:50.316392355Z" level=info msg="connecting to shim d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b" address="unix:///run/containerd/s/a38abb5ce71b5795e38297a7fc5e3df4f4725484f1a16283a87ff41f85b997fe" protocol=ttrpc version=3 Sep 11 00:27:50.344791 systemd[1]: Started cri-containerd-d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b.scope - libcontainer container d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b. Sep 11 00:27:50.369671 systemd[1]: cri-containerd-d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b.scope: Deactivated successfully. Sep 11 00:27:50.370967 containerd[1746]: time="2025-09-11T00:27:50.370943661Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\" id:\"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\" pid:3723 exited_at:{seconds:1757550470 nanos:370770179}" Sep 11 00:27:50.372114 containerd[1746]: time="2025-09-11T00:27:50.371315214Z" level=info msg="received exit event container_id:\"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\" id:\"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\" pid:3723 exited_at:{seconds:1757550470 nanos:370770179}" Sep 11 00:27:50.377748 containerd[1746]: time="2025-09-11T00:27:50.377725845Z" level=info msg="StartContainer for \"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\" returns successfully" Sep 11 00:27:50.387794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b-rootfs.mount: Deactivated successfully. 
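The mount-bpf-fs init container's job is to make sure the BPF filesystem is mounted (conventionally at /sys/fs/bpf) so the agent can pin its maps there. A sketch of that mount call under that assumption; Cilium's own init step typically does the equivalent via a small script that first checks whether bpffs is already mounted:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	const target = "/sys/fs/bpf" // conventional bpffs mountpoint, assumed here

	if err := os.MkdirAll(target, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Linux-only; equivalent to: mount -t bpf bpffs /sys/fs/bpf
	if err := syscall.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		fmt.Fprintln(os.Stderr, "mount failed (needs root, or bpffs is already mounted):", err)
	}
}
```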
Sep 11 00:27:51.277428 containerd[1746]: time="2025-09-11T00:27:51.277079021Z" level=info msg="CreateContainer within sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 11 00:27:51.300720 containerd[1746]: time="2025-09-11T00:27:51.299380889Z" level=info msg="Container 11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:51.314969 containerd[1746]: time="2025-09-11T00:27:51.314944968Z" level=info msg="CreateContainer within sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\"" Sep 11 00:27:51.318756 containerd[1746]: time="2025-09-11T00:27:51.318732550Z" level=info msg="StartContainer for \"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\"" Sep 11 00:27:51.319484 containerd[1746]: time="2025-09-11T00:27:51.319376538Z" level=info msg="connecting to shim 11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb" address="unix:///run/containerd/s/a38abb5ce71b5795e38297a7fc5e3df4f4725484f1a16283a87ff41f85b997fe" protocol=ttrpc version=3 Sep 11 00:27:51.356757 systemd[1]: Started cri-containerd-11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb.scope - libcontainer container 11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb. Sep 11 00:27:51.417786 systemd[1]: cri-containerd-11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb.scope: Deactivated successfully. Sep 11 00:27:51.418906 containerd[1746]: time="2025-09-11T00:27:51.418883188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\" id:\"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\" pid:3763 exited_at:{seconds:1757550471 nanos:418657830}" Sep 11 00:27:51.422849 containerd[1746]: time="2025-09-11T00:27:51.422828990Z" level=info msg="received exit event container_id:\"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\" id:\"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\" pid:3763 exited_at:{seconds:1757550471 nanos:418657830}" Sep 11 00:27:51.423433 containerd[1746]: time="2025-09-11T00:27:51.423415288Z" level=info msg="StartContainer for \"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\" returns successfully" Sep 11 00:27:51.437605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb-rootfs.mount: Deactivated successfully. Sep 11 00:27:52.281041 containerd[1746]: time="2025-09-11T00:27:52.280994319Z" level=info msg="CreateContainer within sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 11 00:27:52.318382 containerd[1746]: time="2025-09-11T00:27:52.317716240Z" level=info msg="Container ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:52.320816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3446480762.mount: Deactivated successfully. 
Sep 11 00:27:52.330022 containerd[1746]: time="2025-09-11T00:27:52.329996890Z" level=info msg="CreateContainer within sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\"" Sep 11 00:27:52.330603 containerd[1746]: time="2025-09-11T00:27:52.330484350Z" level=info msg="StartContainer for \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\"" Sep 11 00:27:52.331401 containerd[1746]: time="2025-09-11T00:27:52.331367441Z" level=info msg="connecting to shim ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a" address="unix:///run/containerd/s/a38abb5ce71b5795e38297a7fc5e3df4f4725484f1a16283a87ff41f85b997fe" protocol=ttrpc version=3 Sep 11 00:27:52.351752 systemd[1]: Started cri-containerd-ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a.scope - libcontainer container ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a. Sep 11 00:27:52.387730 containerd[1746]: time="2025-09-11T00:27:52.387698816Z" level=info msg="StartContainer for \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" returns successfully" Sep 11 00:27:52.443849 containerd[1746]: time="2025-09-11T00:27:52.443607698Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" id:\"ab58a3a23d48cd5edeb87baebee08fe05a43d1a019913ae94aa1f0b05f51a5eb\" pid:3829 exited_at:{seconds:1757550472 nanos:443396952}" Sep 11 00:27:52.464204 kubelet[3162]: I0911 00:27:52.463527 3162 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 11 00:27:52.504160 systemd[1]: Created slice kubepods-burstable-podea058d6c_c874_4958_bdec_f53d507ed2b7.slice - libcontainer container kubepods-burstable-podea058d6c_c874_4958_bdec_f53d507ed2b7.slice. Sep 11 00:27:52.514008 systemd[1]: Created slice kubepods-burstable-pod6780a1c9_db99_412a_bd22_7eedcc4488fb.slice - libcontainer container kubepods-burstable-pod6780a1c9_db99_412a_bd22_7eedcc4488fb.slice. 
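The slice names above encode each pod's QoS class: kube-proxy lands in a besteffort slice, while cilium and the two coredns pods get burstable slices. The class follows from the containers' resource requests and limits; below is a simplified sketch of that classification (my own condensation of the Kubernetes rules, ignoring init containers and the request-defaulting edge cases):

```go
package main

import "fmt"

// resources holds per-container CPU/memory requests and limits in abstract units.
type resources struct {
	reqCPU, reqMem, limCPU, limMem int64
}

// qosClass is a simplified version of the Kubernetes QoS rules:
//   - Guaranteed: every container sets limits, and requests equal limits.
//   - BestEffort: no container sets any request or limit.
//   - Burstable:  everything else.
func qosClass(containers []resources) string {
	anySet, allGuaranteed := false, true
	for _, c := range containers {
		if c.reqCPU != 0 || c.reqMem != 0 || c.limCPU != 0 || c.limMem != 0 {
			anySet = true
		}
		if c.limCPU == 0 || c.limMem == 0 || c.reqCPU != c.limCPU || c.reqMem != c.limMem {
			allGuaranteed = false
		}
	}
	switch {
	case !anySet:
		return "besteffort"
	case allGuaranteed:
		return "guaranteed"
	default:
		return "burstable"
	}
}

func main() {
	// Resource numbers are illustrative; only the resulting classes mirror the slices in the log.
	fmt.Println(qosClass([]resources{{}}))                               // besteffort (like kube-proxy here)
	fmt.Println(qosClass([]resources{{reqCPU: 100, reqMem: 128 << 20}})) // burstable (like cilium, coredns)
	fmt.Println(qosClass([]resources{{reqCPU: 250, reqMem: 256 << 20, limCPU: 250, limMem: 256 << 20}})) // guaranteed
}
```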
Sep 11 00:27:52.537683 kubelet[3162]: I0911 00:27:52.537247 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnfqd\" (UniqueName: \"kubernetes.io/projected/ea058d6c-c874-4958-bdec-f53d507ed2b7-kube-api-access-mnfqd\") pod \"coredns-674b8bbfcf-2g4wl\" (UID: \"ea058d6c-c874-4958-bdec-f53d507ed2b7\") " pod="kube-system/coredns-674b8bbfcf-2g4wl" Sep 11 00:27:52.537683 kubelet[3162]: I0911 00:27:52.537287 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6780a1c9-db99-412a-bd22-7eedcc4488fb-config-volume\") pod \"coredns-674b8bbfcf-v49n5\" (UID: \"6780a1c9-db99-412a-bd22-7eedcc4488fb\") " pod="kube-system/coredns-674b8bbfcf-v49n5" Sep 11 00:27:52.537683 kubelet[3162]: I0911 00:27:52.537305 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea058d6c-c874-4958-bdec-f53d507ed2b7-config-volume\") pod \"coredns-674b8bbfcf-2g4wl\" (UID: \"ea058d6c-c874-4958-bdec-f53d507ed2b7\") " pod="kube-system/coredns-674b8bbfcf-2g4wl" Sep 11 00:27:52.537683 kubelet[3162]: I0911 00:27:52.537323 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb9h8\" (UniqueName: \"kubernetes.io/projected/6780a1c9-db99-412a-bd22-7eedcc4488fb-kube-api-access-hb9h8\") pod \"coredns-674b8bbfcf-v49n5\" (UID: \"6780a1c9-db99-412a-bd22-7eedcc4488fb\") " pod="kube-system/coredns-674b8bbfcf-v49n5" Sep 11 00:27:52.809358 containerd[1746]: time="2025-09-11T00:27:52.809282424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2g4wl,Uid:ea058d6c-c874-4958-bdec-f53d507ed2b7,Namespace:kube-system,Attempt:0,}" Sep 11 00:27:52.819243 containerd[1746]: time="2025-09-11T00:27:52.819184846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v49n5,Uid:6780a1c9-db99-412a-bd22-7eedcc4488fb,Namespace:kube-system,Attempt:0,}" Sep 11 00:27:53.292360 kubelet[3162]: I0911 00:27:53.292295 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kkv8d" podStartSLOduration=10.15498686 podStartE2EDuration="15.292279713s" podCreationTimestamp="2025-09-11 00:27:38 +0000 UTC" firstStartedPulling="2025-09-11 00:27:39.228853318 +0000 UTC m=+6.093413870" lastFinishedPulling="2025-09-11 00:27:44.366146179 +0000 UTC m=+11.230706723" observedRunningTime="2025-09-11 00:27:53.292131371 +0000 UTC m=+20.156691929" watchObservedRunningTime="2025-09-11 00:27:53.292279713 +0000 UTC m=+20.156840273" Sep 11 00:27:54.321443 systemd-networkd[1350]: cilium_host: Link UP Sep 11 00:27:54.321771 systemd-networkd[1350]: cilium_net: Link UP Sep 11 00:27:54.322157 systemd-networkd[1350]: cilium_net: Gained carrier Sep 11 00:27:54.322644 systemd-networkd[1350]: cilium_host: Gained carrier Sep 11 00:27:54.460807 systemd-networkd[1350]: cilium_vxlan: Link UP Sep 11 00:27:54.460893 systemd-networkd[1350]: cilium_vxlan: Gained carrier Sep 11 00:27:54.646691 kernel: NET: Registered PF_ALG protocol family Sep 11 00:27:55.067545 systemd-networkd[1350]: cilium_host: Gained IPv6LL Sep 11 00:27:55.082889 systemd-networkd[1350]: lxc_health: Link UP Sep 11 00:27:55.083121 systemd-networkd[1350]: lxc_health: Gained carrier Sep 11 00:27:55.259672 systemd-networkd[1350]: cilium_net: Gained IPv6LL Sep 11 00:27:55.339897 kernel: eth0: renamed from tmp03731 Sep 11 
00:27:55.343670 systemd-networkd[1350]: lxcea9ca9f9812b: Link UP Sep 11 00:27:55.344813 systemd-networkd[1350]: lxcea9ca9f9812b: Gained carrier Sep 11 00:27:55.353412 systemd-networkd[1350]: lxc5160155d67d9: Link UP Sep 11 00:27:55.362676 kernel: eth0: renamed from tmp178f6 Sep 11 00:27:55.365731 systemd-networkd[1350]: lxc5160155d67d9: Gained carrier Sep 11 00:27:56.282804 systemd-networkd[1350]: lxc_health: Gained IPv6LL Sep 11 00:27:56.538781 systemd-networkd[1350]: cilium_vxlan: Gained IPv6LL Sep 11 00:27:56.858801 systemd-networkd[1350]: lxc5160155d67d9: Gained IPv6LL Sep 11 00:27:57.050781 systemd-networkd[1350]: lxcea9ca9f9812b: Gained IPv6LL Sep 11 00:27:57.871746 containerd[1746]: time="2025-09-11T00:27:57.871704295Z" level=info msg="connecting to shim 037310a93a71e15d4956350bb8782aaac5254974741e21974a05ee44deab3b9e" address="unix:///run/containerd/s/4b4e222ba971b3d08b7926f8df8ee45a666e4a8df920df4c35f86c0defcd93cd" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:27:57.898742 containerd[1746]: time="2025-09-11T00:27:57.898674498Z" level=info msg="connecting to shim 178f6e8d44b55bd5ac35edde16adf1b8b6484c07ea343c1fb550244eccf14152" address="unix:///run/containerd/s/ee3096714e7460be04c57aaa819e33c130a37e9a05aeae8918491bf3b7d0265d" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:27:57.924757 systemd[1]: Started cri-containerd-037310a93a71e15d4956350bb8782aaac5254974741e21974a05ee44deab3b9e.scope - libcontainer container 037310a93a71e15d4956350bb8782aaac5254974741e21974a05ee44deab3b9e. Sep 11 00:27:57.927898 systemd[1]: Started cri-containerd-178f6e8d44b55bd5ac35edde16adf1b8b6484c07ea343c1fb550244eccf14152.scope - libcontainer container 178f6e8d44b55bd5ac35edde16adf1b8b6484c07ea343c1fb550244eccf14152. Sep 11 00:27:57.970933 containerd[1746]: time="2025-09-11T00:27:57.970890199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2g4wl,Uid:ea058d6c-c874-4958-bdec-f53d507ed2b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"037310a93a71e15d4956350bb8782aaac5254974741e21974a05ee44deab3b9e\"" Sep 11 00:27:57.975367 containerd[1746]: time="2025-09-11T00:27:57.975312420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v49n5,Uid:6780a1c9-db99-412a-bd22-7eedcc4488fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"178f6e8d44b55bd5ac35edde16adf1b8b6484c07ea343c1fb550244eccf14152\"" Sep 11 00:27:57.978477 containerd[1746]: time="2025-09-11T00:27:57.978456941Z" level=info msg="CreateContainer within sandbox \"037310a93a71e15d4956350bb8782aaac5254974741e21974a05ee44deab3b9e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:27:57.982627 containerd[1746]: time="2025-09-11T00:27:57.982597812Z" level=info msg="CreateContainer within sandbox \"178f6e8d44b55bd5ac35edde16adf1b8b6484c07ea343c1fb550244eccf14152\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:27:58.005161 containerd[1746]: time="2025-09-11T00:27:58.004925720Z" level=info msg="Container 7cf9d59a563c0118038874cc68ea8abb54a944459cadf500e68adf8d59dfbdcc: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:58.005302 containerd[1746]: time="2025-09-11T00:27:58.005286901Z" level=info msg="Container e7631245ac047f645d9cf9939c040f32a3dcfb70170d1cafc752cba24ee6d6cf: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:58.024182 containerd[1746]: time="2025-09-11T00:27:58.024160296Z" level=info msg="CreateContainer within sandbox \"037310a93a71e15d4956350bb8782aaac5254974741e21974a05ee44deab3b9e\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cf9d59a563c0118038874cc68ea8abb54a944459cadf500e68adf8d59dfbdcc\"" Sep 11 00:27:58.025001 containerd[1746]: time="2025-09-11T00:27:58.024541051Z" level=info msg="StartContainer for \"7cf9d59a563c0118038874cc68ea8abb54a944459cadf500e68adf8d59dfbdcc\"" Sep 11 00:27:58.025162 containerd[1746]: time="2025-09-11T00:27:58.025141769Z" level=info msg="connecting to shim 7cf9d59a563c0118038874cc68ea8abb54a944459cadf500e68adf8d59dfbdcc" address="unix:///run/containerd/s/4b4e222ba971b3d08b7926f8df8ee45a666e4a8df920df4c35f86c0defcd93cd" protocol=ttrpc version=3 Sep 11 00:27:58.031029 containerd[1746]: time="2025-09-11T00:27:58.030702840Z" level=info msg="CreateContainer within sandbox \"178f6e8d44b55bd5ac35edde16adf1b8b6484c07ea343c1fb550244eccf14152\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e7631245ac047f645d9cf9939c040f32a3dcfb70170d1cafc752cba24ee6d6cf\"" Sep 11 00:27:58.031777 containerd[1746]: time="2025-09-11T00:27:58.031664773Z" level=info msg="StartContainer for \"e7631245ac047f645d9cf9939c040f32a3dcfb70170d1cafc752cba24ee6d6cf\"" Sep 11 00:27:58.032912 containerd[1746]: time="2025-09-11T00:27:58.032492837Z" level=info msg="connecting to shim e7631245ac047f645d9cf9939c040f32a3dcfb70170d1cafc752cba24ee6d6cf" address="unix:///run/containerd/s/ee3096714e7460be04c57aaa819e33c130a37e9a05aeae8918491bf3b7d0265d" protocol=ttrpc version=3 Sep 11 00:27:58.047738 systemd[1]: Started cri-containerd-7cf9d59a563c0118038874cc68ea8abb54a944459cadf500e68adf8d59dfbdcc.scope - libcontainer container 7cf9d59a563c0118038874cc68ea8abb54a944459cadf500e68adf8d59dfbdcc. Sep 11 00:27:58.050291 systemd[1]: Started cri-containerd-e7631245ac047f645d9cf9939c040f32a3dcfb70170d1cafc752cba24ee6d6cf.scope - libcontainer container e7631245ac047f645d9cf9939c040f32a3dcfb70170d1cafc752cba24ee6d6cf. 
Sep 11 00:27:58.084383 containerd[1746]: time="2025-09-11T00:27:58.084334944Z" level=info msg="StartContainer for \"7cf9d59a563c0118038874cc68ea8abb54a944459cadf500e68adf8d59dfbdcc\" returns successfully" Sep 11 00:27:58.092648 containerd[1746]: time="2025-09-11T00:27:58.092630543Z" level=info msg="StartContainer for \"e7631245ac047f645d9cf9939c040f32a3dcfb70170d1cafc752cba24ee6d6cf\" returns successfully" Sep 11 00:27:58.301308 kubelet[3162]: I0911 00:27:58.301249 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v49n5" podStartSLOduration=19.301233739 podStartE2EDuration="19.301233739s" podCreationTimestamp="2025-09-11 00:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:27:58.301028199 +0000 UTC m=+25.165588761" watchObservedRunningTime="2025-09-11 00:27:58.301233739 +0000 UTC m=+25.165794295" Sep 11 00:27:58.328625 kubelet[3162]: I0911 00:27:58.328567 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2g4wl" podStartSLOduration=19.328530315 podStartE2EDuration="19.328530315s" podCreationTimestamp="2025-09-11 00:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:27:58.314188933 +0000 UTC m=+25.178749495" watchObservedRunningTime="2025-09-11 00:27:58.328530315 +0000 UTC m=+25.193090876" Sep 11 00:28:06.899214 kubelet[3162]: I0911 00:28:06.899104 3162 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:28:41.468405 systemd[1]: Started sshd@7-10.200.8.50:22-10.200.16.10:59078.service - OpenSSH per-connection server daemon (10.200.16.10:59078). Sep 11 00:28:42.106465 sshd[4478]: Accepted publickey for core from 10.200.16.10 port 59078 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:28:42.107655 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:28:42.111665 systemd-logind[1700]: New session 10 of user core. Sep 11 00:28:42.118731 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 11 00:28:42.602771 sshd[4480]: Connection closed by 10.200.16.10 port 59078 Sep 11 00:28:42.603761 sshd-session[4478]: pam_unix(sshd:session): session closed for user core Sep 11 00:28:42.607019 systemd-logind[1700]: Session 10 logged out. Waiting for processes to exit. Sep 11 00:28:42.607221 systemd[1]: sshd@7-10.200.8.50:22-10.200.16.10:59078.service: Deactivated successfully. Sep 11 00:28:42.609274 systemd[1]: session-10.scope: Deactivated successfully. Sep 11 00:28:42.610542 systemd-logind[1700]: Removed session 10. Sep 11 00:28:47.720517 systemd[1]: Started sshd@8-10.200.8.50:22-10.200.16.10:59090.service - OpenSSH per-connection server daemon (10.200.16.10:59090). Sep 11 00:28:48.355046 sshd[4493]: Accepted publickey for core from 10.200.16.10 port 59090 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:28:48.356002 sshd-session[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:28:48.359825 systemd-logind[1700]: New session 11 of user core. Sep 11 00:28:48.367744 systemd[1]: Started session-11.scope - Session 11 of User core. 
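For reference, the pod startup figures kubelet logs above are internally consistent: podStartE2EDuration is the time from podCreationTimestamp to the observed running time, and podStartSLOduration additionally excludes the image-pull window (firstStartedPulling to lastFinishedPulling); for the two coredns pods the pull timestamps are the zero value, so the two durations coincide. A minimal sketch reproducing the cilium-kkv8d numbers from the values quoted earlier in the log (illustrative arithmetic only, not kubelet's own code):

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
# Timestamps copied from the cilium-kkv8d pod_startup_latency_tracker entry above,
# truncated to microseconds (enough to reproduce the reported figures).
created   = datetime(2025, 9, 11, 0, 27, 38)                      # podCreationTimestamp
running   = datetime.strptime("2025-09-11 00:27:53.292279", fmt)  # watchObservedRunningTime
pull_from = datetime.strptime("2025-09-11 00:27:39.228853", fmt)  # firstStartedPulling
pull_to   = datetime.strptime("2025-09-11 00:27:44.366146", fmt)  # lastFinishedPulling

e2e = (running - created).total_seconds()          # ≈ 15.29 s -> podStartE2EDuration
slo = e2e - (pull_to - pull_from).total_seconds()  # ≈ 10.15 s -> podStartSLOduration
print(f"E2E ≈ {e2e:.6f}s, SLO ≈ {slo:.6f}s")
```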
Sep 11 00:28:48.848683 sshd[4495]: Connection closed by 10.200.16.10 port 59090 Sep 11 00:28:48.849077 sshd-session[4493]: pam_unix(sshd:session): session closed for user core Sep 11 00:28:48.851731 systemd[1]: sshd@8-10.200.8.50:22-10.200.16.10:59090.service: Deactivated successfully. Sep 11 00:28:48.853459 systemd[1]: session-11.scope: Deactivated successfully. Sep 11 00:28:48.854050 systemd-logind[1700]: Session 11 logged out. Waiting for processes to exit. Sep 11 00:28:48.855484 systemd-logind[1700]: Removed session 11. Sep 11 00:28:53.966356 systemd[1]: Started sshd@9-10.200.8.50:22-10.200.16.10:57218.service - OpenSSH per-connection server daemon (10.200.16.10:57218). Sep 11 00:28:54.603480 sshd[4508]: Accepted publickey for core from 10.200.16.10 port 57218 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:28:54.604407 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:28:54.607796 systemd-logind[1700]: New session 12 of user core. Sep 11 00:28:54.614747 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 11 00:28:55.093912 sshd[4510]: Connection closed by 10.200.16.10 port 57218 Sep 11 00:28:55.094211 sshd-session[4508]: pam_unix(sshd:session): session closed for user core Sep 11 00:28:55.097173 systemd[1]: sshd@9-10.200.8.50:22-10.200.16.10:57218.service: Deactivated successfully. Sep 11 00:28:55.099014 systemd[1]: session-12.scope: Deactivated successfully. Sep 11 00:28:55.099734 systemd-logind[1700]: Session 12 logged out. Waiting for processes to exit. Sep 11 00:28:55.101192 systemd-logind[1700]: Removed session 12. Sep 11 00:29:00.220322 systemd[1]: Started sshd@10-10.200.8.50:22-10.200.16.10:52496.service - OpenSSH per-connection server daemon (10.200.16.10:52496). Sep 11 00:29:00.856932 sshd[4524]: Accepted publickey for core from 10.200.16.10 port 52496 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:00.857978 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:00.861667 systemd-logind[1700]: New session 13 of user core. Sep 11 00:29:00.866744 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 11 00:29:01.344699 sshd[4526]: Connection closed by 10.200.16.10 port 52496 Sep 11 00:29:01.345107 sshd-session[4524]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:01.347259 systemd[1]: sshd@10-10.200.8.50:22-10.200.16.10:52496.service: Deactivated successfully. Sep 11 00:29:01.348815 systemd[1]: session-13.scope: Deactivated successfully. Sep 11 00:29:01.349924 systemd-logind[1700]: Session 13 logged out. Waiting for processes to exit. Sep 11 00:29:01.351412 systemd-logind[1700]: Removed session 13. Sep 11 00:29:01.455991 systemd[1]: Started sshd@11-10.200.8.50:22-10.200.16.10:52510.service - OpenSSH per-connection server daemon (10.200.16.10:52510). Sep 11 00:29:02.090337 sshd[4539]: Accepted publickey for core from 10.200.16.10 port 52510 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:02.091328 sshd-session[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:02.095246 systemd-logind[1700]: New session 14 of user core. Sep 11 00:29:02.100756 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 11 00:29:02.609565 sshd[4541]: Connection closed by 10.200.16.10 port 52510 Sep 11 00:29:02.610723 sshd-session[4539]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:02.613302 systemd[1]: sshd@11-10.200.8.50:22-10.200.16.10:52510.service: Deactivated successfully. Sep 11 00:29:02.615231 systemd[1]: session-14.scope: Deactivated successfully. Sep 11 00:29:02.615936 systemd-logind[1700]: Session 14 logged out. Waiting for processes to exit. Sep 11 00:29:02.617148 systemd-logind[1700]: Removed session 14. Sep 11 00:29:02.731928 systemd[1]: Started sshd@12-10.200.8.50:22-10.200.16.10:52522.service - OpenSSH per-connection server daemon (10.200.16.10:52522). Sep 11 00:29:03.371747 sshd[4551]: Accepted publickey for core from 10.200.16.10 port 52522 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:03.372889 sshd-session[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:03.377145 systemd-logind[1700]: New session 15 of user core. Sep 11 00:29:03.381756 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 11 00:29:03.865001 sshd[4553]: Connection closed by 10.200.16.10 port 52522 Sep 11 00:29:03.865374 sshd-session[4551]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:03.868084 systemd[1]: sshd@12-10.200.8.50:22-10.200.16.10:52522.service: Deactivated successfully. Sep 11 00:29:03.869731 systemd[1]: session-15.scope: Deactivated successfully. Sep 11 00:29:03.870349 systemd-logind[1700]: Session 15 logged out. Waiting for processes to exit. Sep 11 00:29:03.871550 systemd-logind[1700]: Removed session 15. Sep 11 00:29:08.980922 systemd[1]: Started sshd@13-10.200.8.50:22-10.200.16.10:52534.service - OpenSSH per-connection server daemon (10.200.16.10:52534). Sep 11 00:29:09.624589 sshd[4565]: Accepted publickey for core from 10.200.16.10 port 52534 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:09.625493 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:09.629324 systemd-logind[1700]: New session 16 of user core. Sep 11 00:29:09.633727 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 11 00:29:10.117667 sshd[4569]: Connection closed by 10.200.16.10 port 52534 Sep 11 00:29:10.118050 sshd-session[4565]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:10.120145 systemd[1]: sshd@13-10.200.8.50:22-10.200.16.10:52534.service: Deactivated successfully. Sep 11 00:29:10.121921 systemd[1]: session-16.scope: Deactivated successfully. Sep 11 00:29:10.123002 systemd-logind[1700]: Session 16 logged out. Waiting for processes to exit. Sep 11 00:29:10.124171 systemd-logind[1700]: Removed session 16. Sep 11 00:29:10.241118 systemd[1]: Started sshd@14-10.200.8.50:22-10.200.16.10:43112.service - OpenSSH per-connection server daemon (10.200.16.10:43112). Sep 11 00:29:10.873803 sshd[4581]: Accepted publickey for core from 10.200.16.10 port 43112 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:10.875825 sshd-session[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:10.878883 systemd-logind[1700]: New session 17 of user core. Sep 11 00:29:10.883717 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 11 00:29:11.429188 sshd[4583]: Connection closed by 10.200.16.10 port 43112 Sep 11 00:29:11.429597 sshd-session[4581]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:11.432784 systemd[1]: sshd@14-10.200.8.50:22-10.200.16.10:43112.service: Deactivated successfully. Sep 11 00:29:11.433065 systemd-logind[1700]: Session 17 logged out. Waiting for processes to exit. Sep 11 00:29:11.434714 systemd[1]: session-17.scope: Deactivated successfully. Sep 11 00:29:11.436558 systemd-logind[1700]: Removed session 17. Sep 11 00:29:11.543713 systemd[1]: Started sshd@15-10.200.8.50:22-10.200.16.10:43124.service - OpenSSH per-connection server daemon (10.200.16.10:43124). Sep 11 00:29:12.180558 sshd[4593]: Accepted publickey for core from 10.200.16.10 port 43124 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:12.181367 sshd-session[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:12.184306 systemd-logind[1700]: New session 18 of user core. Sep 11 00:29:12.189725 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 11 00:29:13.088273 sshd[4595]: Connection closed by 10.200.16.10 port 43124 Sep 11 00:29:13.088775 sshd-session[4593]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:13.091186 systemd[1]: sshd@15-10.200.8.50:22-10.200.16.10:43124.service: Deactivated successfully. Sep 11 00:29:13.092518 systemd[1]: session-18.scope: Deactivated successfully. Sep 11 00:29:13.093505 systemd-logind[1700]: Session 18 logged out. Waiting for processes to exit. Sep 11 00:29:13.095762 systemd-logind[1700]: Removed session 18. Sep 11 00:29:13.201000 systemd[1]: Started sshd@16-10.200.8.50:22-10.200.16.10:43126.service - OpenSSH per-connection server daemon (10.200.16.10:43126). Sep 11 00:29:13.838053 sshd[4612]: Accepted publickey for core from 10.200.16.10 port 43126 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:13.838984 sshd-session[4612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:13.842556 systemd-logind[1700]: New session 19 of user core. Sep 11 00:29:13.855741 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 11 00:29:14.401979 sshd[4614]: Connection closed by 10.200.16.10 port 43126 Sep 11 00:29:14.402800 sshd-session[4612]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:14.405437 systemd[1]: sshd@16-10.200.8.50:22-10.200.16.10:43126.service: Deactivated successfully. Sep 11 00:29:14.407097 systemd[1]: session-19.scope: Deactivated successfully. Sep 11 00:29:14.407969 systemd-logind[1700]: Session 19 logged out. Waiting for processes to exit. Sep 11 00:29:14.408916 systemd-logind[1700]: Removed session 19. Sep 11 00:29:14.516752 systemd[1]: Started sshd@17-10.200.8.50:22-10.200.16.10:43132.service - OpenSSH per-connection server daemon (10.200.16.10:43132). Sep 11 00:29:15.154364 sshd[4624]: Accepted publickey for core from 10.200.16.10 port 43132 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:15.155229 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:15.158779 systemd-logind[1700]: New session 20 of user core. Sep 11 00:29:15.168752 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 11 00:29:15.642488 sshd[4626]: Connection closed by 10.200.16.10 port 43132 Sep 11 00:29:15.642851 sshd-session[4624]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:15.644795 systemd[1]: sshd@17-10.200.8.50:22-10.200.16.10:43132.service: Deactivated successfully. Sep 11 00:29:15.646314 systemd[1]: session-20.scope: Deactivated successfully. Sep 11 00:29:15.646955 systemd-logind[1700]: Session 20 logged out. Waiting for processes to exit. Sep 11 00:29:15.648473 systemd-logind[1700]: Removed session 20. Sep 11 00:29:20.755361 systemd[1]: Started sshd@18-10.200.8.50:22-10.200.16.10:38162.service - OpenSSH per-connection server daemon (10.200.16.10:38162). Sep 11 00:29:21.401281 sshd[4640]: Accepted publickey for core from 10.200.16.10 port 38162 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:21.402303 sshd-session[4640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:21.406128 systemd-logind[1700]: New session 21 of user core. Sep 11 00:29:21.411754 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 11 00:29:21.911038 sshd[4642]: Connection closed by 10.200.16.10 port 38162 Sep 11 00:29:21.911759 sshd-session[4640]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:21.913739 systemd[1]: sshd@18-10.200.8.50:22-10.200.16.10:38162.service: Deactivated successfully. Sep 11 00:29:21.915409 systemd[1]: session-21.scope: Deactivated successfully. Sep 11 00:29:21.916528 systemd-logind[1700]: Session 21 logged out. Waiting for processes to exit. Sep 11 00:29:21.917764 systemd-logind[1700]: Removed session 21. Sep 11 00:29:27.058230 systemd[1]: Started sshd@19-10.200.8.50:22-10.200.16.10:38168.service - OpenSSH per-connection server daemon (10.200.16.10:38168). Sep 11 00:29:27.692814 sshd[4654]: Accepted publickey for core from 10.200.16.10 port 38168 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:27.693742 sshd-session[4654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:27.697370 systemd-logind[1700]: New session 22 of user core. Sep 11 00:29:27.705739 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 11 00:29:28.179936 sshd[4656]: Connection closed by 10.200.16.10 port 38168 Sep 11 00:29:28.180493 sshd-session[4654]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:28.182601 systemd[1]: sshd@19-10.200.8.50:22-10.200.16.10:38168.service: Deactivated successfully. Sep 11 00:29:28.184251 systemd[1]: session-22.scope: Deactivated successfully. Sep 11 00:29:28.184858 systemd-logind[1700]: Session 22 logged out. Waiting for processes to exit. Sep 11 00:29:28.186482 systemd-logind[1700]: Removed session 22. Sep 11 00:29:28.298054 systemd[1]: Started sshd@20-10.200.8.50:22-10.200.16.10:38172.service - OpenSSH per-connection server daemon (10.200.16.10:38172). Sep 11 00:29:28.933210 sshd[4667]: Accepted publickey for core from 10.200.16.10 port 38172 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:28.934137 sshd-session[4667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:28.937853 systemd-logind[1700]: New session 23 of user core. Sep 11 00:29:28.944705 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 11 00:29:30.555831 containerd[1746]: time="2025-09-11T00:29:30.555681979Z" level=info msg="StopContainer for \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\" with timeout 30 (s)" Sep 11 00:29:30.556528 containerd[1746]: time="2025-09-11T00:29:30.556509277Z" level=info msg="Stop container \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\" with signal terminated" Sep 11 00:29:30.571333 systemd[1]: cri-containerd-0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a.scope: Deactivated successfully. Sep 11 00:29:30.572322 containerd[1746]: time="2025-09-11T00:29:30.572225452Z" level=info msg="received exit event container_id:\"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\" id:\"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\" pid:3641 exited_at:{seconds:1757550570 nanos:571907513}" Sep 11 00:29:30.572893 containerd[1746]: time="2025-09-11T00:29:30.572873682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\" id:\"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\" pid:3641 exited_at:{seconds:1757550570 nanos:571907513}" Sep 11 00:29:30.574707 containerd[1746]: time="2025-09-11T00:29:30.574673326Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:29:30.580423 containerd[1746]: time="2025-09-11T00:29:30.580301883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" id:\"c19f8dadab8495167b5b3ca097501b23dc2f1b0bf97cba2aed61254d199faf89\" pid:4689 exited_at:{seconds:1757550570 nanos:580084406}" Sep 11 00:29:30.581911 containerd[1746]: time="2025-09-11T00:29:30.581888043Z" level=info msg="StopContainer for \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" with timeout 2 (s)" Sep 11 00:29:30.582403 containerd[1746]: time="2025-09-11T00:29:30.582372621Z" level=info msg="Stop container \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" with signal terminated" Sep 11 00:29:30.589596 systemd-networkd[1350]: lxc_health: Link DOWN Sep 11 00:29:30.589601 systemd-networkd[1350]: lxc_health: Lost carrier Sep 11 00:29:30.597798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a-rootfs.mount: Deactivated successfully. Sep 11 00:29:30.605381 systemd[1]: cri-containerd-ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a.scope: Deactivated successfully. Sep 11 00:29:30.605695 systemd[1]: cri-containerd-ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a.scope: Consumed 4.542s CPU time, 125.5M memory peak, 136K read from disk, 13.3M written to disk. 
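The "Consumed 4.542s CPU time, 125.5M memory peak, ..." line that systemd prints when the cri-containerd scope above is deactivated reflects the unit's cgroup-v2 accounting. A rough sketch of reading the same counters for a still-running scope; the cgroup path below is purely illustrative (the real slice hierarchy depends on the kubelet/systemd cgroup-driver layout), and memory.peak requires a reasonably recent kernel:

```python
from pathlib import Path

# Illustrative path only; adjust to the node's actual slice/scope hierarchy.
scope = Path("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice"
             "/kubepods-burstable-podb4e6fdec_7896_4428_b005_af26ddb5d9cb.slice"
             "/cri-containerd-ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a.scope")

cpu = dict(line.split() for line in (scope / "cpu.stat").read_text().splitlines())
peak = int((scope / "memory.peak").read_text())

print("CPU time:", int(cpu["usage_usec"]) / 1e6, "s")  # what systemd reports as "Consumed ... CPU time"
print("memory peak:", peak / 2**20, "MiB")
```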
Sep 11 00:29:30.607392 containerd[1746]: time="2025-09-11T00:29:30.607371485Z" level=info msg="received exit event container_id:\"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" id:\"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" pid:3798 exited_at:{seconds:1757550570 nanos:607174239}" Sep 11 00:29:30.607461 containerd[1746]: time="2025-09-11T00:29:30.607383383Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" id:\"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" pid:3798 exited_at:{seconds:1757550570 nanos:607174239}" Sep 11 00:29:30.620054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a-rootfs.mount: Deactivated successfully. Sep 11 00:29:30.649734 containerd[1746]: time="2025-09-11T00:29:30.649714860Z" level=info msg="StopContainer for \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" returns successfully" Sep 11 00:29:30.650323 containerd[1746]: time="2025-09-11T00:29:30.650119210Z" level=info msg="StopPodSandbox for \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\"" Sep 11 00:29:30.650323 containerd[1746]: time="2025-09-11T00:29:30.650162095Z" level=info msg="Container to stop \"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:29:30.650323 containerd[1746]: time="2025-09-11T00:29:30.650171626Z" level=info msg="Container to stop \"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:29:30.650323 containerd[1746]: time="2025-09-11T00:29:30.650180295Z" level=info msg="Container to stop \"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:29:30.650323 containerd[1746]: time="2025-09-11T00:29:30.650194091Z" level=info msg="Container to stop \"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:29:30.650323 containerd[1746]: time="2025-09-11T00:29:30.650201694Z" level=info msg="Container to stop \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:29:30.655076 containerd[1746]: time="2025-09-11T00:29:30.654645319Z" level=info msg="StopContainer for \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\" returns successfully" Sep 11 00:29:30.655076 containerd[1746]: time="2025-09-11T00:29:30.654883258Z" level=info msg="StopPodSandbox for \"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\"" Sep 11 00:29:30.655076 containerd[1746]: time="2025-09-11T00:29:30.654924613Z" level=info msg="Container to stop \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 11 00:29:30.654984 systemd[1]: cri-containerd-8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899.scope: Deactivated successfully. 
Sep 11 00:29:30.659041 containerd[1746]: time="2025-09-11T00:29:30.659020770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" id:\"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" pid:3318 exit_status:137 exited_at:{seconds:1757550570 nanos:658819809}" Sep 11 00:29:30.662926 systemd[1]: cri-containerd-2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54.scope: Deactivated successfully. Sep 11 00:29:30.680033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899-rootfs.mount: Deactivated successfully. Sep 11 00:29:30.687157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54-rootfs.mount: Deactivated successfully. Sep 11 00:29:30.700800 containerd[1746]: time="2025-09-11T00:29:30.700711483Z" level=info msg="shim disconnected" id=2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54 namespace=k8s.io Sep 11 00:29:30.700800 containerd[1746]: time="2025-09-11T00:29:30.700733841Z" level=warning msg="cleaning up after shim disconnected" id=2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54 namespace=k8s.io Sep 11 00:29:30.700800 containerd[1746]: time="2025-09-11T00:29:30.700740936Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 11 00:29:30.701838 containerd[1746]: time="2025-09-11T00:29:30.701742676Z" level=info msg="shim disconnected" id=8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899 namespace=k8s.io Sep 11 00:29:30.701838 containerd[1746]: time="2025-09-11T00:29:30.701765408Z" level=warning msg="cleaning up after shim disconnected" id=8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899 namespace=k8s.io Sep 11 00:29:30.701838 containerd[1746]: time="2025-09-11T00:29:30.701772128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 11 00:29:30.715805 containerd[1746]: time="2025-09-11T00:29:30.715729730Z" level=info msg="received exit event sandbox_id:\"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\" exit_status:137 exited_at:{seconds:1757550570 nanos:667738566}" Sep 11 00:29:30.716655 containerd[1746]: time="2025-09-11T00:29:30.716204427Z" level=info msg="TearDown network for sandbox \"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\" successfully" Sep 11 00:29:30.716655 containerd[1746]: time="2025-09-11T00:29:30.716225914Z" level=info msg="StopPodSandbox for \"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\" returns successfully" Sep 11 00:29:30.716882 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54-shm.mount: Deactivated successfully. 
Sep 11 00:29:30.719825 containerd[1746]: time="2025-09-11T00:29:30.719802169Z" level=info msg="received exit event sandbox_id:\"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" exit_status:137 exited_at:{seconds:1757550570 nanos:658819809}" Sep 11 00:29:30.720374 containerd[1746]: time="2025-09-11T00:29:30.720353096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\" id:\"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\" pid:3427 exit_status:137 exited_at:{seconds:1757550570 nanos:667738566}" Sep 11 00:29:30.720590 containerd[1746]: time="2025-09-11T00:29:30.720566242Z" level=info msg="TearDown network for sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" successfully" Sep 11 00:29:30.720659 containerd[1746]: time="2025-09-11T00:29:30.720592367Z" level=info msg="StopPodSandbox for \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" returns successfully" Sep 11 00:29:30.803629 kubelet[3162]: I0911 00:29:30.803515 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad318e34-f862-4f81-86ef-c4cfd183567e-cilium-config-path\") pod \"ad318e34-f862-4f81-86ef-c4cfd183567e\" (UID: \"ad318e34-f862-4f81-86ef-c4cfd183567e\") " Sep 11 00:29:30.803629 kubelet[3162]: I0911 00:29:30.803557 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cilium-config-path\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.803629 kubelet[3162]: I0911 00:29:30.803576 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-bpf-maps\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.803629 kubelet[3162]: I0911 00:29:30.803593 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-host-proc-sys-kernel\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.804440 kubelet[3162]: I0911 00:29:30.804017 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lxhj\" (UniqueName: \"kubernetes.io/projected/ad318e34-f862-4f81-86ef-c4cfd183567e-kube-api-access-9lxhj\") pod \"ad318e34-f862-4f81-86ef-c4cfd183567e\" (UID: \"ad318e34-f862-4f81-86ef-c4cfd183567e\") " Sep 11 00:29:30.804440 kubelet[3162]: I0911 00:29:30.804040 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cni-path\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.804440 kubelet[3162]: I0911 00:29:30.804062 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-xtables-lock\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.804440 kubelet[3162]: I0911 
00:29:30.804077 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-hostproc\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.804440 kubelet[3162]: I0911 00:29:30.804093 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cilium-run\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.804440 kubelet[3162]: I0911 00:29:30.804110 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-lib-modules\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.804628 kubelet[3162]: I0911 00:29:30.804130 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4e6fdec-7896-4428-b005-af26ddb5d9cb-clustermesh-secrets\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.804628 kubelet[3162]: I0911 00:29:30.804150 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-host-proc-sys-net\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.804628 kubelet[3162]: I0911 00:29:30.804168 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnx9h\" (UniqueName: \"kubernetes.io/projected/b4e6fdec-7896-4428-b005-af26ddb5d9cb-kube-api-access-rnx9h\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.804628 kubelet[3162]: I0911 00:29:30.804185 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-etc-cni-netd\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.804628 kubelet[3162]: I0911 00:29:30.804204 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cilium-cgroup\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.804628 kubelet[3162]: I0911 00:29:30.804221 3162 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4e6fdec-7896-4428-b005-af26ddb5d9cb-hubble-tls\") pod \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\" (UID: \"b4e6fdec-7896-4428-b005-af26ddb5d9cb\") " Sep 11 00:29:30.806504 kubelet[3162]: I0911 00:29:30.805671 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad318e34-f862-4f81-86ef-c4cfd183567e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ad318e34-f862-4f81-86ef-c4cfd183567e" (UID: "ad318e34-f862-4f81-86ef-c4cfd183567e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:29:30.806504 kubelet[3162]: I0911 00:29:30.805731 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-hostproc" (OuterVolumeSpecName: "hostproc") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:29:30.807019 kubelet[3162]: I0911 00:29:30.806997 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e6fdec-7896-4428-b005-af26ddb5d9cb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:29:30.807103 kubelet[3162]: I0911 00:29:30.807093 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:29:30.807157 kubelet[3162]: I0911 00:29:30.807149 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:29:30.808722 kubelet[3162]: I0911 00:29:30.808690 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:29:30.808790 kubelet[3162]: I0911 00:29:30.808738 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:29:30.808790 kubelet[3162]: I0911 00:29:30.808754 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:29:30.809385 kubelet[3162]: I0911 00:29:30.809365 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4e6fdec-7896-4428-b005-af26ddb5d9cb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:29:30.809482 kubelet[3162]: I0911 00:29:30.809472 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:29:30.811541 kubelet[3162]: I0911 00:29:30.811516 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e6fdec-7896-4428-b005-af26ddb5d9cb-kube-api-access-rnx9h" (OuterVolumeSpecName: "kube-api-access-rnx9h") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "kube-api-access-rnx9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:29:30.811903 kubelet[3162]: I0911 00:29:30.811828 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad318e34-f862-4f81-86ef-c4cfd183567e-kube-api-access-9lxhj" (OuterVolumeSpecName: "kube-api-access-9lxhj") pod "ad318e34-f862-4f81-86ef-c4cfd183567e" (UID: "ad318e34-f862-4f81-86ef-c4cfd183567e"). InnerVolumeSpecName "kube-api-access-9lxhj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:29:30.811976 kubelet[3162]: I0911 00:29:30.811844 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cni-path" (OuterVolumeSpecName: "cni-path") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:29:30.812027 kubelet[3162]: I0911 00:29:30.811852 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:29:30.812067 kubelet[3162]: I0911 00:29:30.811864 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:29:30.812112 kubelet[3162]: I0911 00:29:30.811874 3162 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b4e6fdec-7896-4428-b005-af26ddb5d9cb" (UID: "b4e6fdec-7896-4428-b005-af26ddb5d9cb"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 00:29:30.905230 kubelet[3162]: I0911 00:29:30.905209 3162 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad318e34-f862-4f81-86ef-c4cfd183567e-cilium-config-path\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905230 kubelet[3162]: I0911 00:29:30.905228 3162 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cilium-config-path\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905325 kubelet[3162]: I0911 00:29:30.905237 3162 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-bpf-maps\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905325 kubelet[3162]: I0911 00:29:30.905245 3162 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-host-proc-sys-kernel\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905325 kubelet[3162]: I0911 00:29:30.905253 3162 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9lxhj\" (UniqueName: \"kubernetes.io/projected/ad318e34-f862-4f81-86ef-c4cfd183567e-kube-api-access-9lxhj\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905325 kubelet[3162]: I0911 00:29:30.905277 3162 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cni-path\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905325 kubelet[3162]: I0911 00:29:30.905287 3162 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-xtables-lock\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905325 kubelet[3162]: I0911 00:29:30.905311 3162 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-hostproc\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905325 kubelet[3162]: I0911 00:29:30.905320 3162 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cilium-run\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905457 kubelet[3162]: I0911 00:29:30.905328 3162 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-lib-modules\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905457 kubelet[3162]: I0911 00:29:30.905336 3162 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4e6fdec-7896-4428-b005-af26ddb5d9cb-clustermesh-secrets\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905457 kubelet[3162]: I0911 00:29:30.905343 3162 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-host-proc-sys-net\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905457 kubelet[3162]: I0911 00:29:30.905353 3162 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rnx9h\" (UniqueName: \"kubernetes.io/projected/b4e6fdec-7896-4428-b005-af26ddb5d9cb-kube-api-access-rnx9h\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905457 kubelet[3162]: I0911 00:29:30.905360 3162 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-etc-cni-netd\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905457 kubelet[3162]: I0911 00:29:30.905368 3162 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4e6fdec-7896-4428-b005-af26ddb5d9cb-cilium-cgroup\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:30.905457 kubelet[3162]: I0911 00:29:30.905376 3162 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4e6fdec-7896-4428-b005-af26ddb5d9cb-hubble-tls\") on node \"ci-4372.1.0-n-1c5282f4e4\" DevicePath \"\"" Sep 11 00:29:31.219517 systemd[1]: Removed slice kubepods-besteffort-podad318e34_f862_4f81_86ef_c4cfd183567e.slice - libcontainer container kubepods-besteffort-podad318e34_f862_4f81_86ef_c4cfd183567e.slice. Sep 11 00:29:31.220680 systemd[1]: Removed slice kubepods-burstable-podb4e6fdec_7896_4428_b005_af26ddb5d9cb.slice - libcontainer container kubepods-burstable-podb4e6fdec_7896_4428_b005_af26ddb5d9cb.slice. Sep 11 00:29:31.220765 systemd[1]: kubepods-burstable-podb4e6fdec_7896_4428_b005_af26ddb5d9cb.slice: Consumed 4.602s CPU time, 126M memory peak, 136K read from disk, 13.3M written to disk. Sep 11 00:29:31.429840 kubelet[3162]: I0911 00:29:31.429751 3162 scope.go:117] "RemoveContainer" containerID="ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a" Sep 11 00:29:31.433417 containerd[1746]: time="2025-09-11T00:29:31.433388979Z" level=info msg="RemoveContainer for \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\"" Sep 11 00:29:31.446721 containerd[1746]: time="2025-09-11T00:29:31.446659085Z" level=info msg="RemoveContainer for \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" returns successfully" Sep 11 00:29:31.446927 kubelet[3162]: I0911 00:29:31.446904 3162 scope.go:117] "RemoveContainer" containerID="11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb" Sep 11 00:29:31.448595 containerd[1746]: time="2025-09-11T00:29:31.448569186Z" level=info msg="RemoveContainer for \"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\"" Sep 11 00:29:31.457215 containerd[1746]: time="2025-09-11T00:29:31.457169166Z" level=info msg="RemoveContainer for \"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\" returns successfully" Sep 11 00:29:31.457507 kubelet[3162]: I0911 00:29:31.457405 3162 scope.go:117] "RemoveContainer" containerID="d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b" Sep 11 00:29:31.459038 containerd[1746]: time="2025-09-11T00:29:31.459018424Z" level=info msg="RemoveContainer for \"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\"" Sep 11 00:29:31.466761 containerd[1746]: time="2025-09-11T00:29:31.466723502Z" level=info msg="RemoveContainer for \"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\" returns successfully" Sep 11 00:29:31.467923 kubelet[3162]: I0911 00:29:31.467828 3162 scope.go:117] "RemoveContainer" 
containerID="8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221" Sep 11 00:29:31.471208 containerd[1746]: time="2025-09-11T00:29:31.471143971Z" level=info msg="RemoveContainer for \"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\"" Sep 11 00:29:31.478160 containerd[1746]: time="2025-09-11T00:29:31.478128213Z" level=info msg="RemoveContainer for \"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\" returns successfully" Sep 11 00:29:31.478269 kubelet[3162]: I0911 00:29:31.478255 3162 scope.go:117] "RemoveContainer" containerID="b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec" Sep 11 00:29:31.479410 containerd[1746]: time="2025-09-11T00:29:31.479390025Z" level=info msg="RemoveContainer for \"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\"" Sep 11 00:29:31.485437 containerd[1746]: time="2025-09-11T00:29:31.485406096Z" level=info msg="RemoveContainer for \"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\" returns successfully" Sep 11 00:29:31.485656 kubelet[3162]: I0911 00:29:31.485540 3162 scope.go:117] "RemoveContainer" containerID="ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a" Sep 11 00:29:31.485789 containerd[1746]: time="2025-09-11T00:29:31.485750944Z" level=error msg="ContainerStatus for \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\": not found" Sep 11 00:29:31.485974 kubelet[3162]: E0911 00:29:31.485855 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\": not found" containerID="ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a" Sep 11 00:29:31.485974 kubelet[3162]: I0911 00:29:31.485872 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a"} err="failed to get container status \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff2211384d8a4b77de4cc5c191e87e91bd4e3ce5e74b4d40e5b1e78fc0629e3a\": not found" Sep 11 00:29:31.485974 kubelet[3162]: I0911 00:29:31.485905 3162 scope.go:117] "RemoveContainer" containerID="11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb" Sep 11 00:29:31.486146 containerd[1746]: time="2025-09-11T00:29:31.486122929Z" level=error msg="ContainerStatus for \"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\": not found" Sep 11 00:29:31.486231 kubelet[3162]: E0911 00:29:31.486217 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\": not found" containerID="11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb" Sep 11 00:29:31.486279 kubelet[3162]: I0911 00:29:31.486234 3162 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb"} err="failed to get container status \"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\": rpc error: code = NotFound desc = an error occurred when try to find container \"11c8f173cad4eb223b1a12f3e566a45656ef01dc5b291ecdaeb3921892643adb\": not found" Sep 11 00:29:31.486279 kubelet[3162]: I0911 00:29:31.486247 3162 scope.go:117] "RemoveContainer" containerID="d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b" Sep 11 00:29:31.486443 containerd[1746]: time="2025-09-11T00:29:31.486406887Z" level=error msg="ContainerStatus for \"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\": not found" Sep 11 00:29:31.486519 kubelet[3162]: E0911 00:29:31.486493 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\": not found" containerID="d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b" Sep 11 00:29:31.486552 kubelet[3162]: I0911 00:29:31.486520 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b"} err="failed to get container status \"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4634fe90c145e4ffa38d1e44a724690da092dec5398ff1c95538d8253504f8b\": not found" Sep 11 00:29:31.486552 kubelet[3162]: I0911 00:29:31.486542 3162 scope.go:117] "RemoveContainer" containerID="8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221" Sep 11 00:29:31.486702 containerd[1746]: time="2025-09-11T00:29:31.486669810Z" level=error msg="ContainerStatus for \"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\": not found" Sep 11 00:29:31.486799 kubelet[3162]: E0911 00:29:31.486786 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\": not found" containerID="8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221" Sep 11 00:29:31.486842 kubelet[3162]: I0911 00:29:31.486801 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221"} err="failed to get container status \"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ac9161779330e5f88406bf182e53b3f4ec2022911882aa377a134a0f0788221\": not found" Sep 11 00:29:31.486842 kubelet[3162]: I0911 00:29:31.486813 3162 scope.go:117] "RemoveContainer" containerID="b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec" Sep 11 00:29:31.486935 containerd[1746]: time="2025-09-11T00:29:31.486907753Z" level=error msg="ContainerStatus for \"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\": not found" Sep 11 00:29:31.486994 kubelet[3162]: E0911 00:29:31.486981 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\": not found" containerID="b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec" Sep 11 00:29:31.487019 kubelet[3162]: I0911 00:29:31.486994 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec"} err="failed to get container status \"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\": rpc error: code = NotFound desc = an error occurred when try to find container \"b50656033d78d6c316a809129739e3cd7c5a7e6d125cf9d37bfd3ad03d2fbfec\": not found" Sep 11 00:29:31.487019 kubelet[3162]: I0911 00:29:31.487005 3162 scope.go:117] "RemoveContainer" containerID="0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a" Sep 11 00:29:31.488039 containerd[1746]: time="2025-09-11T00:29:31.488015494Z" level=info msg="RemoveContainer for \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\"" Sep 11 00:29:31.517503 containerd[1746]: time="2025-09-11T00:29:31.517467482Z" level=info msg="RemoveContainer for \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\" returns successfully" Sep 11 00:29:31.517648 kubelet[3162]: I0911 00:29:31.517630 3162 scope.go:117] "RemoveContainer" containerID="0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a" Sep 11 00:29:31.517817 containerd[1746]: time="2025-09-11T00:29:31.517796753Z" level=error msg="ContainerStatus for \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\": not found" Sep 11 00:29:31.517925 kubelet[3162]: E0911 00:29:31.517910 3162 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\": not found" containerID="0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a" Sep 11 00:29:31.517966 kubelet[3162]: I0911 00:29:31.517928 3162 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a"} err="failed to get container status \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"0fc5f121d3b2627da4f3b665550ed99756ff6e78c691389ce919c54ddf3d7e9a\": not found" Sep 11 00:29:31.594834 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899-shm.mount: Deactivated successfully. Sep 11 00:29:31.594923 systemd[1]: var-lib-kubelet-pods-ad318e34\x2df862\x2d4f81\x2d86ef\x2dc4cfd183567e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9lxhj.mount: Deactivated successfully. 
Sep 11 00:29:31.594981 systemd[1]: var-lib-kubelet-pods-b4e6fdec\x2d7896\x2d4428\x2db005\x2daf26ddb5d9cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drnx9h.mount: Deactivated successfully. Sep 11 00:29:31.595030 systemd[1]: var-lib-kubelet-pods-b4e6fdec\x2d7896\x2d4428\x2db005\x2daf26ddb5d9cb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 11 00:29:31.595079 systemd[1]: var-lib-kubelet-pods-b4e6fdec\x2d7896\x2d4428\x2db005\x2daf26ddb5d9cb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 11 00:29:32.609877 sshd[4669]: Connection closed by 10.200.16.10 port 38172 Sep 11 00:29:32.610393 sshd-session[4667]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:32.613066 systemd[1]: sshd@20-10.200.8.50:22-10.200.16.10:38172.service: Deactivated successfully. Sep 11 00:29:32.614971 systemd[1]: session-23.scope: Deactivated successfully. Sep 11 00:29:32.615085 systemd-logind[1700]: Session 23 logged out. Waiting for processes to exit. Sep 11 00:29:32.616962 systemd-logind[1700]: Removed session 23. Sep 11 00:29:32.722051 systemd[1]: Started sshd@21-10.200.8.50:22-10.200.16.10:60636.service - OpenSSH per-connection server daemon (10.200.16.10:60636). Sep 11 00:29:33.207188 containerd[1746]: time="2025-09-11T00:29:33.207151028Z" level=info msg="StopPodSandbox for \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\"" Sep 11 00:29:33.207586 containerd[1746]: time="2025-09-11T00:29:33.207289953Z" level=info msg="TearDown network for sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" successfully" Sep 11 00:29:33.207586 containerd[1746]: time="2025-09-11T00:29:33.207301161Z" level=info msg="StopPodSandbox for \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" returns successfully" Sep 11 00:29:33.207648 containerd[1746]: time="2025-09-11T00:29:33.207597613Z" level=info msg="RemovePodSandbox for \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\"" Sep 11 00:29:33.207703 containerd[1746]: time="2025-09-11T00:29:33.207688412Z" level=info msg="Forcibly stopping sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\"" Sep 11 00:29:33.207807 containerd[1746]: time="2025-09-11T00:29:33.207789988Z" level=info msg="TearDown network for sandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" successfully" Sep 11 00:29:33.208634 containerd[1746]: time="2025-09-11T00:29:33.208605547Z" level=info msg="Ensure that sandbox 8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899 in task-service has been cleanup successfully" Sep 11 00:29:33.215551 kubelet[3162]: I0911 00:29:33.215526 3162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad318e34-f862-4f81-86ef-c4cfd183567e" path="/var/lib/kubelet/pods/ad318e34-f862-4f81-86ef-c4cfd183567e/volumes" Sep 11 00:29:33.215880 kubelet[3162]: I0911 00:29:33.215856 3162 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e6fdec-7896-4428-b005-af26ddb5d9cb" path="/var/lib/kubelet/pods/b4e6fdec-7896-4428-b005-af26ddb5d9cb/volumes" Sep 11 00:29:33.217190 containerd[1746]: time="2025-09-11T00:29:33.217167754Z" level=info msg="RemovePodSandbox \"8f440c0a4e17037443f624351a9ec4f171244f67aba83acef75eda6825d72899\" returns successfully" Sep 11 00:29:33.217546 containerd[1746]: time="2025-09-11T00:29:33.217526538Z" level=info msg="StopPodSandbox for 
\"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\"" Sep 11 00:29:33.217682 containerd[1746]: time="2025-09-11T00:29:33.217668308Z" level=info msg="TearDown network for sandbox \"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\" successfully" Sep 11 00:29:33.217711 containerd[1746]: time="2025-09-11T00:29:33.217680738Z" level=info msg="StopPodSandbox for \"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\" returns successfully" Sep 11 00:29:33.217915 containerd[1746]: time="2025-09-11T00:29:33.217902973Z" level=info msg="RemovePodSandbox for \"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\"" Sep 11 00:29:33.217979 containerd[1746]: time="2025-09-11T00:29:33.217966390Z" level=info msg="Forcibly stopping sandbox \"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\"" Sep 11 00:29:33.218046 containerd[1746]: time="2025-09-11T00:29:33.218035996Z" level=info msg="TearDown network for sandbox \"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\" successfully" Sep 11 00:29:33.218734 containerd[1746]: time="2025-09-11T00:29:33.218718388Z" level=info msg="Ensure that sandbox 2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54 in task-service has been cleanup successfully" Sep 11 00:29:33.227232 containerd[1746]: time="2025-09-11T00:29:33.227208855Z" level=info msg="RemovePodSandbox \"2f2fb29410b33964468b72c26b97f2b671f895fbf795e88f2a14a4eecad20e54\" returns successfully" Sep 11 00:29:33.266508 kubelet[3162]: E0911 00:29:33.266471 3162 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 11 00:29:33.356927 sshd[4820]: Accepted publickey for core from 10.200.16.10 port 60636 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:33.357931 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:33.361918 systemd-logind[1700]: New session 24 of user core. Sep 11 00:29:33.365922 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 11 00:29:34.183475 systemd[1]: Created slice kubepods-burstable-podd5c03811_d79c_4bfb_a2e9_712dc248891e.slice - libcontainer container kubepods-burstable-podd5c03811_d79c_4bfb_a2e9_712dc248891e.slice. 
Sep 11 00:29:34.222447 kubelet[3162]: I0911 00:29:34.222423 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5c03811-d79c-4bfb-a2e9-712dc248891e-etc-cni-netd\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222447 kubelet[3162]: I0911 00:29:34.222450 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5c03811-d79c-4bfb-a2e9-712dc248891e-xtables-lock\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222803 kubelet[3162]: I0911 00:29:34.222468 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5c03811-d79c-4bfb-a2e9-712dc248891e-cilium-config-path\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222803 kubelet[3162]: I0911 00:29:34.222496 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5c03811-d79c-4bfb-a2e9-712dc248891e-hubble-tls\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222803 kubelet[3162]: I0911 00:29:34.222517 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5c03811-d79c-4bfb-a2e9-712dc248891e-cilium-run\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222803 kubelet[3162]: I0911 00:29:34.222531 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5c03811-d79c-4bfb-a2e9-712dc248891e-cni-path\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222803 kubelet[3162]: I0911 00:29:34.222547 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d5c03811-d79c-4bfb-a2e9-712dc248891e-cilium-ipsec-secrets\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222803 kubelet[3162]: I0911 00:29:34.222564 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5c03811-d79c-4bfb-a2e9-712dc248891e-cilium-cgroup\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222895 kubelet[3162]: I0911 00:29:34.222581 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5c03811-d79c-4bfb-a2e9-712dc248891e-lib-modules\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222895 kubelet[3162]: I0911 00:29:34.222598 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/d5c03811-d79c-4bfb-a2e9-712dc248891e-clustermesh-secrets\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222895 kubelet[3162]: I0911 00:29:34.222627 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5c03811-d79c-4bfb-a2e9-712dc248891e-host-proc-sys-net\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222895 kubelet[3162]: I0911 00:29:34.222645 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5c03811-d79c-4bfb-a2e9-712dc248891e-host-proc-sys-kernel\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222895 kubelet[3162]: I0911 00:29:34.222661 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5c03811-d79c-4bfb-a2e9-712dc248891e-hostproc\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222895 kubelet[3162]: I0911 00:29:34.222678 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5c03811-d79c-4bfb-a2e9-712dc248891e-bpf-maps\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.222977 kubelet[3162]: I0911 00:29:34.222693 3162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgcs\" (UniqueName: \"kubernetes.io/projected/d5c03811-d79c-4bfb-a2e9-712dc248891e-kube-api-access-rqgcs\") pod \"cilium-l62js\" (UID: \"d5c03811-d79c-4bfb-a2e9-712dc248891e\") " pod="kube-system/cilium-l62js" Sep 11 00:29:34.223133 sshd[4824]: Connection closed by 10.200.16.10 port 60636 Sep 11 00:29:34.223782 sshd-session[4820]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:34.226493 systemd[1]: sshd@21-10.200.8.50:22-10.200.16.10:60636.service: Deactivated successfully. Sep 11 00:29:34.227916 systemd[1]: session-24.scope: Deactivated successfully. Sep 11 00:29:34.228484 systemd-logind[1700]: Session 24 logged out. Waiting for processes to exit. Sep 11 00:29:34.229819 systemd-logind[1700]: Removed session 24. Sep 11 00:29:34.346292 systemd[1]: Started sshd@22-10.200.8.50:22-10.200.16.10:60650.service - OpenSSH per-connection server daemon (10.200.16.10:60650). Sep 11 00:29:34.486478 containerd[1746]: time="2025-09-11T00:29:34.486447768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l62js,Uid:d5c03811-d79c-4bfb-a2e9-712dc248891e,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:34.517760 containerd[1746]: time="2025-09-11T00:29:34.517719835Z" level=info msg="connecting to shim 05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef" address="unix:///run/containerd/s/546f0dd470129a886e2fd6c772491c359ca392992b2294a116bbe81e2a5d14ed" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:34.539778 systemd[1]: Started cri-containerd-05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef.scope - libcontainer container 05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef. 
Sep 11 00:29:34.559083 containerd[1746]: time="2025-09-11T00:29:34.559061369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l62js,Uid:d5c03811-d79c-4bfb-a2e9-712dc248891e,Namespace:kube-system,Attempt:0,} returns sandbox id \"05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef\"" Sep 11 00:29:34.570967 containerd[1746]: time="2025-09-11T00:29:34.570936146Z" level=info msg="CreateContainer within sandbox \"05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 11 00:29:34.584930 containerd[1746]: time="2025-09-11T00:29:34.584355486Z" level=info msg="Container af21caad0d9df06c1c924fc441b8499f9ecea498cc75c71e0451ecf067c21020: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:34.598329 containerd[1746]: time="2025-09-11T00:29:34.598307277Z" level=info msg="CreateContainer within sandbox \"05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"af21caad0d9df06c1c924fc441b8499f9ecea498cc75c71e0451ecf067c21020\"" Sep 11 00:29:34.598743 containerd[1746]: time="2025-09-11T00:29:34.598677858Z" level=info msg="StartContainer for \"af21caad0d9df06c1c924fc441b8499f9ecea498cc75c71e0451ecf067c21020\"" Sep 11 00:29:34.599546 containerd[1746]: time="2025-09-11T00:29:34.599507284Z" level=info msg="connecting to shim af21caad0d9df06c1c924fc441b8499f9ecea498cc75c71e0451ecf067c21020" address="unix:///run/containerd/s/546f0dd470129a886e2fd6c772491c359ca392992b2294a116bbe81e2a5d14ed" protocol=ttrpc version=3 Sep 11 00:29:34.614745 systemd[1]: Started cri-containerd-af21caad0d9df06c1c924fc441b8499f9ecea498cc75c71e0451ecf067c21020.scope - libcontainer container af21caad0d9df06c1c924fc441b8499f9ecea498cc75c71e0451ecf067c21020. Sep 11 00:29:34.637911 containerd[1746]: time="2025-09-11T00:29:34.637888999Z" level=info msg="StartContainer for \"af21caad0d9df06c1c924fc441b8499f9ecea498cc75c71e0451ecf067c21020\" returns successfully" Sep 11 00:29:34.638675 systemd[1]: cri-containerd-af21caad0d9df06c1c924fc441b8499f9ecea498cc75c71e0451ecf067c21020.scope: Deactivated successfully. Sep 11 00:29:34.641354 containerd[1746]: time="2025-09-11T00:29:34.641332188Z" level=info msg="received exit event container_id:\"af21caad0d9df06c1c924fc441b8499f9ecea498cc75c71e0451ecf067c21020\" id:\"af21caad0d9df06c1c924fc441b8499f9ecea498cc75c71e0451ecf067c21020\" pid:4897 exited_at:{seconds:1757550574 nanos:641104457}" Sep 11 00:29:34.641513 containerd[1746]: time="2025-09-11T00:29:34.641344202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af21caad0d9df06c1c924fc441b8499f9ecea498cc75c71e0451ecf067c21020\" id:\"af21caad0d9df06c1c924fc441b8499f9ecea498cc75c71e0451ecf067c21020\" pid:4897 exited_at:{seconds:1757550574 nanos:641104457}" Sep 11 00:29:34.985336 sshd[4838]: Accepted publickey for core from 10.200.16.10 port 60650 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:34.986281 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:34.989608 systemd-logind[1700]: New session 25 of user core. Sep 11 00:29:34.993715 systemd[1]: Started session-25.scope - Session 25 of User core. 
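The exited_at fields in the exit events are plain Unix epoch seconds plus nanoseconds, so they can be cross-checked against the journal's own wall-clock prefix. For the mount-cgroup container above, seconds:1757550574 nanos:641104457 decodes to the same instant the surrounding entries carry, 2025-09-11 00:29:34.641 UTC; a two-line check:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the exit event for container af21caad0d9d... above.
        exitedAt := time.Unix(1757550574, 641104457).UTC()
        fmt.Println(exitedAt) // 2025-09-11 00:29:34.641104457 +0000 UTC
    }

The scope being "Deactivated successfully" right after StartContainer is normal here: mount-cgroup is a Cilium init step that does its work and exits immediately.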
Sep 11 00:29:35.431977 sshd[4930]: Connection closed by 10.200.16.10 port 60650 Sep 11 00:29:35.432595 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:35.434685 systemd[1]: sshd@22-10.200.8.50:22-10.200.16.10:60650.service: Deactivated successfully. Sep 11 00:29:35.436838 systemd-logind[1700]: Session 25 logged out. Waiting for processes to exit. Sep 11 00:29:35.437238 systemd[1]: session-25.scope: Deactivated successfully. Sep 11 00:29:35.438861 systemd-logind[1700]: Removed session 25. Sep 11 00:29:35.451171 containerd[1746]: time="2025-09-11T00:29:35.451145622Z" level=info msg="CreateContainer within sandbox \"05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 11 00:29:35.479757 containerd[1746]: time="2025-09-11T00:29:35.479707108Z" level=info msg="Container bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:35.500688 containerd[1746]: time="2025-09-11T00:29:35.500666532Z" level=info msg="CreateContainer within sandbox \"05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2\"" Sep 11 00:29:35.502484 containerd[1746]: time="2025-09-11T00:29:35.501766803Z" level=info msg="StartContainer for \"bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2\"" Sep 11 00:29:35.502707 containerd[1746]: time="2025-09-11T00:29:35.502672068Z" level=info msg="connecting to shim bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2" address="unix:///run/containerd/s/546f0dd470129a886e2fd6c772491c359ca392992b2294a116bbe81e2a5d14ed" protocol=ttrpc version=3 Sep 11 00:29:35.520770 systemd[1]: Started cri-containerd-bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2.scope - libcontainer container bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2. Sep 11 00:29:35.546816 systemd[1]: Started sshd@23-10.200.8.50:22-10.200.16.10:60656.service - OpenSSH per-connection server daemon (10.200.16.10:60656). Sep 11 00:29:35.549080 containerd[1746]: time="2025-09-11T00:29:35.549052833Z" level=info msg="StartContainer for \"bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2\" returns successfully" Sep 11 00:29:35.549993 systemd[1]: cri-containerd-bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2.scope: Deactivated successfully. Sep 11 00:29:35.550712 containerd[1746]: time="2025-09-11T00:29:35.550538205Z" level=info msg="received exit event container_id:\"bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2\" id:\"bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2\" pid:4949 exited_at:{seconds:1757550575 nanos:550325347}" Sep 11 00:29:35.551067 containerd[1746]: time="2025-09-11T00:29:35.551049014Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2\" id:\"bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2\" pid:4949 exited_at:{seconds:1757550575 nanos:550325347}" Sep 11 00:29:35.568918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd5ff6105d377665ce26a7d381e166e61d2fd150d5af9fe11b8241921853dcc2-rootfs.mount: Deactivated successfully. 
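Each of these Cilium init steps follows the same CRI pattern seen for mount-cgroup and now apply-sysctl-overwrites: CreateContainer inside the long-lived sandbox, StartContainer, a quick exit, then a rootfs.mount cleanup. A sketch of that create-and-start pair; the image reference and command are placeholders, since the journal records neither:

    package initrun

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // createAndStart models the "CreateContainer within sandbox ..." / "StartContainer ..." pairs in the journal.
    // sandboxID is the pod sandbox (05082e71377f... here); image and command are assumptions, not log facts.
    func createAndStart(ctx context.Context, client runtimeapi.RuntimeServiceClient,
        sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {
        created, err := client.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "apply-sysctl-overwrites", Attempt: 0},
                Image:    &runtimeapi.ImageSpec{Image: "example.invalid/cilium:placeholder"}, // placeholder, not from the log
                Command:  []string{"sh", "-c", "echo placeholder"},                           // placeholder, not from the log
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            return "", err
        }
        if _, err := client.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
            return created.ContainerId, err
        }
        return created.ContainerId, nil
    }
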
Sep 11 00:29:36.093686 kubelet[3162]: I0911 00:29:36.093640 3162 setters.go:618] "Node became not ready" node="ci-4372.1.0-n-1c5282f4e4" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-11T00:29:36Z","lastTransitionTime":"2025-09-11T00:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 11 00:29:36.185998 sshd[4965]: Accepted publickey for core from 10.200.16.10 port 60656 ssh2: RSA SHA256:WsqnZe1Vz7kcM6FJ5Bl6636L4nXESJA3OI736agNivA Sep 11 00:29:36.187467 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:36.193871 systemd-logind[1700]: New session 26 of user core. Sep 11 00:29:36.200928 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 11 00:29:36.454638 containerd[1746]: time="2025-09-11T00:29:36.454017143Z" level=info msg="CreateContainer within sandbox \"05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 11 00:29:36.471185 containerd[1746]: time="2025-09-11T00:29:36.471137407Z" level=info msg="Container 169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:36.475396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount345506444.mount: Deactivated successfully. Sep 11 00:29:36.485912 containerd[1746]: time="2025-09-11T00:29:36.485889725Z" level=info msg="CreateContainer within sandbox \"05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a\"" Sep 11 00:29:36.486329 containerd[1746]: time="2025-09-11T00:29:36.486231287Z" level=info msg="StartContainer for \"169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a\"" Sep 11 00:29:36.488078 containerd[1746]: time="2025-09-11T00:29:36.488045253Z" level=info msg="connecting to shim 169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a" address="unix:///run/containerd/s/546f0dd470129a886e2fd6c772491c359ca392992b2294a116bbe81e2a5d14ed" protocol=ttrpc version=3 Sep 11 00:29:36.507739 systemd[1]: Started cri-containerd-169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a.scope - libcontainer container 169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a. Sep 11 00:29:36.533379 systemd[1]: cri-containerd-169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a.scope: Deactivated successfully. 
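The mount units systemd keeps naming, such as var-lib-containerd-tmpmounts-containerd\x2dmount345506444.mount here and the var-lib-kubelet-pods-...kubernetes.io\x7eprojected-... units earlier, are filesystem paths run through systemd's unit-name escaping: '/' becomes '-', and any byte outside [A-Za-z0-9:_.] becomes \xNN, which is why '-' shows up as \x2d and '~' as \x7e. A rough approximation of that escaping (systemd-escape --path is the authoritative implementation; edge cases such as a leading dot are glossed over):

    package unitname

    import (
        "fmt"
        "strings"
    )

    // escapePath roughly mimics systemd path escaping as used for .mount unit names.
    func escapePath(path string) string {
        path = strings.Trim(path, "/")
        var b strings.Builder
        for i := 0; i < len(path); i++ {
            c := path[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
                c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c) // '-' -> \x2d, '~' -> \x7e
            }
        }
        return b.String() + ".mount"
    }

escapePath("/var/lib/containerd/tmpmounts/containerd-mount345506444") yields var-lib-containerd-tmpmounts-containerd\x2dmount345506444.mount, matching the unit deactivated above.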
Sep 11 00:29:36.540930 containerd[1746]: time="2025-09-11T00:29:36.540886918Z" level=info msg="received exit event container_id:\"169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a\" id:\"169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a\" pid:4997 exited_at:{seconds:1757550576 nanos:540546619}" Sep 11 00:29:36.542158 containerd[1746]: time="2025-09-11T00:29:36.542134596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a\" id:\"169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a\" pid:4997 exited_at:{seconds:1757550576 nanos:540546619}" Sep 11 00:29:36.543783 containerd[1746]: time="2025-09-11T00:29:36.543747829Z" level=info msg="StartContainer for \"169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a\" returns successfully" Sep 11 00:29:36.562938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-169cda7b04a1dda67133dd3de390d69f7d78323538de87806f916ec40d4ad63a-rootfs.mount: Deactivated successfully. Sep 11 00:29:37.458972 containerd[1746]: time="2025-09-11T00:29:37.458934248Z" level=info msg="CreateContainer within sandbox \"05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 11 00:29:37.476654 containerd[1746]: time="2025-09-11T00:29:37.475677761Z" level=info msg="Container 73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:37.478571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount739631249.mount: Deactivated successfully. Sep 11 00:29:37.490445 containerd[1746]: time="2025-09-11T00:29:37.490421439Z" level=info msg="CreateContainer within sandbox \"05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47\"" Sep 11 00:29:37.490866 containerd[1746]: time="2025-09-11T00:29:37.490832116Z" level=info msg="StartContainer for \"73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47\"" Sep 11 00:29:37.491757 containerd[1746]: time="2025-09-11T00:29:37.491721328Z" level=info msg="connecting to shim 73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47" address="unix:///run/containerd/s/546f0dd470129a886e2fd6c772491c359ca392992b2294a116bbe81e2a5d14ed" protocol=ttrpc version=3 Sep 11 00:29:37.511747 systemd[1]: Started cri-containerd-73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47.scope - libcontainer container 73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47. Sep 11 00:29:37.530061 systemd[1]: cri-containerd-73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47.scope: Deactivated successfully. 
Sep 11 00:29:37.531081 containerd[1746]: time="2025-09-11T00:29:37.531045833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47\" id:\"73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47\" pid:5042 exited_at:{seconds:1757550577 nanos:530820157}" Sep 11 00:29:37.536181 containerd[1746]: time="2025-09-11T00:29:37.536150163Z" level=info msg="received exit event container_id:\"73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47\" id:\"73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47\" pid:5042 exited_at:{seconds:1757550577 nanos:530820157}" Sep 11 00:29:37.548330 containerd[1746]: time="2025-09-11T00:29:37.548284710Z" level=info msg="StartContainer for \"73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47\" returns successfully" Sep 11 00:29:37.557150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73700208cf975c8783bc2534e5a481c201a035faaece65865ef50ef1bbaedd47-rootfs.mount: Deactivated successfully. Sep 11 00:29:38.267572 kubelet[3162]: E0911 00:29:38.267518 3162 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 11 00:29:38.463453 containerd[1746]: time="2025-09-11T00:29:38.463389041Z" level=info msg="CreateContainer within sandbox \"05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 11 00:29:38.479598 containerd[1746]: time="2025-09-11T00:29:38.478963640Z" level=info msg="Container 4bd473632761d092e51aa1c0e3558812029e7188f09be6520196ca773ec1315c: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:38.498301 containerd[1746]: time="2025-09-11T00:29:38.498276855Z" level=info msg="CreateContainer within sandbox \"05082e71377fb69c4502f49dcaaaa06fdad10004ae8822032b2541ba20741bef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4bd473632761d092e51aa1c0e3558812029e7188f09be6520196ca773ec1315c\"" Sep 11 00:29:38.498777 containerd[1746]: time="2025-09-11T00:29:38.498756536Z" level=info msg="StartContainer for \"4bd473632761d092e51aa1c0e3558812029e7188f09be6520196ca773ec1315c\"" Sep 11 00:29:38.499787 containerd[1746]: time="2025-09-11T00:29:38.499763586Z" level=info msg="connecting to shim 4bd473632761d092e51aa1c0e3558812029e7188f09be6520196ca773ec1315c" address="unix:///run/containerd/s/546f0dd470129a886e2fd6c772491c359ca392992b2294a116bbe81e2a5d14ed" protocol=ttrpc version=3 Sep 11 00:29:38.522749 systemd[1]: Started cri-containerd-4bd473632761d092e51aa1c0e3558812029e7188f09be6520196ca773ec1315c.scope - libcontainer container 4bd473632761d092e51aa1c0e3558812029e7188f09be6520196ca773ec1315c. 
Sep 11 00:29:38.549945 containerd[1746]: time="2025-09-11T00:29:38.549922173Z" level=info msg="StartContainer for \"4bd473632761d092e51aa1c0e3558812029e7188f09be6520196ca773ec1315c\" returns successfully" Sep 11 00:29:38.607580 containerd[1746]: time="2025-09-11T00:29:38.607543172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bd473632761d092e51aa1c0e3558812029e7188f09be6520196ca773ec1315c\" id:\"577465ea4783252d5be87814b8f033ef329693c64ee31e777890a7ec3350f5ab\" pid:5108 exited_at:{seconds:1757550578 nanos:607278900}" Sep 11 00:29:38.869996 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Sep 11 00:29:39.472847 kubelet[3162]: I0911 00:29:39.472210 3162 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l62js" podStartSLOduration=5.472197513 podStartE2EDuration="5.472197513s" podCreationTimestamp="2025-09-11 00:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:29:39.471649967 +0000 UTC m=+126.336210525" watchObservedRunningTime="2025-09-11 00:29:39.472197513 +0000 UTC m=+126.336758069" Sep 11 00:29:40.788910 containerd[1746]: time="2025-09-11T00:29:40.788870704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bd473632761d092e51aa1c0e3558812029e7188f09be6520196ca773ec1315c\" id:\"d3fbe30033be3f0286d2a22d2ab3f56db1da9e75402449fb81c1eb1140aed3ac\" pid:5430 exit_status:1 exited_at:{seconds:1757550580 nanos:788273744}" Sep 11 00:29:41.190433 systemd-networkd[1350]: lxc_health: Link UP Sep 11 00:29:41.198350 systemd-networkd[1350]: lxc_health: Gained carrier Sep 11 00:29:42.586806 systemd-networkd[1350]: lxc_health: Gained IPv6LL Sep 11 00:29:42.964912 containerd[1746]: time="2025-09-11T00:29:42.964865261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bd473632761d092e51aa1c0e3558812029e7188f09be6520196ca773ec1315c\" id:\"430451e844e8c1636ec2528a0b698ddf1a5a015a5e6de4a227c2793dfbb26617\" pid:5640 exited_at:{seconds:1757550582 nanos:964555755}" Sep 11 00:29:42.979631 kubelet[3162]: E0911 00:29:42.978791 3162 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:42206->127.0.0.1:36315: read tcp 127.0.0.1:42206->127.0.0.1:36315: read: connection reset by peer Sep 11 00:29:45.050607 containerd[1746]: time="2025-09-11T00:29:45.050471309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bd473632761d092e51aa1c0e3558812029e7188f09be6520196ca773ec1315c\" id:\"bc3b14218431c1f512b08116be265dedc162601b367821ce6759da38e5320e66\" pid:5673 exited_at:{seconds:1757550585 nanos:50036339}" Sep 11 00:29:47.129228 containerd[1746]: time="2025-09-11T00:29:47.129184444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bd473632761d092e51aa1c0e3558812029e7188f09be6520196ca773ec1315c\" id:\"d7e67b3e6d8c8d0fd0d78a7728c3db342ebaccbdb6d8c8bc599a1cfdb7829540\" pid:5697 exited_at:{seconds:1757550587 nanos:128959324}" Sep 11 00:29:47.235381 sshd[4984]: Connection closed by 10.200.16.10 port 60656 Sep 11 00:29:47.235866 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:47.238319 systemd[1]: sshd@23-10.200.8.50:22-10.200.16.10:60656.service: Deactivated successfully. Sep 11 00:29:47.239978 systemd[1]: session-26.scope: Deactivated successfully. Sep 11 00:29:47.241473 systemd-logind[1700]: Session 26 logged out. Waiting for processes to exit. 
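The podStartSLOduration of about 5.47 s reported above is simply observedRunningTime minus podCreationTimestamp; the pull timestamps are the zero value (0001-01-01), meaning no image pull was recorded for this start. The subtraction can be reproduced from the two timestamps in that entry (the monotonic m=+... suffix is dropped); the tracker samples its own clock, so it lands within a millisecond of this figure rather than matching digit for digit:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the pod_startup_latency_tracker entry above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-09-11 00:29:34 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-09-11 00:29:39.471649967 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // 5.471649967s, i.e. the ~5.47s podStartSLOduration
    }
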
Sep 11 00:29:47.242820 systemd-logind[1700]: Removed session 26.