Jul 6 23:28:49.057895 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:53:45 -00 2025
Jul 6 23:28:49.057934 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:28:49.057950 kernel: BIOS-provided physical RAM map:
Jul 6 23:28:49.057961 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 6 23:28:49.057971 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 6 23:28:49.057982 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jul 6 23:28:49.057996 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jul 6 23:28:49.058007 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 6 23:28:49.058021 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 6 23:28:49.058033 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 6 23:28:49.058043 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 6 23:28:49.058053 kernel: printk: bootconsole [earlyser0] enabled
Jul 6 23:28:49.058064 kernel: NX (Execute Disable) protection: active
Jul 6 23:28:49.058075 kernel: APIC: Static calls initialized
Jul 6 23:28:49.058094 kernel: efi: EFI v2.7 by Microsoft
Jul 6 23:28:49.058106 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018
Jul 6 23:28:49.058119 kernel: random: crng init done
Jul 6 23:28:49.058131 kernel: secureboot: Secure boot disabled
Jul 6 23:28:49.058143 kernel: SMBIOS 3.1.0 present.
Jul 6 23:28:49.058155 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jul 6 23:28:49.058167 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 6 23:28:49.058178 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jul 6 23:28:49.058191 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Jul 6 23:28:49.058204 kernel: Hyper-V: Nested features: 0x1e0101
Jul 6 23:28:49.058222 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 6 23:28:49.058234 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 6 23:28:49.058245 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 6 23:28:49.058258 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 6 23:28:49.058271 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jul 6 23:28:49.058284 kernel: tsc: Detected 2593.906 MHz processor
Jul 6 23:28:49.058297 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:28:49.058311 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:28:49.058324 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jul 6 23:28:49.058340 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 6 23:28:49.058352 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:28:49.058365 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jul 6 23:28:49.058377 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jul 6 23:28:49.058390 kernel: Using GB pages for direct mapping
Jul 6 23:28:49.058401 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:28:49.058416 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 6 23:28:49.058436 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:28:49.058453 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:28:49.058467 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 6 23:28:49.058480 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 6 23:28:49.058494 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:28:49.058508 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:28:49.058522 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:28:49.058539 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:28:49.058552 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:28:49.058565 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:28:49.058579 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 6 23:28:49.058592 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 6 23:28:49.058606 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jul 6 23:28:49.058620 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 6 23:28:49.058634 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 6 23:28:49.058647 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 6 23:28:49.058665 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 6 23:28:49.058679 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jul 6 23:28:49.058692 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jul 6 23:28:49.058706 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 6 23:28:49.058719 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jul 6 23:28:49.058733 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 6 23:28:49.058747 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 6 23:28:49.058760 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 6 23:28:49.058774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jul 6 23:28:49.058791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jul 6 23:28:49.058835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 6 23:28:49.058849 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 6 23:28:49.058862 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 6 23:28:49.058876 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 6 23:28:49.058890 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 6 23:28:49.058903 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 6 23:28:49.058917 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 6 23:28:49.058935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 6 23:28:49.058949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 6 23:28:49.058963 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jul 6 23:28:49.058976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jul 6 23:28:49.058990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jul 6 23:28:49.059003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jul 6 23:28:49.059017 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jul 6 23:28:49.059031 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jul 6 23:28:49.059045 kernel: Zone ranges:
Jul 6 23:28:49.059062 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:28:49.059076 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 6 23:28:49.059089 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 6 23:28:49.059103 kernel: Movable zone start for each node
Jul 6 23:28:49.059116 kernel: Early memory node ranges
Jul 6 23:28:49.059130 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 6 23:28:49.059143 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jul 6 23:28:49.059157 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 6 23:28:49.059170 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 6 23:28:49.059188 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 6 23:28:49.059201 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:28:49.059215 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 6 23:28:49.059228 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jul 6 23:28:49.059240 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 6 23:28:49.059253 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jul 6 23:28:49.059266 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:28:49.059278 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:28:49.059292 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:28:49.059310 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 6 23:28:49.059323 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:28:49.059336 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 6 23:28:49.059349 kernel: Booting paravirtualized kernel on Hyper-V
Jul 6 23:28:49.059363 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:28:49.059376 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:28:49.059390 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:28:49.059404 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:28:49.059418 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:28:49.059434 kernel: Hyper-V: PV spinlocks enabled
Jul 6 23:28:49.059448 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:28:49.059463 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:28:49.059477 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:28:49.059490 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 6 23:28:49.059503 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:28:49.059517 kernel: Fallback order for Node 0: 0
Jul 6 23:28:49.059530 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jul 6 23:28:49.059547 kernel: Policy zone: Normal
Jul 6 23:28:49.059573 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:28:49.059588 kernel: software IO TLB: area num 2.
Jul 6 23:28:49.059605 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43492K init, 1584K bss, 312164K reserved, 0K cma-reserved)
Jul 6 23:28:49.059620 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:28:49.059634 kernel: ftrace: allocating 37940 entries in 149 pages
Jul 6 23:28:49.059649 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:28:49.059663 kernel: Dynamic Preempt: voluntary
Jul 6 23:28:49.059677 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:28:49.059696 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:28:49.059711 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:28:49.059729 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:28:49.059743 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:28:49.059758 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:28:49.059773 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:28:49.059787 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:28:49.059810 kernel: Using NULL legacy PIC
Jul 6 23:28:49.059837 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 6 23:28:49.059851 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:28:49.059866 kernel: Console: colour dummy device 80x25
Jul 6 23:28:49.059880 kernel: printk: console [tty1] enabled
Jul 6 23:28:49.059895 kernel: printk: console [ttyS0] enabled
Jul 6 23:28:49.059909 kernel: printk: bootconsole [earlyser0] disabled
Jul 6 23:28:49.059923 kernel: ACPI: Core revision 20230628
Jul 6 23:28:49.059938 kernel: Failed to register legacy timer interrupt
Jul 6 23:28:49.059952 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:28:49.059970 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 6 23:28:49.059984 kernel: Hyper-V: Using IPI hypercalls
Jul 6 23:28:49.059999 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 6 23:28:49.060013 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 6 23:28:49.060028 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 6 23:28:49.060042 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 6 23:28:49.060057 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 6 23:28:49.060072 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 6 23:28:49.060086 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Jul 6 23:28:49.060104 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 6 23:28:49.060119 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 6 23:28:49.060133 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:28:49.060147 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:28:49.060162 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:28:49.060176 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 6 23:28:49.060191 kernel: RETBleed: Vulnerable
Jul 6 23:28:49.060205 kernel: Speculative Store Bypass: Vulnerable
Jul 6 23:28:49.060220 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:28:49.060234 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:28:49.060251 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 6 23:28:49.060266 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:28:49.060280 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:28:49.060294 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:28:49.060309 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 6 23:28:49.060323 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 6 23:28:49.060337 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 6 23:28:49.060352 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:28:49.060366 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 6 23:28:49.060380 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 6 23:28:49.060395 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 6 23:28:49.060413 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jul 6 23:28:49.060427 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:28:49.060441 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:28:49.060456 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:28:49.060470 kernel: landlock: Up and running.
Jul 6 23:28:49.060484 kernel: SELinux: Initializing.
Jul 6 23:28:49.060498 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:28:49.060512 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:28:49.060527 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 6 23:28:49.060542 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:28:49.060557 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:28:49.060576 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:28:49.060592 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 6 23:28:49.060606 kernel: signal: max sigframe size: 3632
Jul 6 23:28:49.060621 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:28:49.060635 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:28:49.060649 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 6 23:28:49.060664 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:28:49.060679 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:28:49.060693 kernel: .... node #0, CPUs: #1
Jul 6 23:28:49.060712 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jul 6 23:28:49.060728 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 6 23:28:49.060742 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:28:49.060756 kernel: smpboot: Max logical packages: 1
Jul 6 23:28:49.060771 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jul 6 23:28:49.060786 kernel: devtmpfs: initialized
Jul 6 23:28:49.060817 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:28:49.064471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 6 23:28:49.064504 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:28:49.064520 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:28:49.064535 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:28:49.064550 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:28:49.064565 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:28:49.064579 kernel: audit: type=2000 audit(1751844527.030:1): state=initialized audit_enabled=0 res=1
Jul 6 23:28:49.064594 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:28:49.064609 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:28:49.064623 kernel: cpuidle: using governor menu
Jul 6 23:28:49.064642 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:28:49.064657 kernel: dca service started, version 1.12.1
Jul 6 23:28:49.064672 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jul 6 23:28:49.064687 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:28:49.064702 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:28:49.064716 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:28:49.064731 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:28:49.064746 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:28:49.064760 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:28:49.064778 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:28:49.064792 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:28:49.064829 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:28:49.064844 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:28:49.064858 kernel: ACPI: Interpreter enabled
Jul 6 23:28:49.064873 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:28:49.064887 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:28:49.064902 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:28:49.064917 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 6 23:28:49.064936 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 6 23:28:49.064951 kernel: iommu: Default domain type: Translated
Jul 6 23:28:49.064966 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:28:49.064980 kernel: efivars: Registered efivars operations
Jul 6 23:28:49.064994 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:28:49.065009 kernel: PCI: System does not support PCI
Jul 6 23:28:49.065023 kernel: vgaarb: loaded
Jul 6 23:28:49.065038 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jul 6 23:28:49.065052 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:28:49.065071 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:28:49.065086 kernel: pnp: PnP ACPI init
Jul 6 23:28:49.065100 kernel: pnp: PnP ACPI: found 3 devices
Jul 6 23:28:49.065115 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:28:49.065129 kernel: NET: Registered PF_INET protocol family
Jul 6 23:28:49.065144 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:28:49.065159 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 6 23:28:49.065173 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:28:49.065188 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:28:49.065206 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 6 23:28:49.065221 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 6 23:28:49.065236 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 6 23:28:49.065251 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 6 23:28:49.065265 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:28:49.065280 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:28:49.065295 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:28:49.065310 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 6 23:28:49.065324 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Jul 6 23:28:49.065343 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 6 23:28:49.065359 kernel: Initialise system trusted keyrings
Jul 6 23:28:49.065373 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 6 23:28:49.065388 kernel: Key type asymmetric registered
Jul 6 23:28:49.065402 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:28:49.065416 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:28:49.065431 kernel: io scheduler mq-deadline registered
Jul 6 23:28:49.065445 kernel: io scheduler kyber registered
Jul 6 23:28:49.065460 kernel: io scheduler bfq registered
Jul 6 23:28:49.065478 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:28:49.065493 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:28:49.065507 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:28:49.065522 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 6 23:28:49.065537 kernel: i8042: PNP: No PS/2 controller found.
Jul 6 23:28:49.065726 kernel: rtc_cmos 00:02: registered as rtc0
Jul 6 23:28:49.065884 kernel: rtc_cmos 00:02: setting system clock to 2025-07-06T23:28:48 UTC (1751844528)
Jul 6 23:28:49.066014 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 6 23:28:49.066038 kernel: intel_pstate: CPU model not supported
Jul 6 23:28:49.066064 kernel: efifb: probing for efifb
Jul 6 23:28:49.066080 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 6 23:28:49.066094 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 6 23:28:49.066108 kernel: efifb: scrolling: redraw
Jul 6 23:28:49.066120 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 6 23:28:49.066134 kernel: Console: switching to colour frame buffer device 128x48
Jul 6 23:28:49.066148 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:28:49.066163 kernel: pstore: Using crash dump compression: deflate
Jul 6 23:28:49.066182 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 6 23:28:49.066197 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:28:49.066212 kernel: Segment Routing with IPv6
Jul 6 23:28:49.066226 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:28:49.066242 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:28:49.066256 kernel: Key type dns_resolver registered
Jul 6 23:28:49.066271 kernel: IPI shorthand broadcast: enabled
Jul 6 23:28:49.066286 kernel: sched_clock: Marking stable (927002800, 53579600)->(1235416200, -254833800)
Jul 6 23:28:49.066300 kernel: registered taskstats version 1
Jul 6 23:28:49.066319 kernel: Loading compiled-in X.509 certificates
Jul 6 23:28:49.066334 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: f74b958d282931d4f0d8d911dd18abd0ec707734'
Jul 6 23:28:49.066349 kernel: Key type .fscrypt registered
Jul 6 23:28:49.066363 kernel: Key type fscrypt-provisioning registered
Jul 6 23:28:49.066378 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:28:49.066393 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:28:49.066408 kernel: ima: No architecture policies found
Jul 6 23:28:49.066423 kernel: clk: Disabling unused clocks
Jul 6 23:28:49.066441 kernel: Freeing unused kernel image (initmem) memory: 43492K
Jul 6 23:28:49.066457 kernel: Write protecting the kernel read-only data: 38912k
Jul 6 23:28:49.066472 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jul 6 23:28:49.066487 kernel: Run /init as init process
Jul 6 23:28:49.066502 kernel: with arguments:
Jul 6 23:28:49.066516 kernel: /init
Jul 6 23:28:49.066531 kernel: with environment:
Jul 6 23:28:49.066544 kernel: HOME=/
Jul 6 23:28:49.066559 kernel: TERM=linux
Jul 6 23:28:49.066573 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:28:49.066593 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:28:49.066612 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:28:49.066628 systemd[1]: Detected virtualization microsoft.
Jul 6 23:28:49.066644 systemd[1]: Detected architecture x86-64.
Jul 6 23:28:49.066659 systemd[1]: Running in initrd.
Jul 6 23:28:49.066674 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:28:49.066691 systemd[1]: Hostname set to .
Jul 6 23:28:49.066710 systemd[1]: Initializing machine ID from random generator.
Jul 6 23:28:49.066725 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:28:49.066742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:28:49.066757 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:28:49.066774 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:28:49.066790 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:28:49.066883 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:28:49.066906 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:28:49.066923 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:28:49.066940 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:28:49.066956 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:28:49.066972 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:28:49.066987 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:28:49.067004 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:28:49.067020 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:28:49.067040 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:28:49.067056 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:28:49.067071 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:28:49.067088 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:28:49.067104 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:28:49.067120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:28:49.067135 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:28:49.067151 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:28:49.067171 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:28:49.067186 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:28:49.067202 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:28:49.067217 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:28:49.067233 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:28:49.067249 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:28:49.067266 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:28:49.067281 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:28:49.067323 systemd-journald[177]: Collecting audit messages is disabled.
Jul 6 23:28:49.067362 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:28:49.067379 systemd-journald[177]: Journal started
Jul 6 23:28:49.067419 systemd-journald[177]: Runtime Journal (/run/log/journal/b7584f113fe64d8c8566b3a0278e4816) is 8M, max 158.8M, 150.8M free.
Jul 6 23:28:49.048346 systemd-modules-load[178]: Inserted module 'overlay'
Jul 6 23:28:49.083959 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:28:49.084593 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:28:49.099530 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:28:49.099572 kernel: Bridge firewalling registered
Jul 6 23:28:49.090584 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:28:49.095955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:28:49.100289 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jul 6 23:28:49.101083 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:28:49.115986 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:28:49.122632 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:28:49.128273 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:28:49.129300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:28:49.145399 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:28:49.159511 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:28:49.160862 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:28:49.162824 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:28:49.166938 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:28:49.181042 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:28:49.187138 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:28:49.195350 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:28:49.215091 dracut-cmdline[215]: dracut-dracut-053
Jul 6 23:28:49.218100 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:28:49.247647 systemd-resolved[211]: Positive Trust Anchors:
Jul 6 23:28:49.247661 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:28:49.247717 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:28:49.272296 systemd-resolved[211]: Defaulting to hostname 'linux'.
Jul 6 23:28:49.273565 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:28:49.276270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:28:49.298819 kernel: SCSI subsystem initialized
Jul 6 23:28:49.308821 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:28:49.319823 kernel: iscsi: registered transport (tcp)
Jul 6 23:28:49.340753 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:28:49.340835 kernel: QLogic iSCSI HBA Driver
Jul 6 23:28:49.376758 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:28:49.385060 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:28:49.412744 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:28:49.412835 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:28:49.415624 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:28:49.456823 kernel: raid6: avx512x4 gen() 25974 MB/s
Jul 6 23:28:49.476820 kernel: raid6: avx512x2 gen() 26314 MB/s
Jul 6 23:28:49.495814 kernel: raid6: avx512x1 gen() 26361 MB/s
Jul 6 23:28:49.514812 kernel: raid6: avx2x4 gen() 21943 MB/s
Jul 6 23:28:49.533820 kernel: raid6: avx2x2 gen() 22996 MB/s
Jul 6 23:28:49.553271 kernel: raid6: avx2x1 gen() 20657 MB/s
Jul 6 23:28:49.553325 kernel: raid6: using algorithm avx512x1 gen() 26361 MB/s
Jul 6 23:28:49.574303 kernel: raid6: .... xor() 26092 MB/s, rmw enabled
Jul 6 23:28:49.574343 kernel: raid6: using avx512x2 recovery algorithm
Jul 6 23:28:49.597829 kernel: xor: automatically using best checksumming function avx
Jul 6 23:28:49.740828 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:28:49.750317 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:28:49.760026 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:28:49.776606 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jul 6 23:28:49.781842 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:28:49.794046 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:28:49.806843 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jul 6 23:28:49.832790 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:28:49.845977 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:28:49.887974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:28:49.904939 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:28:49.925373 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:28:49.931259 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:28:49.937344 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:28:49.940168 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:28:49.952992 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:28:49.973080 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:28:49.976310 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:28:49.998271 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:28:50.001256 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:28:50.004145 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:28:50.013565 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:28:50.013723 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:28:50.022491 kernel: hv_vmbus: Vmbus version:5.2 Jul 6 23:28:50.020088 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:28:50.034896 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:28:50.047188 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 6 23:28:50.047225 kernel: AES CTR mode by8 optimization enabled Jul 6 23:28:50.051405 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:28:50.051619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:28:50.061211 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:28:50.066969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:28:50.079150 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 6 23:28:50.079197 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 6 23:28:50.082018 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 6 23:28:50.091835 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 6 23:28:50.099852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:28:50.109217 kernel: PTP clock support registered Jul 6 23:28:50.113371 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:28:50.128342 kernel: hv_utils: Registering HyperV Utility Driver Jul 6 23:28:50.128388 kernel: hv_vmbus: registering driver hv_utils Jul 6 23:28:50.130669 kernel: hv_utils: Heartbeat IC version 3.0 Jul 6 23:28:50.133402 kernel: hv_utils: Shutdown IC version 3.2 Jul 6 23:28:50.133436 kernel: hv_utils: TimeSync IC version 4.0 Jul 6 23:28:50.662729 systemd-resolved[211]: Clock change detected. Flushing caches. 
Jul 6 23:28:50.671984 kernel: hv_vmbus: registering driver hv_storvsc Jul 6 23:28:50.675219 kernel: scsi host0: storvsc_host_t Jul 6 23:28:50.678222 kernel: scsi host1: storvsc_host_t Jul 6 23:28:50.678402 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:28:50.683300 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 6 23:28:50.686258 kernel: hv_vmbus: registering driver hv_netvsc Jul 6 23:28:50.689842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:28:50.699730 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 6 23:28:50.704219 kernel: hv_vmbus: registering driver hid_hyperv Jul 6 23:28:50.708980 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 6 23:28:50.709021 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 6 23:28:50.725768 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 6 23:28:50.725985 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 6 23:28:50.727256 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 6 23:28:50.747704 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 6 23:28:50.747936 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 6 23:28:50.751828 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 6 23:28:50.752052 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 6 23:28:50.755219 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 6 23:28:50.760252 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:28:50.760304 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 6 23:28:50.798262 kernel: hv_netvsc 7ced8d40-7504-7ced-8d40-75047ced8d40 eth0: VF slot 1 added Jul 6 23:28:50.808601 kernel: hv_vmbus: registering driver hv_pci Jul 6 23:28:50.808648 kernel: hv_pci 1667ea80-dfe0-45d8-ad62-0a5cc2337864: PCI VMBus probing: Using version 0x10004 
Jul 6 23:28:50.813226 kernel: hv_pci 1667ea80-dfe0-45d8-ad62-0a5cc2337864: PCI host bridge to bus dfe0:00 Jul 6 23:28:50.813386 kernel: pci_bus dfe0:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jul 6 23:28:50.817871 kernel: pci_bus dfe0:00: No busn resource found for root bus, will use [bus 00-ff] Jul 6 23:28:50.822401 kernel: pci dfe0:00:02.0: [15b3:1016] type 00 class 0x020000 Jul 6 23:28:50.826325 kernel: pci dfe0:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 6 23:28:50.830223 kernel: pci dfe0:00:02.0: enabling Extended Tags Jul 6 23:28:50.839218 kernel: pci dfe0:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at dfe0:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 6 23:28:50.844498 kernel: pci_bus dfe0:00: busn_res: [bus 00-ff] end is updated to 00 Jul 6 23:28:50.844678 kernel: pci dfe0:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 6 23:28:51.020551 kernel: mlx5_core dfe0:00:02.0: enabling device (0000 -> 0002) Jul 6 23:28:51.024237 kernel: mlx5_core dfe0:00:02.0: firmware version: 14.30.5000 Jul 6 23:28:51.109791 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 6 23:28:51.145249 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (444) Jul 6 23:28:51.162298 kernel: BTRFS: device fsid 25bdfe43-d649-4808-8940-e1722efc7a2e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (464) Jul 6 23:28:51.187604 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 6 23:28:51.211509 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 6 23:28:51.230587 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 6 23:28:51.237301 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. 
Jul 6 23:28:51.246402 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:28:51.262768 kernel: hv_netvsc 7ced8d40-7504-7ced-8d40-75047ced8d40 eth0: VF registering: eth1 Jul 6 23:28:51.263099 kernel: mlx5_core dfe0:00:02.0 eth1: joined to eth0 Jul 6 23:28:51.264226 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:28:51.269121 kernel: mlx5_core dfe0:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 6 23:28:51.275285 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:28:51.287260 kernel: mlx5_core dfe0:00:02.0 enP57312s1: renamed from eth1 Jul 6 23:28:52.285125 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:28:52.285282 disk-uuid[600]: The operation has completed successfully. Jul 6 23:28:52.364722 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:28:52.364856 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:28:52.421354 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:28:52.428803 sh[687]: Success Jul 6 23:28:52.455236 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 6 23:28:52.603127 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:28:52.612321 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:28:52.615461 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 6 23:28:52.631920 kernel: BTRFS info (device dm-0): first mount of filesystem 25bdfe43-d649-4808-8940-e1722efc7a2e Jul 6 23:28:52.631985 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:28:52.635294 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:28:52.638146 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:28:52.640460 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:28:52.793631 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:28:52.799455 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:28:52.808356 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:28:52.817351 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:28:52.836538 kernel: BTRFS info (device sda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5 Jul 6 23:28:52.836589 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:28:52.836611 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:28:52.852286 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:28:52.859243 kernel: BTRFS info (device sda6): last unmount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5 Jul 6 23:28:52.863073 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:28:52.870422 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:28:52.916277 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:28:52.926445 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 6 23:28:52.950827 systemd-networkd[868]: lo: Link UP Jul 6 23:28:52.950836 systemd-networkd[868]: lo: Gained carrier Jul 6 23:28:52.953542 systemd-networkd[868]: Enumeration completed Jul 6 23:28:52.953750 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:28:52.955977 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:28:52.955981 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:28:52.957134 systemd[1]: Reached target network.target - Network. Jul 6 23:28:53.022728 kernel: mlx5_core dfe0:00:02.0 enP57312s1: Link up Jul 6 23:28:53.052229 kernel: hv_netvsc 7ced8d40-7504-7ced-8d40-75047ced8d40 eth0: Data path switched to VF: enP57312s1 Jul 6 23:28:53.052902 systemd-networkd[868]: enP57312s1: Link UP Jul 6 23:28:53.053049 systemd-networkd[868]: eth0: Link UP Jul 6 23:28:53.053271 systemd-networkd[868]: eth0: Gained carrier Jul 6 23:28:53.053285 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 6 23:28:53.065421 systemd-networkd[868]: enP57312s1: Gained carrier Jul 6 23:28:53.094247 systemd-networkd[868]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 6 23:28:53.543812 ignition[799]: Ignition 2.20.0 Jul 6 23:28:53.543826 ignition[799]: Stage: fetch-offline Jul 6 23:28:53.543870 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:53.543880 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:28:53.543997 ignition[799]: parsed url from cmdline: "" Jul 6 23:28:53.544001 ignition[799]: no config URL provided Jul 6 23:28:53.544009 ignition[799]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:28:53.555843 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:28:53.544020 ignition[799]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:28:53.544028 ignition[799]: failed to fetch config: resource requires networking Jul 6 23:28:53.545843 ignition[799]: Ignition finished successfully Jul 6 23:28:53.569538 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 6 23:28:53.586253 ignition[879]: Ignition 2.20.0 Jul 6 23:28:53.586265 ignition[879]: Stage: fetch Jul 6 23:28:53.586469 ignition[879]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:53.586482 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:28:53.586578 ignition[879]: parsed url from cmdline: "" Jul 6 23:28:53.586581 ignition[879]: no config URL provided Jul 6 23:28:53.586588 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:28:53.586596 ignition[879]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:28:53.586623 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 6 23:28:53.670815 ignition[879]: GET result: OK Jul 6 23:28:53.670954 ignition[879]: config has been read from IMDS userdata Jul 6 23:28:53.670999 ignition[879]: parsing config with SHA512: 11aae43390236c70257324bad772192191142c1b827db45b601aae812a4556ba0af6b1886577405778f878cfe21737ab7c964590f1df39fe5057c9de74ce16e9 Jul 6 23:28:53.679737 unknown[879]: fetched base config from "system" Jul 6 23:28:53.679752 unknown[879]: fetched base config from "system" Jul 6 23:28:53.680236 ignition[879]: fetch: fetch complete Jul 6 23:28:53.679762 unknown[879]: fetched user config from "azure" Jul 6 23:28:53.680242 ignition[879]: fetch: fetch passed Jul 6 23:28:53.680289 ignition[879]: Ignition finished successfully Jul 6 23:28:53.690320 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 6 23:28:53.699350 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:28:53.717768 ignition[886]: Ignition 2.20.0 Jul 6 23:28:53.717779 ignition[886]: Stage: kargs Jul 6 23:28:53.717999 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:53.719918 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jul 6 23:28:53.718011 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:28:53.718877 ignition[886]: kargs: kargs passed Jul 6 23:28:53.718920 ignition[886]: Ignition finished successfully Jul 6 23:28:53.733975 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:28:53.748094 ignition[892]: Ignition 2.20.0 Jul 6 23:28:53.748105 ignition[892]: Stage: disks Jul 6 23:28:53.748340 ignition[892]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:53.748353 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:28:53.756134 ignition[892]: disks: disks passed Jul 6 23:28:53.756189 ignition[892]: Ignition finished successfully Jul 6 23:28:53.757006 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:28:53.764414 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:28:53.769541 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:28:53.769637 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:28:53.770028 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:28:53.770411 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:28:53.786385 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:28:53.826972 systemd-fsck[900]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 6 23:28:53.836617 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:28:53.852362 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:28:53.948231 kernel: EXT4-fs (sda9): mounted filesystem daab0c95-3783-44c0-bef8-9d61a5c53c14 r/w with ordered data mode. Quota mode: none. Jul 6 23:28:53.948776 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:28:53.953794 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Jul 6 23:28:53.983342 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:28:53.988745 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:28:53.998541 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (911) Jul 6 23:28:53.999377 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 6 23:28:54.017517 kernel: BTRFS info (device sda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5 Jul 6 23:28:54.017553 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:28:54.017573 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:28:54.017592 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:28:54.011079 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:28:54.011120 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:28:54.026088 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:28:54.028059 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:28:54.034145 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 6 23:28:54.461912 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:28:54.464798 coreos-metadata[913]: Jul 06 23:28:54.462 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 6 23:28:54.468354 coreos-metadata[913]: Jul 06 23:28:54.465 INFO Fetch successful Jul 6 23:28:54.468354 coreos-metadata[913]: Jul 06 23:28:54.465 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 6 23:28:54.475233 coreos-metadata[913]: Jul 06 23:28:54.474 INFO Fetch successful Jul 6 23:28:54.475233 coreos-metadata[913]: Jul 06 23:28:54.474 INFO wrote hostname ci-4230.2.1-a-d392076d12 to /sysroot/etc/hostname Jul 6 23:28:54.479057 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:28:54.499226 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:28:54.505900 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:28:54.521107 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:28:54.889467 systemd-networkd[868]: enP57312s1: Gained IPv6LL Jul 6 23:28:54.953414 systemd-networkd[868]: eth0: Gained IPv6LL Jul 6 23:28:55.136478 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:28:55.148428 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:28:55.155512 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:28:55.162833 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:28:55.168485 kernel: BTRFS info (device sda6): last unmount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5 Jul 6 23:28:55.193667 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 6 23:28:55.202615 ignition[1030]: INFO : Ignition 2.20.0 Jul 6 23:28:55.202615 ignition[1030]: INFO : Stage: mount Jul 6 23:28:55.206402 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:55.206402 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:28:55.206402 ignition[1030]: INFO : mount: mount passed Jul 6 23:28:55.206402 ignition[1030]: INFO : Ignition finished successfully Jul 6 23:28:55.205380 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:28:55.223348 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:28:55.230301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:28:55.247615 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1041) Jul 6 23:28:55.247668 kernel: BTRFS info (device sda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5 Jul 6 23:28:55.250628 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:28:55.252887 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:28:55.258228 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:28:55.259219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:28:55.283476 ignition[1058]: INFO : Ignition 2.20.0 Jul 6 23:28:55.283476 ignition[1058]: INFO : Stage: files Jul 6 23:28:55.287374 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:55.287374 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:28:55.287374 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:28:55.287374 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:28:55.287374 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:28:55.346550 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:28:55.350472 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:28:55.350472 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:28:55.347065 unknown[1058]: wrote ssh authorized keys file for user: core Jul 6 23:28:55.359922 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 6 23:28:55.364356 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 6 23:28:55.431407 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:28:55.778044 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 6 23:28:55.778044 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:28:55.789396 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 6 23:28:56.365740 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 6 23:28:56.618103 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:28:56.618103 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:28:56.629151 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:28:56.629151 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:28:56.629151 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:28:56.629151 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:28:56.649113 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:28:56.649113 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:28:56.649113 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:28:56.666221 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:28:56.666221 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:28:56.666221 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:28:56.666221 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:28:56.666221 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:28:56.666221 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 6 23:28:57.436620 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 6 23:28:57.704213 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:28:57.704213 ignition[1058]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 6 23:28:57.719551 ignition[1058]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:28:57.724514 ignition[1058]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:28:57.724514 ignition[1058]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 6 23:28:57.731539 ignition[1058]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:28:57.734803 ignition[1058]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:28:57.738141 ignition[1058]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:28:57.742286 ignition[1058]: 
INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:28:57.746148 ignition[1058]: INFO : files: files passed Jul 6 23:28:57.747931 ignition[1058]: INFO : Ignition finished successfully Jul 6 23:28:57.752029 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:28:57.763409 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:28:57.768880 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:28:57.775799 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:28:57.775909 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:28:57.792230 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:28:57.792230 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:28:57.799455 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:28:57.798221 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:28:57.809075 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:28:57.818330 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:28:57.842742 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:28:57.842847 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:28:57.848267 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:28:57.855796 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:28:57.858160 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
Jul 6 23:28:57.868392 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:28:57.882077 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:28:57.889718 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:28:57.901855 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:28:57.902022 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:28:57.902513 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:28:57.902861 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:28:57.902977 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:28:57.903615 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:28:57.904001 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:28:57.904359 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:28:57.904787 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:28:57.905135 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:28:57.905491 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:28:57.905847 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:28:57.906229 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:28:57.906700 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:28:57.907058 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:28:57.907882 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:28:57.908013 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:28:57.908636 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jul 6 23:28:57.909158 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:28:57.909567 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:28:57.943969 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:28:57.951140 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:28:57.963068 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:28:57.977384 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:28:57.977536 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:28:57.984168 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:28:57.984325 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:28:57.991022 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:28:58.065247 ignition[1111]: INFO : Ignition 2.20.0 Jul 6 23:28:58.065247 ignition[1111]: INFO : Stage: umount Jul 6 23:28:58.065247 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:28:58.065247 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 6 23:28:58.065247 ignition[1111]: INFO : umount: umount passed Jul 6 23:28:58.065247 ignition[1111]: INFO : Ignition finished successfully Jul 6 23:28:57.991169 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:28:58.021945 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:28:58.029421 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:28:58.038471 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:28:58.038650 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:28:58.046375 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jul 6 23:28:58.046561 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:28:58.053492 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:28:58.053585 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:28:58.060218 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:28:58.060325 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:28:58.082573 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:28:58.082628 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:28:58.086151 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:28:58.086216 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:28:58.090690 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:28:58.090746 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:28:58.095223 systemd[1]: Stopped target network.target - Network. Jul 6 23:28:58.099721 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:28:58.101990 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:28:58.124736 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:28:58.136504 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:28:58.140265 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:28:58.146389 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:28:58.148345 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:28:58.152431 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:28:58.152484 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:28:58.160922 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jul 6 23:28:58.160989 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:28:58.165705 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:28:58.165770 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:28:58.174360 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:28:58.174423 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:28:58.181120 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:28:58.183640 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:28:58.189270 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:28:58.189866 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:28:58.189970 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:28:58.193122 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:28:58.193239 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:28:58.204173 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:28:58.204297 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:28:58.209365 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:28:58.209539 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:28:58.209573 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:28:58.225465 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:28:58.228567 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:28:58.228631 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:28:58.233316 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 6 23:28:58.243187 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:28:58.243313 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:28:58.249367 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:28:58.260371 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:28:58.260502 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:28:58.265179 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:28:58.265251 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:28:58.267661 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:28:58.267714 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:28:58.271636 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:28:58.271698 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:28:58.271995 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:28:58.272134 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:28:58.311565 kernel: hv_netvsc 7ced8d40-7504-7ced-8d40-75047ced8d40 eth0: Data path switched from VF: enP57312s1 Jul 6 23:28:58.276403 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:28:58.276480 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:28:58.280638 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:28:58.280680 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:28:58.285384 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:28:58.285445 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jul 6 23:28:58.290128 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:28:58.290175 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:28:58.295224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:28:58.295278 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:28:58.319372 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:28:58.323390 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:28:58.323453 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:28:58.329171 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:28:58.329464 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:28:58.334257 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:28:58.334309 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:28:58.339077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:28:58.339125 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:28:58.370291 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:28:58.370363 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:28:58.370728 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:28:58.370835 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:28:58.383122 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:28:58.383236 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jul 6 23:28:58.389024 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:28:58.399361 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:28:58.432357 systemd[1]: Switching root. Jul 6 23:28:58.490875 systemd-journald[177]: Journal stopped Jul 6 23:29:03.001045 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Jul 6 23:29:03.001087 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:29:03.001100 kernel: SELinux: policy capability open_perms=1 Jul 6 23:29:03.001111 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:29:03.001119 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:29:03.001130 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:29:03.001140 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:29:03.001154 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:29:03.001167 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:29:03.001176 kernel: audit: type=1403 audit(1751844540.059:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:29:03.001188 systemd[1]: Successfully loaded SELinux policy in 95.672ms. Jul 6 23:29:03.001200 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.237ms. Jul 6 23:29:03.001228 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:29:03.001238 systemd[1]: Detected virtualization microsoft. Jul 6 23:29:03.001253 systemd[1]: Detected architecture x86-64. Jul 6 23:29:03.001266 systemd[1]: Detected first boot. Jul 6 23:29:03.001277 systemd[1]: Hostname set to . Jul 6 23:29:03.001289 systemd[1]: Initializing machine ID from random generator. 
Jul 6 23:29:03.001300 zram_generator::config[1156]: No configuration found. Jul 6 23:29:03.001314 kernel: Guest personality initialized and is inactive Jul 6 23:29:03.001326 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Jul 6 23:29:03.001336 kernel: Initialized host personality Jul 6 23:29:03.001347 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:29:03.001357 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:29:03.001369 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:29:03.001380 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:29:03.001392 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:29:03.001407 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:29:03.001419 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:29:03.001433 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:29:03.001446 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:29:03.001457 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:29:03.001470 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:29:03.001481 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:29:03.001496 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:29:03.001508 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:29:03.001519 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:29:03.001532 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 6 23:29:03.001543 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:29:03.001555 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:29:03.001571 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:29:03.001582 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:29:03.001595 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:29:03.001611 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:29:03.001621 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:29:03.001634 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:29:03.001646 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:29:03.001658 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:29:03.001671 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:29:03.001686 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:29:03.001704 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:29:03.001721 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:29:03.001738 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:29:03.001755 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:29:03.001772 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:29:03.001789 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:29:03.001810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 6 23:29:03.001828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:29:03.001846 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:29:03.001863 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:29:03.001881 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:29:03.001898 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:29:03.001916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:29:03.001937 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:29:03.001955 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:29:03.001973 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:29:03.001991 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:29:03.002009 systemd[1]: Reached target machines.target - Containers. Jul 6 23:29:03.002027 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:29:03.002046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:29:03.002063 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:29:03.002086 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:29:03.002104 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:29:03.002122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:29:03.002140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 6 23:29:03.002156 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:29:03.002174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:29:03.002194 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:29:03.002236 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:29:03.002262 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:29:03.002281 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:29:03.002299 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:29:03.002317 kernel: loop: module loaded Jul 6 23:29:03.002337 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:29:03.002355 kernel: fuse: init (API version 7.39) Jul 6 23:29:03.002373 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:29:03.002392 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:29:03.002414 kernel: ACPI: bus type drm_connector registered Jul 6 23:29:03.002429 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:29:03.002447 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:29:03.002464 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:29:03.002508 systemd-journald[1263]: Collecting audit messages is disabled. Jul 6 23:29:03.002549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 6 23:29:03.002567 systemd-journald[1263]: Journal started Jul 6 23:29:03.002601 systemd-journald[1263]: Runtime Journal (/run/log/journal/cdcc1ff0a6104effac5a76ac65a11253) is 8M, max 158.8M, 150.8M free. Jul 6 23:29:03.015536 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:29:03.015580 systemd[1]: Stopped verity-setup.service. Jul 6 23:29:03.015604 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:29:02.401594 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:29:02.416038 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:29:02.416550 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:29:03.022327 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:29:03.025876 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:29:03.028567 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:29:03.031248 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:29:03.034856 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:29:03.040516 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:29:03.043193 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:29:03.045846 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:29:03.048839 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:29:03.051915 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:29:03.052115 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:29:03.055045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 6 23:29:03.055305 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:29:03.058249 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:29:03.058485 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:29:03.061373 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:29:03.061597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:29:03.065279 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:29:03.065530 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:29:03.068641 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:29:03.068862 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:29:03.071981 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:29:03.075189 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:29:03.079262 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:29:03.082654 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:29:03.104672 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:29:03.114269 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:29:03.121336 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:29:03.124312 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:29:03.124443 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:29:03.131037 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Jul 6 23:29:03.138146 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:29:03.147372 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:29:03.150300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:29:03.153345 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:29:03.160909 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:29:03.163503 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:29:03.166394 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:29:03.170775 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:29:03.172761 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:29:03.177176 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:29:03.188540 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:29:03.196583 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:29:03.199692 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:29:03.202694 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:29:03.206713 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:29:03.216305 systemd-journald[1263]: Time spent on flushing to /var/log/journal/cdcc1ff0a6104effac5a76ac65a11253 is 37.279ms for 978 entries. 
Jul 6 23:29:03.216305 systemd-journald[1263]: System Journal (/var/log/journal/cdcc1ff0a6104effac5a76ac65a11253) is 8M, max 2.6G, 2.6G free. Jul 6 23:29:03.303375 systemd-journald[1263]: Received client request to flush runtime journal. Jul 6 23:29:03.303452 kernel: loop0: detected capacity change from 0 to 147912 Jul 6 23:29:03.222477 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:29:03.233787 udevadm[1306]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 6 23:29:03.239971 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:29:03.243015 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:29:03.253380 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:29:03.262257 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Jul 6 23:29:03.262278 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Jul 6 23:29:03.269757 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:29:03.282271 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:29:03.284943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:29:03.304826 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:29:03.339537 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:29:03.354042 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:29:03.366558 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:29:03.387349 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Jul 6 23:29:03.387372 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. 
Jul 6 23:29:03.391903 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:29:03.418530 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:29:03.559237 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:29:03.604242 kernel: loop1: detected capacity change from 0 to 138176 Jul 6 23:29:03.907235 kernel: loop2: detected capacity change from 0 to 28272 Jul 6 23:29:04.159240 kernel: loop3: detected capacity change from 0 to 224512 Jul 6 23:29:04.196232 kernel: loop4: detected capacity change from 0 to 147912 Jul 6 23:29:04.213323 kernel: loop5: detected capacity change from 0 to 138176 Jul 6 23:29:04.225232 kernel: loop6: detected capacity change from 0 to 28272 Jul 6 23:29:04.233231 kernel: loop7: detected capacity change from 0 to 224512 Jul 6 23:29:04.237875 (sd-merge)[1326]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 6 23:29:04.238474 (sd-merge)[1326]: Merged extensions into '/usr'. Jul 6 23:29:04.242115 systemd[1]: Reload requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:29:04.242133 systemd[1]: Reloading... Jul 6 23:29:04.325729 zram_generator::config[1354]: No configuration found. Jul 6 23:29:04.597390 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:29:04.681136 systemd[1]: Reloading finished in 438 ms. Jul 6 23:29:04.699297 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:29:04.702508 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:29:04.713867 systemd[1]: Starting ensure-sysext.service... Jul 6 23:29:04.720318 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jul 6 23:29:04.726408 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:29:04.755138 systemd[1]: Reload requested from client PID 1413 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:29:04.755155 systemd[1]: Reloading... Jul 6 23:29:04.773851 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:29:04.776352 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:29:04.778756 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:29:04.779189 systemd-tmpfiles[1414]: ACLs are not supported, ignoring. Jul 6 23:29:04.779301 systemd-tmpfiles[1414]: ACLs are not supported, ignoring. Jul 6 23:29:04.790482 systemd-udevd[1415]: Using default interface naming scheme 'v255'. Jul 6 23:29:04.792664 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:29:04.792673 systemd-tmpfiles[1414]: Skipping /boot Jul 6 23:29:04.809837 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:29:04.809859 systemd-tmpfiles[1414]: Skipping /boot Jul 6 23:29:04.873114 zram_generator::config[1445]: No configuration found. Jul 6 23:29:05.119814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 6 23:29:05.183788 kernel: mousedev: PS/2 mouse device common for all mice
Jul 6 23:29:05.208230 kernel: hv_vmbus: registering driver hyperv_fb
Jul 6 23:29:05.217256 kernel: hv_vmbus: registering driver hv_balloon
Jul 6 23:29:05.260253 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 6 23:29:05.267559 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 6 23:29:05.273121 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 6 23:29:05.275887 kernel: Console: switching to colour dummy device 80x25
Jul 6 23:29:05.282257 kernel: Console: switching to colour frame buffer device 128x48
Jul 6 23:29:05.282698 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 6 23:29:05.283463 systemd[1]: Reloading finished in 527 ms.
Jul 6 23:29:05.299100 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:29:05.304119 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:29:05.506503 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:29:05.518632 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:29:05.539517 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:29:05.545881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:29:05.549288 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:29:05.569477 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:29:05.578680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:29:05.589549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:29:05.592484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:29:05.592774 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:29:05.595101 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:29:05.607229 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jul 6 23:29:05.606364 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:29:05.635398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:29:05.639970 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:29:05.652489 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:29:05.666377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:29:05.669083 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:29:05.671275 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:29:05.676393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:29:05.677291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:29:05.684687 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:29:05.685271 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:29:05.691601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:29:05.692301 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:29:05.697794 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1510)
Jul 6 23:29:05.706754 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:29:05.708331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:29:05.763615 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:29:05.770897 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:29:05.770972 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:29:05.779781 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:29:05.789042 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:29:05.789967 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:29:05.799880 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:29:05.810388 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:29:05.867030 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 6 23:29:05.881377 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:29:05.893024 augenrules[1646]: No rules
Jul 6 23:29:05.893643 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:29:05.894290 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:29:05.900857 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:29:05.931684 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:29:05.939132 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 6 23:29:05.953435 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 6 23:29:05.972302 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:29:06.066309 systemd-resolved[1571]: Positive Trust Anchors:
Jul 6 23:29:06.066320 systemd-resolved[1571]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:29:06.066368 systemd-resolved[1571]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:29:06.067974 lvm[1661]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:29:06.073171 systemd-resolved[1571]: Using system hostname 'ci-4230.2.1-a-d392076d12'.
Jul 6 23:29:06.075877 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:29:06.076178 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:29:06.092660 systemd-networkd[1568]: lo: Link UP
Jul 6 23:29:06.092669 systemd-networkd[1568]: lo: Gained carrier
Jul 6 23:29:06.095494 systemd-networkd[1568]: Enumeration completed
Jul 6 23:29:06.096340 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:29:06.096375 systemd-networkd[1568]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:29:06.096381 systemd-networkd[1568]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:29:06.096838 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 6 23:29:06.097743 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:29:06.098935 systemd[1]: Reached target network.target - Network.
Jul 6 23:29:06.110774 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 6 23:29:06.114412 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 6 23:29:06.122110 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:29:06.123470 lvm[1667]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:29:06.161541 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 6 23:29:06.174244 kernel: mlx5_core dfe0:00:02.0 enP57312s1: Link up
Jul 6 23:29:06.193519 kernel: hv_netvsc 7ced8d40-7504-7ced-8d40-75047ced8d40 eth0: Data path switched to VF: enP57312s1
Jul 6 23:29:06.195069 systemd-networkd[1568]: enP57312s1: Link UP
Jul 6 23:29:06.196150 systemd-networkd[1568]: eth0: Link UP
Jul 6 23:29:06.196160 systemd-networkd[1568]: eth0: Gained carrier
Jul 6 23:29:06.196185 systemd-networkd[1568]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:29:06.197510 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 6 23:29:06.201611 systemd-networkd[1568]: enP57312s1: Gained carrier
Jul 6 23:29:06.241358 systemd-networkd[1568]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 6 23:29:06.297513 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:29:06.297949 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:29:06.351786 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:29:07.369537 systemd-networkd[1568]: enP57312s1: Gained IPv6LL
Jul 6 23:29:07.433356 systemd-networkd[1568]: eth0: Gained IPv6LL
Jul 6 23:29:07.436760 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:29:07.441352 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:29:07.687811 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:29:07.698040 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:29:07.706405 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:29:07.716276 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:29:07.719241 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:29:07.721730 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:29:07.724606 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:29:07.727590 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:29:07.730108 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:29:07.732983 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:29:07.735728 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:29:07.735754 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:29:07.737875 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:29:07.741118 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:29:07.744927 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:29:07.749750 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 6 23:29:07.752902 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 6 23:29:07.755781 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 6 23:29:07.760013 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:29:07.762856 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 6 23:29:07.766135 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:29:07.768678 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:29:07.771168 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:29:07.773222 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:29:07.773253 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:29:07.779292 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 6 23:29:07.784326 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:29:07.791938 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 6 23:29:07.797119 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:29:07.803322 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:29:07.812002 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:29:07.814420 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:29:07.814471 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jul 6 23:29:07.817473 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 6 23:29:07.820066 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 6 23:29:07.825346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:29:07.832931 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:29:07.836618 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:29:07.841338 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:29:07.849264 KVP[1688]: KVP starting; pid is:1688
Jul 6 23:29:07.851350 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:29:07.852380 jq[1686]: false
Jul 6 23:29:07.858417 kernel: hv_utils: KVP IC version 4.0
Jul 6 23:29:07.858301 KVP[1688]: KVP LIC Version: 3.1
Jul 6 23:29:07.859822 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:29:07.868723 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:29:07.876654 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:29:07.877193 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:29:07.885346 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:29:07.886989 (chronyd)[1682]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 6 23:29:07.890330 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:29:07.894535 chronyd[1703]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 6 23:29:07.901599 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:29:07.901856 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:29:07.910771 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:29:07.912261 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:29:07.934029 chronyd[1703]: Timezone right/UTC failed leap second check, ignoring
Jul 6 23:29:07.934961 chronyd[1703]: Loaded seccomp filter (level 2)
Jul 6 23:29:07.938530 systemd[1]: Started chronyd.service - NTP client/server.
Jul 6 23:29:07.943423 jq[1701]: true
Jul 6 23:29:07.949714 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:29:07.966564 (ntainerd)[1714]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:29:07.997238 update_engine[1699]: I20250706 23:29:07.996429 1699 main.cc:92] Flatcar Update Engine starting
Jul 6 23:29:07.998630 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:29:08.001986 tar[1707]: linux-amd64/LICENSE
Jul 6 23:29:08.001986 tar[1707]: linux-amd64/helm
Jul 6 23:29:08.002303 jq[1724]: true
Jul 6 23:29:07.998916 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found loop4
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found loop5
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found loop6
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found loop7
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found sda
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found sda1
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found sda2
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found sda3
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found usr
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found sda4
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found sda6
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found sda7
Jul 6 23:29:08.020064 extend-filesystems[1687]: Found sda9
Jul 6 23:29:08.020064 extend-filesystems[1687]: Checking size of /dev/sda9
Jul 6 23:29:08.034567 dbus-daemon[1685]: [system] SELinux support is enabled
Jul 6 23:29:08.141615 update_engine[1699]: I20250706 23:29:08.088164 1699 update_check_scheduler.cc:74] Next update check in 9m34s
Jul 6 23:29:08.034736 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:29:08.046304 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:29:08.144176 extend-filesystems[1687]: Old size kept for /dev/sda9
Jul 6 23:29:08.144176 extend-filesystems[1687]: Found sr0
Jul 6 23:29:08.046374 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:29:08.054369 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:29:08.054396 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:29:08.074746 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:29:08.087443 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:29:08.146117 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:29:08.148946 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:29:08.205236 bash[1757]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:29:08.209867 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:29:08.213629 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 6 23:29:08.240516 coreos-metadata[1684]: Jul 06 23:29:08.240 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 6 23:29:08.245245 coreos-metadata[1684]: Jul 06 23:29:08.244 INFO Fetch successful
Jul 6 23:29:08.245245 coreos-metadata[1684]: Jul 06 23:29:08.244 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 6 23:29:08.257232 coreos-metadata[1684]: Jul 06 23:29:08.253 INFO Fetch successful
Jul 6 23:29:08.257232 coreos-metadata[1684]: Jul 06 23:29:08.256 INFO Fetching http://168.63.129.16/machine/4d6ba873-9335-4b4c-877a-98a0024d98e5/09ee3f60%2D4d19%2D4e49%2D9ce8%2D501424959488.%5Fci%2D4230.2.1%2Da%2Dd392076d12?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 6 23:29:08.259377 coreos-metadata[1684]: Jul 06 23:29:08.259 INFO Fetch successful
Jul 6 23:29:08.260793 coreos-metadata[1684]: Jul 06 23:29:08.260 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 6 23:29:08.276124 coreos-metadata[1684]: Jul 06 23:29:08.274 INFO Fetch successful
Jul 6 23:29:08.277565 systemd-logind[1698]: New seat seat0.
Jul 6 23:29:08.295049 systemd-logind[1698]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 6 23:29:08.295583 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:29:08.318170 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1511)
Jul 6 23:29:08.362596 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 6 23:29:08.373123 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:29:08.519700 sshd_keygen[1733]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:29:08.567294 locksmithd[1744]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:29:08.599976 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:29:08.612356 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:29:08.621784 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 6 23:29:08.633140 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:29:08.633388 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:29:08.647362 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:29:08.695381 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 6 23:29:08.700818 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:29:08.714356 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:29:08.722109 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 6 23:29:08.726909 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:29:08.938739 tar[1707]: linux-amd64/README.md
Jul 6 23:29:08.954651 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:29:09.110561 containerd[1714]: time="2025-07-06T23:29:09.110472600Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 6 23:29:09.141434 containerd[1714]: time="2025-07-06T23:29:09.141389700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:29:09.143452 containerd[1714]: time="2025-07-06T23:29:09.143403600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:29:09.143452 containerd[1714]: time="2025-07-06T23:29:09.143448800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 6 23:29:09.143560 containerd[1714]: time="2025-07-06T23:29:09.143469900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 6 23:29:09.143655 containerd[1714]: time="2025-07-06T23:29:09.143635900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 6 23:29:09.143704 containerd[1714]: time="2025-07-06T23:29:09.143668100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 6 23:29:09.143799 containerd[1714]: time="2025-07-06T23:29:09.143769800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:29:09.143851 containerd[1714]: time="2025-07-06T23:29:09.143798700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:29:09.144116 containerd[1714]: time="2025-07-06T23:29:09.144082300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:29:09.144172 containerd[1714]: time="2025-07-06T23:29:09.144115200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 6 23:29:09.144172 containerd[1714]: time="2025-07-06T23:29:09.144133800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:29:09.144172 containerd[1714]: time="2025-07-06T23:29:09.144146700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 6 23:29:09.144313 containerd[1714]: time="2025-07-06T23:29:09.144280600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:29:09.144603 containerd[1714]: time="2025-07-06T23:29:09.144550100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:29:09.145151 containerd[1714]: time="2025-07-06T23:29:09.144816600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:29:09.145151 containerd[1714]: time="2025-07-06T23:29:09.144842200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 6 23:29:09.145151 containerd[1714]: time="2025-07-06T23:29:09.144955500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 6 23:29:09.145151 containerd[1714]: time="2025-07-06T23:29:09.145015600Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:29:09.156289 containerd[1714]: time="2025-07-06T23:29:09.155558200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 6 23:29:09.156289 containerd[1714]: time="2025-07-06T23:29:09.155610700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 6 23:29:09.156289 containerd[1714]: time="2025-07-06T23:29:09.155632300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 6 23:29:09.156289 containerd[1714]: time="2025-07-06T23:29:09.155665300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 6 23:29:09.156289 containerd[1714]: time="2025-07-06T23:29:09.155686300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 6 23:29:09.156289 containerd[1714]: time="2025-07-06T23:29:09.155825100Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 6 23:29:09.156289 containerd[1714]: time="2025-07-06T23:29:09.156093300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 6 23:29:09.156289 containerd[1714]: time="2025-07-06T23:29:09.156198400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 6 23:29:09.156289 containerd[1714]: time="2025-07-06T23:29:09.156233400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 6 23:29:09.156289 containerd[1714]: time="2025-07-06T23:29:09.156252000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 6 23:29:09.156289 containerd[1714]: time="2025-07-06T23:29:09.156270800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 6 23:29:09.156289 containerd[1714]: time="2025-07-06T23:29:09.156288300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156303100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156332900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156351400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156369700Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156385400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156401400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156429200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156448800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156467300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156484400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156500800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156518000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156533700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.156714 containerd[1714]: time="2025-07-06T23:29:09.156550400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156585900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156609300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156626500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156643400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156661100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156680800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156709400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156726600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156743500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156791700Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156814700Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156830200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 6 23:29:09.158149 containerd[1714]: time="2025-07-06T23:29:09.156846300Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 6 23:29:09.164690 containerd[1714]: time="2025-07-06T23:29:09.156859600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.164690 containerd[1714]: time="2025-07-06T23:29:09.156876000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 6 23:29:09.164690 containerd[1714]: time="2025-07-06T23:29:09.156889400Z" level=info msg="NRI interface is disabled by configuration."
Jul 6 23:29:09.164690 containerd[1714]: time="2025-07-06T23:29:09.156903000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 6 23:29:09.160054 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.157459500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false
NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.157531000Z" level=info msg="Connect containerd service" Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.157575200Z" level=info msg="using legacy CRI server" Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.157585800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.157732100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.158832400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.159276600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.159336900Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.159369200Z" level=info msg="Start subscribing containerd event" Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.159412200Z" level=info msg="Start recovering state" Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.159479200Z" level=info msg="Start event monitor" Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.159490900Z" level=info msg="Start snapshots syncer" Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.159502800Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.159512800Z" level=info msg="Start streaming server" Jul 6 23:29:09.164964 containerd[1714]: time="2025-07-06T23:29:09.159580300Z" level=info msg="containerd successfully booted in 0.050815s" Jul 6 23:29:09.735052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:29:09.738312 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:29:09.741690 systemd[1]: Startup finished in 643ms (firmware) + 18.475s (loader) + 1.071s (kernel) + 10.742s (initrd) + 9.775s (userspace) = 40.707s. 
Jul 6 23:29:09.749320 (kubelet)[1859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:29:10.130379 login[1846]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying
Jul 6 23:29:10.133645 login[1847]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 6 23:29:10.147148 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:29:10.155508 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:29:10.160551 systemd-logind[1698]: New session 2 of user core.
Jul 6 23:29:10.189104 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:29:10.197490 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:29:10.230987 (systemd)[1870]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:29:10.238465 systemd-logind[1698]: New session c1 of user core.
Jul 6 23:29:10.514651 systemd[1870]: Queued start job for default target default.target.
Jul 6 23:29:10.522386 systemd[1870]: Created slice app.slice - User Application Slice.
Jul 6 23:29:10.522420 systemd[1870]: Reached target paths.target - Paths.
Jul 6 23:29:10.522480 systemd[1870]: Reached target timers.target - Timers.
Jul 6 23:29:10.523816 systemd[1870]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 6 23:29:10.545774 systemd[1870]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 6 23:29:10.546086 systemd[1870]: Reached target sockets.target - Sockets.
Jul 6 23:29:10.546253 systemd[1870]: Reached target basic.target - Basic System.
Jul 6 23:29:10.546391 systemd[1870]: Reached target default.target - Main User Target.
Jul 6 23:29:10.546484 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 6 23:29:10.546518 systemd[1870]: Startup finished in 291ms.
Jul 6 23:29:10.554639 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 6 23:29:10.603404 waagent[1844]: 2025-07-06T23:29:10.603304Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jul 6 23:29:10.604926 waagent[1844]: 2025-07-06T23:29:10.604865Z INFO Daemon Daemon OS: flatcar 4230.2.1
Jul 6 23:29:10.605906 waagent[1844]: 2025-07-06T23:29:10.605859Z INFO Daemon Daemon Python: 3.11.11
Jul 6 23:29:10.607112 waagent[1844]: 2025-07-06T23:29:10.607066Z INFO Daemon Daemon Run daemon
Jul 6 23:29:10.607895 waagent[1844]: 2025-07-06T23:29:10.607858Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.1'
Jul 6 23:29:10.608802 waagent[1844]: 2025-07-06T23:29:10.608765Z INFO Daemon Daemon Using waagent for provisioning
Jul 6 23:29:10.609636 waagent[1844]: 2025-07-06T23:29:10.609598Z INFO Daemon Daemon Activate resource disk
Jul 6 23:29:10.610314 waagent[1844]: 2025-07-06T23:29:10.610276Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jul 6 23:29:10.617336 waagent[1844]: 2025-07-06T23:29:10.615832Z INFO Daemon Daemon Found device: None
Jul 6 23:29:10.617336 waagent[1844]: 2025-07-06T23:29:10.616412Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jul 6 23:29:10.617797 waagent[1844]: 2025-07-06T23:29:10.617762Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jul 6 23:29:10.618946 waagent[1844]: 2025-07-06T23:29:10.618906Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 6 23:29:10.619671 waagent[1844]: 2025-07-06T23:29:10.619635Z INFO Daemon Daemon Running default provisioning handler
Jul 6 23:29:10.643146 waagent[1844]: 2025-07-06T23:29:10.642802Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jul 6 23:29:10.644146 kubelet[1859]: E0706 23:29:10.644110 1859 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:29:10.647416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:29:10.647583 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:29:10.647945 systemd[1]: kubelet.service: Consumed 1.024s CPU time, 264.5M memory peak.
Jul 6 23:29:10.649322 waagent[1844]: 2025-07-06T23:29:10.649253Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul 6 23:29:10.653425 waagent[1844]: 2025-07-06T23:29:10.653349Z INFO Daemon Daemon cloud-init is enabled: False
Jul 6 23:29:10.655479 waagent[1844]: 2025-07-06T23:29:10.655369Z INFO Daemon Daemon Copying ovf-env.xml
Jul 6 23:29:10.715621 waagent[1844]: 2025-07-06T23:29:10.712743Z INFO Daemon Daemon Successfully mounted dvd
Jul 6 23:29:10.736008 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jul 6 23:29:10.738168 waagent[1844]: 2025-07-06T23:29:10.737623Z INFO Daemon Daemon Detect protocol endpoint
Jul 6 23:29:10.743317 waagent[1844]: 2025-07-06T23:29:10.739806Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 6 23:29:10.743317 waagent[1844]: 2025-07-06T23:29:10.740049Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jul 6 23:29:10.743317 waagent[1844]: 2025-07-06T23:29:10.740752Z INFO Daemon Daemon Test for route to 168.63.129.16
Jul 6 23:29:10.743317 waagent[1844]: 2025-07-06T23:29:10.741628Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jul 6 23:29:10.743317 waagent[1844]: 2025-07-06T23:29:10.742258Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jul 6 23:29:10.779164 waagent[1844]: 2025-07-06T23:29:10.779057Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jul 6 23:29:10.786381 waagent[1844]: 2025-07-06T23:29:10.779478Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jul 6 23:29:10.786381 waagent[1844]: 2025-07-06T23:29:10.780199Z INFO Daemon Daemon Server preferred version:2015-04-05
Jul 6 23:29:10.876053 waagent[1844]: 2025-07-06T23:29:10.875951Z INFO Daemon Daemon Initializing goal state during protocol detection
Jul 6 23:29:10.879089 waagent[1844]: 2025-07-06T23:29:10.879023Z INFO Daemon Daemon Forcing an update of the goal state.
Jul 6 23:29:10.885039 waagent[1844]: 2025-07-06T23:29:10.884982Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 6 23:29:10.901442 waagent[1844]: 2025-07-06T23:29:10.901390Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Jul 6 23:29:10.916697 waagent[1844]: 2025-07-06T23:29:10.901965Z INFO Daemon
Jul 6 23:29:10.916697 waagent[1844]: 2025-07-06T23:29:10.902796Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 246e3682-0eb4-4c71-abb9-a011bc3bd244 eTag: 13348317816678713739 source: Fabric]
Jul 6 23:29:10.916697 waagent[1844]: 2025-07-06T23:29:10.903792Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jul 6 23:29:10.916697 waagent[1844]: 2025-07-06T23:29:10.904742Z INFO Daemon
Jul 6 23:29:10.916697 waagent[1844]: 2025-07-06T23:29:10.905441Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jul 6 23:29:10.916697 waagent[1844]: 2025-07-06T23:29:10.909838Z INFO Daemon Daemon Downloading artifacts profile blob
Jul 6 23:29:11.048552 waagent[1844]: 2025-07-06T23:29:11.048480Z INFO Daemon Downloaded certificate {'thumbprint': '4B0BE596FA4D2A9EEE6CFAED3F5CAA9437FFE4F3', 'hasPrivateKey': True}
Jul 6 23:29:11.054623 waagent[1844]: 2025-07-06T23:29:11.054564Z INFO Daemon Fetch goal state completed
Jul 6 23:29:11.097871 waagent[1844]: 2025-07-06T23:29:11.097790Z INFO Daemon Daemon Starting provisioning
Jul 6 23:29:11.100614 waagent[1844]: 2025-07-06T23:29:11.100538Z INFO Daemon Daemon Handle ovf-env.xml.
Jul 6 23:29:11.102936 waagent[1844]: 2025-07-06T23:29:11.102866Z INFO Daemon Daemon Set hostname [ci-4230.2.1-a-d392076d12]
Jul 6 23:29:11.125091 waagent[1844]: 2025-07-06T23:29:11.125015Z INFO Daemon Daemon Publish hostname [ci-4230.2.1-a-d392076d12]
Jul 6 23:29:11.127930 waagent[1844]: 2025-07-06T23:29:11.127864Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jul 6 23:29:11.132114 waagent[1844]: 2025-07-06T23:29:11.128270Z INFO Daemon Daemon Primary interface is [eth0]
Jul 6 23:29:11.132580 login[1846]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 6 23:29:11.138944 systemd-networkd[1568]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:29:11.138958 systemd-networkd[1568]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:29:11.139010 systemd-networkd[1568]: eth0: DHCP lease lost
Jul 6 23:29:11.140727 systemd-logind[1698]: New session 1 of user core.
Jul 6 23:29:11.148516 waagent[1844]: 2025-07-06T23:29:11.141071Z INFO Daemon Daemon Create user account if not exists
Jul 6 23:29:11.148516 waagent[1844]: 2025-07-06T23:29:11.141443Z INFO Daemon Daemon User core already exists, skip useradd
Jul 6 23:29:11.148516 waagent[1844]: 2025-07-06T23:29:11.142156Z INFO Daemon Daemon Configure sudoer
Jul 6 23:29:11.148516 waagent[1844]: 2025-07-06T23:29:11.143121Z INFO Daemon Daemon Configure sshd
Jul 6 23:29:11.148516 waagent[1844]: 2025-07-06T23:29:11.143941Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jul 6 23:29:11.148516 waagent[1844]: 2025-07-06T23:29:11.144122Z INFO Daemon Daemon Deploy ssh public key.
Jul 6 23:29:11.157430 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 6 23:29:11.187863 systemd-networkd[1568]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 6 23:29:12.265984 waagent[1844]: 2025-07-06T23:29:12.265900Z INFO Daemon Daemon Provisioning complete
Jul 6 23:29:12.278323 waagent[1844]: 2025-07-06T23:29:12.278266Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jul 6 23:29:12.284372 waagent[1844]: 2025-07-06T23:29:12.278548Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jul 6 23:29:12.284372 waagent[1844]: 2025-07-06T23:29:12.279798Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jul 6 23:29:12.434398 waagent[1923]: 2025-07-06T23:29:12.434304Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jul 6 23:29:12.434826 waagent[1923]: 2025-07-06T23:29:12.434458Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.1
Jul 6 23:29:12.434826 waagent[1923]: 2025-07-06T23:29:12.434539Z INFO ExtHandler ExtHandler Python: 3.11.11
Jul 6 23:29:12.497490 waagent[1923]: 2025-07-06T23:29:12.497387Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jul 6 23:29:12.497750 waagent[1923]: 2025-07-06T23:29:12.497690Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 6 23:29:12.497865 waagent[1923]: 2025-07-06T23:29:12.497814Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 6 23:29:12.505710 waagent[1923]: 2025-07-06T23:29:12.505646Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 6 23:29:12.510460 waagent[1923]: 2025-07-06T23:29:12.510416Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Jul 6 23:29:12.510882 waagent[1923]: 2025-07-06T23:29:12.510832Z INFO ExtHandler
Jul 6 23:29:12.511007 waagent[1923]: 2025-07-06T23:29:12.510923Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: cb8c87bc-1822-4c2a-97df-806d48f0d145 eTag: 13348317816678713739 source: Fabric]
Jul 6 23:29:12.511327 waagent[1923]: 2025-07-06T23:29:12.511275Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jul 6 23:29:12.511894 waagent[1923]: 2025-07-06T23:29:12.511837Z INFO ExtHandler
Jul 6 23:29:12.511959 waagent[1923]: 2025-07-06T23:29:12.511919Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jul 6 23:29:12.515449 waagent[1923]: 2025-07-06T23:29:12.515408Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jul 6 23:29:12.588688 waagent[1923]: 2025-07-06T23:29:12.588612Z INFO ExtHandler Downloaded certificate {'thumbprint': '4B0BE596FA4D2A9EEE6CFAED3F5CAA9437FFE4F3', 'hasPrivateKey': True}
Jul 6 23:29:12.589170 waagent[1923]: 2025-07-06T23:29:12.589115Z INFO ExtHandler Fetch goal state completed
Jul 6 23:29:12.603500 waagent[1923]: 2025-07-06T23:29:12.603440Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1923
Jul 6 23:29:12.603645 waagent[1923]: 2025-07-06T23:29:12.603599Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jul 6 23:29:12.605192 waagent[1923]: 2025-07-06T23:29:12.605135Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.1', '', 'Flatcar Container Linux by Kinvolk']
Jul 6 23:29:12.605565 waagent[1923]: 2025-07-06T23:29:12.605513Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jul 6 23:29:12.643399 waagent[1923]: 2025-07-06T23:29:12.643349Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jul 6 23:29:12.643625 waagent[1923]: 2025-07-06T23:29:12.643571Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jul 6 23:29:12.650930 waagent[1923]: 2025-07-06T23:29:12.650660Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jul 6 23:29:12.657667 systemd[1]: Reload requested from client PID 1936 ('systemctl') (unit waagent.service)...
Jul 6 23:29:12.657690 systemd[1]: Reloading...
Jul 6 23:29:12.759980 zram_generator::config[1978]: No configuration found.
Jul 6 23:29:12.882231 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:29:13.000137 systemd[1]: Reloading finished in 341 ms.
Jul 6 23:29:13.019224 waagent[1923]: 2025-07-06T23:29:13.016708Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jul 6 23:29:13.025473 systemd[1]: Reload requested from client PID 2032 ('systemctl') (unit waagent.service)...
Jul 6 23:29:13.025493 systemd[1]: Reloading...
Jul 6 23:29:13.129251 zram_generator::config[2074]: No configuration found.
Jul 6 23:29:13.259229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:29:13.376390 systemd[1]: Reloading finished in 350 ms.
Jul 6 23:29:13.395229 waagent[1923]: 2025-07-06T23:29:13.393124Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jul 6 23:29:13.395229 waagent[1923]: 2025-07-06T23:29:13.393339Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jul 6 23:29:14.526457 waagent[1923]: 2025-07-06T23:29:14.526366Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jul 6 23:29:14.527115 waagent[1923]: 2025-07-06T23:29:14.527044Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jul 6 23:29:14.527888 waagent[1923]: 2025-07-06T23:29:14.527834Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul 6 23:29:14.528311 waagent[1923]: 2025-07-06T23:29:14.528249Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul 6 23:29:14.528444 waagent[1923]: 2025-07-06T23:29:14.528391Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 6 23:29:14.528567 waagent[1923]: 2025-07-06T23:29:14.528518Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 6 23:29:14.528849 waagent[1923]: 2025-07-06T23:29:14.528796Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jul 6 23:29:14.528965 waagent[1923]: 2025-07-06T23:29:14.528915Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 6 23:29:14.529065 waagent[1923]: 2025-07-06T23:29:14.529025Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jul 6 23:29:14.529438 waagent[1923]: 2025-07-06T23:29:14.529393Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 6 23:29:14.529655 waagent[1923]: 2025-07-06T23:29:14.529603Z INFO EnvHandler ExtHandler Configure routes
Jul 6 23:29:14.529893 waagent[1923]: 2025-07-06T23:29:14.529837Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jul 6 23:29:14.530094 waagent[1923]: 2025-07-06T23:29:14.530050Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jul 6 23:29:14.530190 waagent[1923]: 2025-07-06T23:29:14.530141Z INFO EnvHandler ExtHandler Gateway:None
Jul 6 23:29:14.530327 waagent[1923]: 2025-07-06T23:29:14.530288Z INFO EnvHandler ExtHandler Routes:None
Jul 6 23:29:14.530679 waagent[1923]: 2025-07-06T23:29:14.530628Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 6 23:29:14.531349 waagent[1923]: 2025-07-06T23:29:14.531278Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jul 6 23:29:14.531549 waagent[1923]: 2025-07-06T23:29:14.531500Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul 6 23:29:14.531549 waagent[1923]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jul 6 23:29:14.531549 waagent[1923]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Jul 6 23:29:14.531549 waagent[1923]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jul 6 23:29:14.531549 waagent[1923]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jul 6 23:29:14.531549 waagent[1923]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 6 23:29:14.531549 waagent[1923]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 6 23:29:14.540812 waagent[1923]: 2025-07-06T23:29:14.539378Z INFO ExtHandler ExtHandler
Jul 6 23:29:14.540812 waagent[1923]: 2025-07-06T23:29:14.539476Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bf93e5d9-5253-4cc2-a87d-015521754dfe correlation 22d1a285-1105-4cc0-b252-552d5fb05ff2 created: 2025-07-06T23:28:20.354240Z]
Jul 6 23:29:14.540812 waagent[1923]: 2025-07-06T23:29:14.539888Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jul 6 23:29:14.540812 waagent[1923]: 2025-07-06T23:29:14.540589Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jul 6 23:29:14.572750 waagent[1923]: 2025-07-06T23:29:14.572296Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8CEB64E1-89CA-4B45-880B-A79038574CF8;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jul 6 23:29:14.577498 waagent[1923]: 2025-07-06T23:29:14.577441Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 6 23:29:14.577498 waagent[1923]: Executing ['ip', '-a', '-o', 'link']:
Jul 6 23:29:14.577498 waagent[1923]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 6 23:29:14.577498 waagent[1923]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:40:75:04 brd ff:ff:ff:ff:ff:ff
Jul 6 23:29:14.577498 waagent[1923]: 3: enP57312s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:40:75:04 brd ff:ff:ff:ff:ff:ff\ altname enP57312p0s2
Jul 6 23:29:14.577498 waagent[1923]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 6 23:29:14.577498 waagent[1923]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 6 23:29:14.577498 waagent[1923]: 2: eth0 inet 10.200.8.45/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 6 23:29:14.577498 waagent[1923]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 6 23:29:14.577498 waagent[1923]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 6 23:29:14.577498 waagent[1923]: 2: eth0 inet6 fe80::7eed:8dff:fe40:7504/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 6 23:29:14.577498 waagent[1923]: 3: enP57312s1 inet6 fe80::7eed:8dff:fe40:7504/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 6 23:29:14.603679 waagent[1923]: 2025-07-06T23:29:14.603620Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jul 6 23:29:14.603679 waagent[1923]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 6 23:29:14.603679 waagent[1923]: pkts bytes target prot opt in out source destination
Jul 6 23:29:14.603679 waagent[1923]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 6 23:29:14.603679 waagent[1923]: pkts bytes target prot opt in out source destination
Jul 6 23:29:14.603679 waagent[1923]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 6 23:29:14.603679 waagent[1923]: pkts bytes target prot opt in out source destination
Jul 6 23:29:14.603679 waagent[1923]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 6 23:29:14.603679 waagent[1923]: 9 1060 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 6 23:29:14.603679 waagent[1923]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 6 23:29:14.608815 waagent[1923]: 2025-07-06T23:29:14.608758Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 6 23:29:14.608815 waagent[1923]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 6 23:29:14.608815 waagent[1923]: pkts bytes target prot opt in out source destination
Jul 6 23:29:14.608815 waagent[1923]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 6 23:29:14.608815 waagent[1923]: pkts bytes target prot opt in out source destination
Jul 6 23:29:14.608815 waagent[1923]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 6 23:29:14.608815 waagent[1923]: pkts bytes target prot opt in out source destination
Jul 6 23:29:14.608815 waagent[1923]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 6 23:29:14.608815 waagent[1923]: 19 2105 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 6 23:29:14.608815 waagent[1923]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 6 23:29:14.609192 waagent[1923]: 2025-07-06T23:29:14.609139Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jul 6 23:29:20.782166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:29:20.787474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:29:20.902657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:29:20.906585 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:29:21.693130 kubelet[2167]: E0706 23:29:21.693072 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:29:21.697225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:29:21.697455 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:29:21.697919 systemd[1]: kubelet.service: Consumed 148ms CPU time, 112.2M memory peak.
Jul 6 23:29:31.726585 chronyd[1703]: Selected source PHC0
Jul 6 23:29:31.782263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 6 23:29:31.787473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:29:31.897559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:29:31.901677 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:29:32.552433 kubelet[2182]: E0706 23:29:32.552370 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:29:32.555162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:29:32.555405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:29:32.555862 systemd[1]: kubelet.service: Consumed 146ms CPU time, 110.6M memory peak. Jul 6 23:29:38.537879 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:29:38.544551 systemd[1]: Started sshd@0-10.200.8.45:22-10.200.16.10:44648.service - OpenSSH per-connection server daemon (10.200.16.10:44648). Jul 6 23:29:39.280413 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 44648 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM Jul 6 23:29:39.282109 sshd-session[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:39.287423 systemd-logind[1698]: New session 3 of user core. Jul 6 23:29:39.297385 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:29:39.837594 systemd[1]: Started sshd@1-10.200.8.45:22-10.200.16.10:59594.service - OpenSSH per-connection server daemon (10.200.16.10:59594). Jul 6 23:29:40.465693 sshd[2195]: Accepted publickey for core from 10.200.16.10 port 59594 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM Jul 6 23:29:40.467450 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:40.471923 systemd-logind[1698]: New session 4 of user core. 
Jul 6 23:29:40.483602 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:29:40.909865 sshd[2197]: Connection closed by 10.200.16.10 port 59594 Jul 6 23:29:40.910841 sshd-session[2195]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:40.914286 systemd[1]: sshd@1-10.200.8.45:22-10.200.16.10:59594.service: Deactivated successfully. Jul 6 23:29:40.916612 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:29:40.918072 systemd-logind[1698]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:29:40.919126 systemd-logind[1698]: Removed session 4. Jul 6 23:29:41.026535 systemd[1]: Started sshd@2-10.200.8.45:22-10.200.16.10:59600.service - OpenSSH per-connection server daemon (10.200.16.10:59600). Jul 6 23:29:41.654493 sshd[2203]: Accepted publickey for core from 10.200.16.10 port 59600 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM Jul 6 23:29:41.656220 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:41.660628 systemd-logind[1698]: New session 5 of user core. Jul 6 23:29:41.672358 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:29:42.096475 sshd[2205]: Connection closed by 10.200.16.10 port 59600 Jul 6 23:29:42.097459 sshd-session[2203]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:42.100968 systemd[1]: sshd@2-10.200.8.45:22-10.200.16.10:59600.service: Deactivated successfully. Jul 6 23:29:42.103682 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:29:42.105559 systemd-logind[1698]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:29:42.106607 systemd-logind[1698]: Removed session 5. Jul 6 23:29:42.213695 systemd[1]: Started sshd@3-10.200.8.45:22-10.200.16.10:59608.service - OpenSSH per-connection server daemon (10.200.16.10:59608). Jul 6 23:29:42.728368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jul 6 23:29:42.734516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:29:42.838627 sshd[2211]: Accepted publickey for core from 10.200.16.10 port 59608 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM Jul 6 23:29:42.840047 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:42.844650 systemd-logind[1698]: New session 6 of user core. Jul 6 23:29:42.855384 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:29:43.197010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:29:43.202593 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:29:43.288755 sshd[2216]: Connection closed by 10.200.16.10 port 59608 Jul 6 23:29:43.289611 sshd-session[2211]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:43.294023 systemd[1]: sshd@3-10.200.8.45:22-10.200.16.10:59608.service: Deactivated successfully. Jul 6 23:29:43.296153 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:29:43.296921 systemd-logind[1698]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:29:43.297873 systemd-logind[1698]: Removed session 6. Jul 6 23:29:43.405498 systemd[1]: Started sshd@4-10.200.8.45:22-10.200.16.10:59618.service - OpenSSH per-connection server daemon (10.200.16.10:59618). 
Jul 6 23:29:43.488624 kubelet[2223]: E0706 23:29:43.488488 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:29:43.491493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:29:43.491703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:29:43.492145 systemd[1]: kubelet.service: Consumed 158ms CPU time, 107.8M memory peak. Jul 6 23:29:44.030941 sshd[2232]: Accepted publickey for core from 10.200.16.10 port 59618 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM Jul 6 23:29:44.032452 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:44.037005 systemd-logind[1698]: New session 7 of user core. Jul 6 23:29:44.045362 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:29:46.689359 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:29:46.689739 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:29:46.715706 sudo[2236]: pam_unix(sudo:session): session closed for user root Jul 6 23:29:46.815996 sshd[2235]: Connection closed by 10.200.16.10 port 59618 Jul 6 23:29:46.817178 sshd-session[2232]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:46.820881 systemd[1]: sshd@4-10.200.8.45:22-10.200.16.10:59618.service: Deactivated successfully. Jul 6 23:29:46.823272 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:29:46.824829 systemd-logind[1698]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:29:46.825804 systemd-logind[1698]: Removed session 7. 
Jul 6 23:29:46.932743 systemd[1]: Started sshd@5-10.200.8.45:22-10.200.16.10:59634.service - OpenSSH per-connection server daemon (10.200.16.10:59634). Jul 6 23:29:47.560345 sshd[2242]: Accepted publickey for core from 10.200.16.10 port 59634 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM Jul 6 23:29:47.561835 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:47.567318 systemd-logind[1698]: New session 8 of user core. Jul 6 23:29:47.573494 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:29:47.906139 sudo[2246]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:29:47.906531 sudo[2246]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:29:47.910096 sudo[2246]: pam_unix(sudo:session): session closed for user root Jul 6 23:29:47.915134 sudo[2245]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:29:47.915500 sudo[2245]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:29:47.934787 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:29:47.961496 augenrules[2268]: No rules Jul 6 23:29:47.962926 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:29:47.963245 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:29:47.964349 sudo[2245]: pam_unix(sudo:session): session closed for user root Jul 6 23:29:48.064790 sshd[2244]: Connection closed by 10.200.16.10 port 59634 Jul 6 23:29:48.065597 sshd-session[2242]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:48.069839 systemd[1]: sshd@5-10.200.8.45:22-10.200.16.10:59634.service: Deactivated successfully. Jul 6 23:29:48.072011 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:29:48.072799 systemd-logind[1698]: Session 8 logged out. 
Waiting for processes to exit. Jul 6 23:29:48.073776 systemd-logind[1698]: Removed session 8. Jul 6 23:29:48.179697 systemd[1]: Started sshd@6-10.200.8.45:22-10.200.16.10:59644.service - OpenSSH per-connection server daemon (10.200.16.10:59644). Jul 6 23:29:48.805303 sshd[2277]: Accepted publickey for core from 10.200.16.10 port 59644 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM Jul 6 23:29:48.806793 sshd-session[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:48.812327 systemd-logind[1698]: New session 9 of user core. Jul 6 23:29:48.819566 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:29:49.159780 sudo[2280]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:29:49.160248 sudo[2280]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:29:50.596517 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:29:50.597791 (dockerd)[2296]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:29:52.235483 dockerd[2296]: time="2025-07-06T23:29:52.235153376Z" level=info msg="Starting up" Jul 6 23:29:52.598812 dockerd[2296]: time="2025-07-06T23:29:52.597702147Z" level=info msg="Loading containers: start." Jul 6 23:29:52.796239 kernel: Initializing XFRM netlink socket Jul 6 23:29:52.906477 systemd-networkd[1568]: docker0: Link UP Jul 6 23:29:52.942413 dockerd[2296]: time="2025-07-06T23:29:52.942369807Z" level=info msg="Loading containers: done." 
Jul 6 23:29:52.973188 dockerd[2296]: time="2025-07-06T23:29:52.973138869Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:29:52.973377 dockerd[2296]: time="2025-07-06T23:29:52.973252570Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 6 23:29:52.973436 dockerd[2296]: time="2025-07-06T23:29:52.973376072Z" level=info msg="Daemon has completed initialization" Jul 6 23:29:53.044965 dockerd[2296]: time="2025-07-06T23:29:53.044902514Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:29:53.045260 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:29:53.154518 update_engine[1699]: I20250706 23:29:53.153684 1699 update_attempter.cc:509] Updating boot flags... Jul 6 23:29:53.222538 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2493) Jul 6 23:29:53.376250 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jul 6 23:29:53.418475 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2405) Jul 6 23:29:53.507817 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 6 23:29:53.560958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:29:54.354811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:29:54.359149 (kubelet)[2601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:29:54.411093 kubelet[2601]: E0706 23:29:54.410997 2601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:29:54.415566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:29:54.415760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:29:54.416673 systemd[1]: kubelet.service: Consumed 153ms CPU time, 112.9M memory peak. Jul 6 23:29:55.071251 containerd[1714]: time="2025-07-06T23:29:55.071191182Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 6 23:29:55.687320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2243157857.mount: Deactivated successfully. 
Jul 6 23:29:57.443576 containerd[1714]: time="2025-07-06T23:29:57.443519846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:57.446987 containerd[1714]: time="2025-07-06T23:29:57.446926263Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053" Jul 6 23:29:57.450983 containerd[1714]: time="2025-07-06T23:29:57.450946683Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:57.460669 containerd[1714]: time="2025-07-06T23:29:57.460541532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:57.461887 containerd[1714]: time="2025-07-06T23:29:57.461702038Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.390445855s" Jul 6 23:29:57.461887 containerd[1714]: time="2025-07-06T23:29:57.461742038Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 6 23:29:57.462697 containerd[1714]: time="2025-07-06T23:29:57.462479742Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 6 23:29:59.111835 containerd[1714]: time="2025-07-06T23:29:59.111778694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:59.117153 containerd[1714]: time="2025-07-06T23:29:59.117094421Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920" Jul 6 23:29:59.124730 containerd[1714]: time="2025-07-06T23:29:59.124694860Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:59.131585 containerd[1714]: time="2025-07-06T23:29:59.131524294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:29:59.132713 containerd[1714]: time="2025-07-06T23:29:59.132569100Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.670054558s" Jul 6 23:29:59.132713 containerd[1714]: time="2025-07-06T23:29:59.132606000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 6 23:29:59.133285 containerd[1714]: time="2025-07-06T23:29:59.133259703Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 6 23:30:00.585783 containerd[1714]: time="2025-07-06T23:30:00.585727459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:00.588626 containerd[1714]: time="2025-07-06T23:30:00.588562273Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924" Jul 6 23:30:00.598157 containerd[1714]: time="2025-07-06T23:30:00.598102322Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:00.605153 containerd[1714]: time="2025-07-06T23:30:00.605103557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:00.606359 containerd[1714]: time="2025-07-06T23:30:00.606323263Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.47303606s" Jul 6 23:30:00.606359 containerd[1714]: time="2025-07-06T23:30:00.606354564Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 6 23:30:00.607139 containerd[1714]: time="2025-07-06T23:30:00.607116167Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 6 23:30:01.903219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345348941.mount: Deactivated successfully. 
Jul 6 23:30:02.472230 containerd[1714]: time="2025-07-06T23:30:02.472140813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:02.479177 containerd[1714]: time="2025-07-06T23:30:02.479105348Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371" Jul 6 23:30:02.483183 containerd[1714]: time="2025-07-06T23:30:02.483125268Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:02.487954 containerd[1714]: time="2025-07-06T23:30:02.487882592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:02.488656 containerd[1714]: time="2025-07-06T23:30:02.488456895Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.881291727s" Jul 6 23:30:02.488656 containerd[1714]: time="2025-07-06T23:30:02.488495796Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 6 23:30:02.489177 containerd[1714]: time="2025-07-06T23:30:02.488969898Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:30:03.031547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569468131.mount: Deactivated successfully. 
Jul 6 23:30:04.475101 containerd[1714]: time="2025-07-06T23:30:04.475046052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:04.481343 containerd[1714]: time="2025-07-06T23:30:04.481307402Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 6 23:30:04.484934 containerd[1714]: time="2025-07-06T23:30:04.484880931Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:04.489847 containerd[1714]: time="2025-07-06T23:30:04.489815471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:04.491345 containerd[1714]: time="2025-07-06T23:30:04.490940380Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.001934582s" Jul 6 23:30:04.491345 containerd[1714]: time="2025-07-06T23:30:04.490978180Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:30:04.491499 containerd[1714]: time="2025-07-06T23:30:04.491472684Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:30:04.531940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 6 23:30:04.537459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 6 23:30:04.658596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:30:04.663523 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:30:04.702817 kubelet[2739]: E0706 23:30:04.702765 2739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:30:04.705219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:30:04.705438 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:30:04.705865 systemd[1]: kubelet.service: Consumed 153ms CPU time, 108.2M memory peak. Jul 6 23:30:06.133348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602352270.mount: Deactivated successfully. 
Jul 6 23:30:06.157145 containerd[1714]: time="2025-07-06T23:30:06.157101617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:06.159259 containerd[1714]: time="2025-07-06T23:30:06.159184133Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 6 23:30:06.164177 containerd[1714]: time="2025-07-06T23:30:06.164111873Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:06.171319 containerd[1714]: time="2025-07-06T23:30:06.171270931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:06.172587 containerd[1714]: time="2025-07-06T23:30:06.171999637Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.680490752s" Jul 6 23:30:06.172587 containerd[1714]: time="2025-07-06T23:30:06.172035737Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:30:06.172817 containerd[1714]: time="2025-07-06T23:30:06.172792043Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 6 23:30:06.833530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088958582.mount: Deactivated successfully. 
Jul 6 23:30:09.287090 containerd[1714]: time="2025-07-06T23:30:09.287027457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:09.289636 containerd[1714]: time="2025-07-06T23:30:09.289572178Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jul 6 23:30:09.294883 containerd[1714]: time="2025-07-06T23:30:09.294822320Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:09.300367 containerd[1714]: time="2025-07-06T23:30:09.300331765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:09.301940 containerd[1714]: time="2025-07-06T23:30:09.301518674Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.128671331s" Jul 6 23:30:09.301940 containerd[1714]: time="2025-07-06T23:30:09.301554775Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 6 23:30:13.181236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:30:13.181499 systemd[1]: kubelet.service: Consumed 153ms CPU time, 108.2M memory peak. Jul 6 23:30:13.193490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:30:13.232439 systemd[1]: Reload requested from client PID 2831 ('systemctl') (unit session-9.scope)... 
Jul 6 23:30:13.232455 systemd[1]: Reloading... Jul 6 23:30:13.347282 zram_generator::config[2877]: No configuration found. Jul 6 23:30:13.483486 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:30:13.617246 systemd[1]: Reloading finished in 384 ms. Jul 6 23:30:14.522120 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:30:14.522287 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:30:14.522724 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:30:14.522799 systemd[1]: kubelet.service: Consumed 119ms CPU time, 97.3M memory peak. Jul 6 23:30:14.531653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:30:14.895162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:30:14.908476 (kubelet)[2947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:30:15.286406 kubelet[2947]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:30:15.286406 kubelet[2947]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:30:15.286406 kubelet[2947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:30:15.286406 kubelet[2947]: I0706 23:30:14.945584 2947 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:30:15.286406 kubelet[2947]: I0706 23:30:15.252573 2947 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 6 23:30:15.286406 kubelet[2947]: I0706 23:30:15.252604 2947 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:30:15.286406 kubelet[2947]: I0706 23:30:15.253179 2947 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 6 23:30:15.343396 kubelet[2947]: E0706 23:30:15.343337 2947 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:30:15.344893 kubelet[2947]: I0706 23:30:15.344227 2947 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:30:15.350985 kubelet[2947]: E0706 23:30:15.350950 2947 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 6 23:30:15.350985 kubelet[2947]: I0706 23:30:15.350984 2947 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 6 23:30:15.354574 kubelet[2947]: I0706 23:30:15.354552 2947 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:30:15.354806 kubelet[2947]: I0706 23:30:15.354771 2947 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:30:15.354976 kubelet[2947]: I0706 23:30:15.354803 2947 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-a-d392076d12","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:30:15.355143 kubelet[2947]: I0706 23:30:15.354985 2947 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:30:15.355143 kubelet[2947]: I0706 23:30:15.354998 2947 container_manager_linux.go:304] "Creating device plugin manager"
Jul 6 23:30:15.355143 kubelet[2947]: I0706 23:30:15.355142 2947 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:30:15.360103 kubelet[2947]: I0706 23:30:15.360080 2947 kubelet.go:446] "Attempting to sync node with API server"
Jul 6 23:30:15.362113 kubelet[2947]: I0706 23:30:15.362091 2947 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:30:15.362458 kubelet[2947]: I0706 23:30:15.362230 2947 kubelet.go:352] "Adding apiserver pod source"
Jul 6 23:30:15.362458 kubelet[2947]: I0706 23:30:15.362250 2947 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:30:15.364556 kubelet[2947]: W0706 23:30:15.363764 2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-d392076d12&limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused
Jul 6 23:30:15.364556 kubelet[2947]: E0706 23:30:15.363840 2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-d392076d12&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:30:15.365050 kubelet[2947]: W0706 23:30:15.365010 2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused
Jul 6 23:30:15.365930 kubelet[2947]: E0706 23:30:15.365169 2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:30:15.365930 kubelet[2947]: I0706 23:30:15.365278 2947 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 6 23:30:15.365930 kubelet[2947]: I0706 23:30:15.365789 2947 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:30:15.367089 kubelet[2947]: W0706 23:30:15.366543 2947 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:30:15.370014 kubelet[2947]: I0706 23:30:15.369989 2947 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 6 23:30:15.370091 kubelet[2947]: I0706 23:30:15.370053 2947 server.go:1287] "Started kubelet"
Jul 6 23:30:15.378944 kubelet[2947]: I0706 23:30:15.378336 2947 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:30:15.378944 kubelet[2947]: I0706 23:30:15.378782 2947 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:30:15.381981 kubelet[2947]: I0706 23:30:15.381963 2947 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:30:15.383388 kubelet[2947]: E0706 23:30:15.381024 2947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.45:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.45:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.1-a-d392076d12.184fcd6560cf5327 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.1-a-d392076d12,UID:ci-4230.2.1-a-d392076d12,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.1-a-d392076d12,},FirstTimestamp:2025-07-06 23:30:15.370003239 +0000 UTC m=+0.457977063,LastTimestamp:2025-07-06 23:30:15.370003239 +0000 UTC m=+0.457977063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.1-a-d392076d12,}"
Jul 6 23:30:15.386225 kubelet[2947]: I0706 23:30:15.384430 2947 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:30:15.386225 kubelet[2947]: I0706 23:30:15.385631 2947 server.go:479] "Adding debug handlers to kubelet server"
Jul 6 23:30:15.386986 kubelet[2947]: I0706 23:30:15.386965 2947 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:30:15.391009 kubelet[2947]: I0706 23:30:15.390973 2947 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 6 23:30:15.391268 kubelet[2947]: E0706 23:30:15.391246 2947 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-d392076d12\" not found"
Jul 6 23:30:15.392117 kubelet[2947]: E0706 23:30:15.392090 2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-d392076d12?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="200ms"
Jul 6 23:30:15.392600 kubelet[2947]: I0706 23:30:15.392576 2947 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:30:15.392797 kubelet[2947]: I0706 23:30:15.392776 2947 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:30:15.393632 kubelet[2947]: I0706 23:30:15.393616 2947 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:30:15.394062 kubelet[2947]: I0706 23:30:15.394046 2947 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 6 23:30:15.394603 kubelet[2947]: W0706 23:30:15.394562 2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused
Jul 6 23:30:15.394791 kubelet[2947]: E0706 23:30:15.394766 2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:30:15.395102 kubelet[2947]: E0706 23:30:15.395084 2947 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:30:15.395646 kubelet[2947]: I0706 23:30:15.395629 2947 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:30:15.415714 kubelet[2947]: I0706 23:30:15.415694 2947 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 6 23:30:15.415714 kubelet[2947]: I0706 23:30:15.415713 2947 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 6 23:30:15.415839 kubelet[2947]: I0706 23:30:15.415731 2947 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:30:15.419858 kubelet[2947]: I0706 23:30:15.419834 2947 policy_none.go:49] "None policy: Start"
Jul 6 23:30:15.419858 kubelet[2947]: I0706 23:30:15.419857 2947 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 6 23:30:15.419995 kubelet[2947]: I0706 23:30:15.419870 2947 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:30:15.429437 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 6 23:30:15.439662 kubelet[2947]: I0706 23:30:15.439595 2947 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:30:15.441780 kubelet[2947]: I0706 23:30:15.441253 2947 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:30:15.441780 kubelet[2947]: I0706 23:30:15.441284 2947 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 6 23:30:15.441780 kubelet[2947]: I0706 23:30:15.441329 2947 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 6 23:30:15.441780 kubelet[2947]: I0706 23:30:15.441363 2947 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 6 23:30:15.441780 kubelet[2947]: E0706 23:30:15.441424 2947 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:30:15.450125 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 6 23:30:15.454040 kubelet[2947]: W0706 23:30:15.453410 2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused
Jul 6 23:30:15.454040 kubelet[2947]: E0706 23:30:15.453479 2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:30:15.455651 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 6 23:30:15.468111 kubelet[2947]: I0706 23:30:15.468090 2947 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 6 23:30:15.469267 kubelet[2947]: I0706 23:30:15.468412 2947 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:30:15.469267 kubelet[2947]: I0706 23:30:15.468430 2947 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:30:15.469267 kubelet[2947]: I0706 23:30:15.468662 2947 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:30:15.470072 kubelet[2947]: E0706 23:30:15.470037 2947 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 6 23:30:15.470202 kubelet[2947]: E0706 23:30:15.470186 2947 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.1-a-d392076d12\" not found"
Jul 6 23:30:15.553376 systemd[1]: Created slice kubepods-burstable-pod85691432053ce3b1b48293a24af93e03.slice - libcontainer container kubepods-burstable-pod85691432053ce3b1b48293a24af93e03.slice.
Jul 6 23:30:15.569126 kubelet[2947]: E0706 23:30:15.568559 2947 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-d392076d12\" not found" node="ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.570076 kubelet[2947]: I0706 23:30:15.570058 2947 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.570900 kubelet[2947]: E0706 23:30:15.570872 2947 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.573416 systemd[1]: Created slice kubepods-burstable-pode19d766fdc6924dd146698fcb3c3131e.slice - libcontainer container kubepods-burstable-pode19d766fdc6924dd146698fcb3c3131e.slice.
Jul 6 23:30:15.575776 kubelet[2947]: E0706 23:30:15.575595 2947 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-d392076d12\" not found" node="ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.577687 systemd[1]: Created slice kubepods-burstable-pod149077b10eea7d936df0278e8df400ce.slice - libcontainer container kubepods-burstable-pod149077b10eea7d936df0278e8df400ce.slice.
Jul 6 23:30:15.584403 kubelet[2947]: E0706 23:30:15.584372 2947 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-d392076d12\" not found" node="ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.593360 kubelet[2947]: E0706 23:30:15.593323 2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-d392076d12?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="400ms"
Jul 6 23:30:15.594871 kubelet[2947]: I0706 23:30:15.594846 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85691432053ce3b1b48293a24af93e03-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-a-d392076d12\" (UID: \"85691432053ce3b1b48293a24af93e03\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.594962 kubelet[2947]: I0706 23:30:15.594880 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e19d766fdc6924dd146698fcb3c3131e-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-a-d392076d12\" (UID: \"e19d766fdc6924dd146698fcb3c3131e\") " pod="kube-system/kube-scheduler-ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.594962 kubelet[2947]: I0706 23:30:15.594906 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/149077b10eea7d936df0278e8df400ce-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-d392076d12\" (UID: \"149077b10eea7d936df0278e8df400ce\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.594962 kubelet[2947]: I0706 23:30:15.594927 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/149077b10eea7d936df0278e8df400ce-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-a-d392076d12\" (UID: \"149077b10eea7d936df0278e8df400ce\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.594962 kubelet[2947]: I0706 23:30:15.594949 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/149077b10eea7d936df0278e8df400ce-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-d392076d12\" (UID: \"149077b10eea7d936df0278e8df400ce\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.595123 kubelet[2947]: I0706 23:30:15.594970 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/149077b10eea7d936df0278e8df400ce-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-a-d392076d12\" (UID: \"149077b10eea7d936df0278e8df400ce\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.595123 kubelet[2947]: I0706 23:30:15.594996 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/149077b10eea7d936df0278e8df400ce-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-a-d392076d12\" (UID: \"149077b10eea7d936df0278e8df400ce\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.595123 kubelet[2947]: I0706 23:30:15.595018 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85691432053ce3b1b48293a24af93e03-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-a-d392076d12\" (UID: \"85691432053ce3b1b48293a24af93e03\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.595123 kubelet[2947]: I0706 23:30:15.595069 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85691432053ce3b1b48293a24af93e03-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-a-d392076d12\" (UID: \"85691432053ce3b1b48293a24af93e03\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.773350 kubelet[2947]: I0706 23:30:15.773313 2947 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.773792 kubelet[2947]: E0706 23:30:15.773759 2947 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4230.2.1-a-d392076d12"
Jul 6 23:30:15.870642 containerd[1714]: time="2025-07-06T23:30:15.870519701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-a-d392076d12,Uid:85691432053ce3b1b48293a24af93e03,Namespace:kube-system,Attempt:0,}"
Jul 6 23:30:15.877027 containerd[1714]: time="2025-07-06T23:30:15.876990332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-a-d392076d12,Uid:e19d766fdc6924dd146698fcb3c3131e,Namespace:kube-system,Attempt:0,}"
Jul 6 23:30:15.885850 containerd[1714]: time="2025-07-06T23:30:15.885814174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-a-d392076d12,Uid:149077b10eea7d936df0278e8df400ce,Namespace:kube-system,Attempt:0,}"
Jul 6 23:30:15.994548 kubelet[2947]: E0706 23:30:15.994498 2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-d392076d12?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="800ms"
Jul 6 23:30:16.176695 kubelet[2947]: I0706 23:30:16.176389 2947 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-d392076d12"
Jul 6 23:30:16.176951 kubelet[2947]: E0706 23:30:16.176905 2947 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4230.2.1-a-d392076d12"
Jul 6 23:30:16.231746 kubelet[2947]: W0706 23:30:16.231567 2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused
Jul 6 23:30:16.231879 kubelet[2947]: E0706 23:30:16.231742 2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:30:16.421681 kubelet[2947]: W0706 23:30:16.421619 2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused
Jul 6 23:30:16.421681 kubelet[2947]: E0706 23:30:16.421687 2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:30:16.443973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201887055.mount: Deactivated successfully.
Jul 6 23:30:16.482498 containerd[1714]: time="2025-07-06T23:30:16.482444990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:30:16.485662 kubelet[2947]: W0706 23:30:16.485609 2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-d392076d12&limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused
Jul 6 23:30:16.485768 kubelet[2947]: E0706 23:30:16.485676 2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-d392076d12&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:30:16.508509 containerd[1714]: time="2025-07-06T23:30:16.508366312Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jul 6 23:30:16.521435 containerd[1714]: time="2025-07-06T23:30:16.521387074Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:30:16.529134 containerd[1714]: time="2025-07-06T23:30:16.529084110Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:30:16.539964 containerd[1714]: time="2025-07-06T23:30:16.539258058Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 6 23:30:16.545577 containerd[1714]: time="2025-07-06T23:30:16.545493088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:30:16.548852 containerd[1714]: time="2025-07-06T23:30:16.548817303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:30:16.549605 containerd[1714]: time="2025-07-06T23:30:16.549577007Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 678.953705ms"
Jul 6 23:30:16.552985 containerd[1714]: time="2025-07-06T23:30:16.552947523Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 6 23:30:16.557389 containerd[1714]: time="2025-07-06T23:30:16.557359544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 680.285012ms"
Jul 6 23:30:16.598412 containerd[1714]: time="2025-07-06T23:30:16.598370337Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 712.408763ms"
Jul 6 23:30:16.745039 kubelet[2947]: W0706 23:30:16.744909 2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused
Jul 6 23:30:16.745039 kubelet[2947]: E0706 23:30:16.744976 2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:30:16.794984 kubelet[2947]: E0706 23:30:16.794929 2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-d392076d12?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="1.6s"
Jul 6 23:30:16.979763 kubelet[2947]: I0706 23:30:16.979727 2947 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-d392076d12"
Jul 6 23:30:16.980173 kubelet[2947]: E0706 23:30:16.980135 2947 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4230.2.1-a-d392076d12"
Jul 6 23:30:17.141040 containerd[1714]: time="2025-07-06T23:30:17.139233791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:30:17.141040 containerd[1714]: time="2025-07-06T23:30:17.140928399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:30:17.141040 containerd[1714]: time="2025-07-06T23:30:17.140946399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:30:17.141988 containerd[1714]: time="2025-07-06T23:30:17.141042199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:30:17.142156 containerd[1714]: time="2025-07-06T23:30:17.142083404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:30:17.142587 containerd[1714]: time="2025-07-06T23:30:17.142239305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:30:17.142587 containerd[1714]: time="2025-07-06T23:30:17.142259405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:30:17.144274 containerd[1714]: time="2025-07-06T23:30:17.143879612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:30:17.145778 containerd[1714]: time="2025-07-06T23:30:17.145701721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:30:17.146351 containerd[1714]: time="2025-07-06T23:30:17.146252524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:30:17.146351 containerd[1714]: time="2025-07-06T23:30:17.146282924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:30:17.149258 containerd[1714]: time="2025-07-06T23:30:17.148268833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:30:17.185416 systemd[1]: Started cri-containerd-1bc2c10539f21d8cfb0318c84e92628fe7c52dc9b33564c482d4d7bda18d9663.scope - libcontainer container 1bc2c10539f21d8cfb0318c84e92628fe7c52dc9b33564c482d4d7bda18d9663.
Jul 6 23:30:17.186946 systemd[1]: Started cri-containerd-e83dd68005790314b8df096a5a34e6236709f3621bd05dc7fa92c6435779f977.scope - libcontainer container e83dd68005790314b8df096a5a34e6236709f3621bd05dc7fa92c6435779f977.
Jul 6 23:30:17.190925 systemd[1]: Started cri-containerd-edeb1ffd1af088135663b5b1a466e4146541831a59e56af6d9bc9dfe4f555651.scope - libcontainer container edeb1ffd1af088135663b5b1a466e4146541831a59e56af6d9bc9dfe4f555651.
Jul 6 23:30:17.269673 containerd[1714]: time="2025-07-06T23:30:17.269511506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-a-d392076d12,Uid:149077b10eea7d936df0278e8df400ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bc2c10539f21d8cfb0318c84e92628fe7c52dc9b33564c482d4d7bda18d9663\""
Jul 6 23:30:17.275389 containerd[1714]: time="2025-07-06T23:30:17.275241933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-a-d392076d12,Uid:85691432053ce3b1b48293a24af93e03,Namespace:kube-system,Attempt:0,} returns sandbox id \"e83dd68005790314b8df096a5a34e6236709f3621bd05dc7fa92c6435779f977\""
Jul 6 23:30:17.278804 containerd[1714]: time="2025-07-06T23:30:17.278521148Z" level=info msg="CreateContainer within sandbox \"1bc2c10539f21d8cfb0318c84e92628fe7c52dc9b33564c482d4d7bda18d9663\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 6 23:30:17.279441 containerd[1714]: time="2025-07-06T23:30:17.279344052Z" level=info msg="CreateContainer within sandbox \"e83dd68005790314b8df096a5a34e6236709f3621bd05dc7fa92c6435779f977\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 6 23:30:17.283978 containerd[1714]: time="2025-07-06T23:30:17.283918974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-a-d392076d12,Uid:e19d766fdc6924dd146698fcb3c3131e,Namespace:kube-system,Attempt:0,} returns sandbox id \"edeb1ffd1af088135663b5b1a466e4146541831a59e56af6d9bc9dfe4f555651\""
Jul 6 23:30:17.287373 containerd[1714]: time="2025-07-06T23:30:17.287349290Z" level=info msg="CreateContainer within sandbox \"edeb1ffd1af088135663b5b1a466e4146541831a59e56af6d9bc9dfe4f555651\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 6 23:30:17.377321 containerd[1714]: time="2025-07-06T23:30:17.377284214Z" level=info msg="CreateContainer within sandbox \"1bc2c10539f21d8cfb0318c84e92628fe7c52dc9b33564c482d4d7bda18d9663\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a972b8e433ab77f9b523859df20b4df520ea6363eac75fd52841a49de5e2bf17\""
Jul 6 23:30:17.377847 containerd[1714]: time="2025-07-06T23:30:17.377818817Z" level=info msg="StartContainer for \"a972b8e433ab77f9b523859df20b4df520ea6363eac75fd52841a49de5e2bf17\""
Jul 6 23:30:17.389689 containerd[1714]: time="2025-07-06T23:30:17.389650973Z" level=info msg="CreateContainer within sandbox \"edeb1ffd1af088135663b5b1a466e4146541831a59e56af6d9bc9dfe4f555651\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"255e019225ddfc3e44022038c9985a96ca545d6a769f1a7d980672007624bcc5\""
Jul 6 23:30:17.392321 containerd[1714]: time="2025-07-06T23:30:17.391306881Z" level=info msg="StartContainer for \"255e019225ddfc3e44022038c9985a96ca545d6a769f1a7d980672007624bcc5\""
Jul 6 23:30:17.395770 containerd[1714]: time="2025-07-06T23:30:17.395740001Z" level=info msg="CreateContainer within sandbox \"e83dd68005790314b8df096a5a34e6236709f3621bd05dc7fa92c6435779f977\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa9d288e6c26ce44a185764e6c1cf43c625503e6fb513094ff21f4c980c18146\""
Jul 6 23:30:17.396274 containerd[1714]: time="2025-07-06T23:30:17.396250904Z" level=info msg="StartContainer for \"aa9d288e6c26ce44a185764e6c1cf43c625503e6fb513094ff21f4c980c18146\""
Jul 6 23:30:17.414420 systemd[1]: Started cri-containerd-a972b8e433ab77f9b523859df20b4df520ea6363eac75fd52841a49de5e2bf17.scope - libcontainer container a972b8e433ab77f9b523859df20b4df520ea6363eac75fd52841a49de5e2bf17.
Jul 6 23:30:17.445185 kubelet[2947]: E0706 23:30:17.445150 2947 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:30:17.468112 systemd[1]: Started cri-containerd-255e019225ddfc3e44022038c9985a96ca545d6a769f1a7d980672007624bcc5.scope - libcontainer container 255e019225ddfc3e44022038c9985a96ca545d6a769f1a7d980672007624bcc5.
Jul 6 23:30:17.469959 systemd[1]: Started cri-containerd-aa9d288e6c26ce44a185764e6c1cf43c625503e6fb513094ff21f4c980c18146.scope - libcontainer container aa9d288e6c26ce44a185764e6c1cf43c625503e6fb513094ff21f4c980c18146.
Jul 6 23:30:17.526198 containerd[1714]: time="2025-07-06T23:30:17.525890716Z" level=info msg="StartContainer for \"a972b8e433ab77f9b523859df20b4df520ea6363eac75fd52841a49de5e2bf17\" returns successfully" Jul 6 23:30:17.553544 containerd[1714]: time="2025-07-06T23:30:17.553347445Z" level=info msg="StartContainer for \"aa9d288e6c26ce44a185764e6c1cf43c625503e6fb513094ff21f4c980c18146\" returns successfully" Jul 6 23:30:17.568907 containerd[1714]: time="2025-07-06T23:30:17.568712218Z" level=info msg="StartContainer for \"255e019225ddfc3e44022038c9985a96ca545d6a769f1a7d980672007624bcc5\" returns successfully" Jul 6 23:30:18.470641 kubelet[2947]: E0706 23:30:18.470605 2947 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-d392076d12\" not found" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:18.476393 kubelet[2947]: E0706 23:30:18.476361 2947 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-d392076d12\" not found" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:18.476736 kubelet[2947]: E0706 23:30:18.476715 2947 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-d392076d12\" not found" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:18.582621 kubelet[2947]: I0706 23:30:18.582593 2947 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:19.478522 kubelet[2947]: E0706 23:30:19.478486 2947 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-d392076d12\" not found" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:19.479455 kubelet[2947]: E0706 23:30:19.479429 2947 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-d392076d12\" not found" node="ci-4230.2.1-a-d392076d12" Jul 6 
23:30:19.479813 kubelet[2947]: E0706 23:30:19.479793 2947 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-d392076d12\" not found" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:20.381937 kubelet[2947]: I0706 23:30:20.381875 2947 apiserver.go:52] "Watching apiserver" Jul 6 23:30:20.395373 kubelet[2947]: I0706 23:30:20.395336 2947 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:30:20.395994 kubelet[2947]: E0706 23:30:20.395964 2947 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.1-a-d392076d12\" not found" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:20.486876 kubelet[2947]: E0706 23:30:20.486839 2947 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-d392076d12\" not found" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:20.488535 kubelet[2947]: E0706 23:30:20.487962 2947 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-d392076d12\" not found" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:20.521178 kubelet[2947]: I0706 23:30:20.521134 2947 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:20.521315 kubelet[2947]: E0706 23:30:20.521186 2947 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.1-a-d392076d12\": node \"ci-4230.2.1-a-d392076d12\" not found" Jul 6 23:30:20.592275 kubelet[2947]: I0706 23:30:20.592246 2947 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12" Jul 6 23:30:20.663219 kubelet[2947]: E0706 23:30:20.662646 2947 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-a-d392076d12\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12" Jul 6 23:30:20.663219 kubelet[2947]: I0706 23:30:20.662680 2947 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12" Jul 6 23:30:20.665278 kubelet[2947]: E0706 23:30:20.665086 2947 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.1-a-d392076d12\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12" Jul 6 23:30:20.665278 kubelet[2947]: I0706 23:30:20.665115 2947 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-d392076d12" Jul 6 23:30:20.666483 kubelet[2947]: E0706 23:30:20.666445 2947 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-a-d392076d12\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.1-a-d392076d12" Jul 6 23:30:21.479727 kubelet[2947]: I0706 23:30:21.479690 2947 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-d392076d12" Jul 6 23:30:21.519118 kubelet[2947]: W0706 23:30:21.518766 2947 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:30:21.953733 kubelet[2947]: I0706 23:30:21.953706 2947 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12" Jul 6 23:30:21.964656 kubelet[2947]: W0706 23:30:21.964295 2947 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:30:22.033558 kubelet[2947]: I0706 23:30:22.033530 2947 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12" Jul 6 23:30:22.042456 kubelet[2947]: W0706 23:30:22.042411 2947 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:30:22.779194 systemd[1]: Reload requested from client PID 3219 ('systemctl') (unit session-9.scope)... Jul 6 23:30:22.779223 systemd[1]: Reloading... Jul 6 23:30:22.874251 zram_generator::config[3262]: No configuration found. Jul 6 23:30:23.022731 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:30:23.158163 systemd[1]: Reloading finished in 378 ms. Jul 6 23:30:23.187671 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:30:23.199443 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:30:23.199775 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:30:23.199841 systemd[1]: kubelet.service: Consumed 845ms CPU time, 130.7M memory peak. Jul 6 23:30:23.207035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:30:23.382982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:30:23.395533 (kubelet)[3333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:30:23.434771 kubelet[3333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:30:23.434771 kubelet[3333]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 6 23:30:23.434771 kubelet[3333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:30:23.435383 kubelet[3333]: I0706 23:30:23.435067 3333 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:30:23.440901 kubelet[3333]: I0706 23:30:23.440876 3333 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:30:23.441073 kubelet[3333]: I0706 23:30:23.440999 3333 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:30:23.441571 kubelet[3333]: I0706 23:30:23.441552 3333 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:30:23.445037 kubelet[3333]: I0706 23:30:23.444856 3333 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:30:23.446883 kubelet[3333]: I0706 23:30:23.446861 3333 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:30:23.449857 kubelet[3333]: E0706 23:30:23.449813 3333 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:30:23.449857 kubelet[3333]: I0706 23:30:23.449847 3333 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:30:23.454081 kubelet[3333]: I0706 23:30:23.454051 3333 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:30:23.454340 kubelet[3333]: I0706 23:30:23.454305 3333 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:30:23.454500 kubelet[3333]: I0706 23:30:23.454336 3333 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-a-d392076d12","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:30:23.454633 kubelet[3333]: I0706 23:30:23.454509 3333 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 6 23:30:23.454633 kubelet[3333]: I0706 23:30:23.454522 3333 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:30:23.454633 kubelet[3333]: I0706 23:30:23.454574 3333 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:30:23.454877 kubelet[3333]: I0706 23:30:23.454727 3333 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:30:23.454877 kubelet[3333]: I0706 23:30:23.454753 3333 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:30:23.454877 kubelet[3333]: I0706 23:30:23.454772 3333 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:30:23.454877 kubelet[3333]: I0706 23:30:23.454783 3333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:30:23.459226 kubelet[3333]: I0706 23:30:23.458322 3333 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:30:23.459226 kubelet[3333]: I0706 23:30:23.458759 3333 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:30:23.459226 kubelet[3333]: I0706 23:30:23.459181 3333 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:30:23.459725 kubelet[3333]: I0706 23:30:23.459631 3333 server.go:1287] "Started kubelet" Jul 6 23:30:23.469881 kubelet[3333]: I0706 23:30:23.469855 3333 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:30:23.470719 kubelet[3333]: I0706 23:30:23.470695 3333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:30:23.473441 kubelet[3333]: I0706 23:30:23.472415 3333 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:30:23.476174 kubelet[3333]: I0706 23:30:23.476105 3333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:30:23.476694 kubelet[3333]: I0706 23:30:23.476671 3333 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:30:23.478399 kubelet[3333]: I0706 23:30:23.478364 3333 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:30:23.478714 kubelet[3333]: E0706 23:30:23.478582 3333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-d392076d12\" not found" Jul 6 23:30:23.479762 kubelet[3333]: I0706 23:30:23.479656 3333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:30:23.489074 kubelet[3333]: I0706 23:30:23.489041 3333 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:30:23.489153 kubelet[3333]: I0706 23:30:23.489127 3333 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:30:23.492720 kubelet[3333]: I0706 23:30:23.492699 3333 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:30:23.492917 kubelet[3333]: I0706 23:30:23.492730 3333 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:30:23.494960 kubelet[3333]: I0706 23:30:23.494866 3333 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:30:23.502232 kubelet[3333]: I0706 23:30:23.501870 3333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:30:23.505258 kubelet[3333]: I0706 23:30:23.505218 3333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:30:23.505586 kubelet[3333]: I0706 23:30:23.505247 3333 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:30:23.505586 kubelet[3333]: I0706 23:30:23.505356 3333 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:30:23.505586 kubelet[3333]: I0706 23:30:23.505365 3333 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:30:23.505586 kubelet[3333]: E0706 23:30:23.505417 3333 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:30:23.573434 kubelet[3333]: I0706 23:30:23.573188 3333 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:30:23.573434 kubelet[3333]: I0706 23:30:23.573274 3333 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:30:23.573434 kubelet[3333]: I0706 23:30:23.573321 3333 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:30:23.574028 kubelet[3333]: I0706 23:30:23.573842 3333 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:30:23.574028 kubelet[3333]: I0706 23:30:23.573859 3333 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:30:23.574028 kubelet[3333]: I0706 23:30:23.573888 3333 policy_none.go:49] "None policy: Start" Jul 6 23:30:23.574553 kubelet[3333]: I0706 23:30:23.574158 3333 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:30:23.574553 kubelet[3333]: I0706 23:30:23.574178 3333 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:30:23.574553 kubelet[3333]: I0706 23:30:23.574416 3333 state_mem.go:75] "Updated machine memory state" Jul 6 23:30:23.582627 kubelet[3333]: I0706 23:30:23.582523 3333 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:30:23.583597 kubelet[3333]: I0706 23:30:23.583489 3333 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:30:23.584222 kubelet[3333]: I0706 23:30:23.583508 3333 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:30:23.585064 kubelet[3333]: I0706 23:30:23.584938 3333 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Jul 6 23:30:23.588093 kubelet[3333]: E0706 23:30:23.587856 3333 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:30:23.607756 kubelet[3333]: I0706 23:30:23.607017 3333 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.607756 kubelet[3333]: I0706 23:30:23.607465 3333 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.608002 kubelet[3333]: I0706 23:30:23.607986 3333 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.626274 kubelet[3333]: W0706 23:30:23.626249 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:30:23.626389 kubelet[3333]: E0706 23:30:23.626329 3333 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-a-d392076d12\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.626670 kubelet[3333]: W0706 23:30:23.626606 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:30:23.626670 kubelet[3333]: E0706 23:30:23.626657 3333 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-a-d392076d12\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.626858 kubelet[3333]: W0706 23:30:23.626793 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:30:23.626858 kubelet[3333]: E0706 
23:30:23.626829 3333 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.1-a-d392076d12\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.700943 kubelet[3333]: I0706 23:30:23.700830 3333 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.714530 kubelet[3333]: I0706 23:30:23.714266 3333 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.714530 kubelet[3333]: I0706 23:30:23.714333 3333 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.793703 kubelet[3333]: I0706 23:30:23.793577 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/149077b10eea7d936df0278e8df400ce-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-a-d392076d12\" (UID: \"149077b10eea7d936df0278e8df400ce\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.793703 kubelet[3333]: I0706 23:30:23.793663 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e19d766fdc6924dd146698fcb3c3131e-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-a-d392076d12\" (UID: \"e19d766fdc6924dd146698fcb3c3131e\") " pod="kube-system/kube-scheduler-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.793703 kubelet[3333]: I0706 23:30:23.793702 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85691432053ce3b1b48293a24af93e03-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-a-d392076d12\" (UID: \"85691432053ce3b1b48293a24af93e03\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.794102 kubelet[3333]: I0706 23:30:23.793728 
3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85691432053ce3b1b48293a24af93e03-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-a-d392076d12\" (UID: \"85691432053ce3b1b48293a24af93e03\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.794102 kubelet[3333]: I0706 23:30:23.793761 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/149077b10eea7d936df0278e8df400ce-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-d392076d12\" (UID: \"149077b10eea7d936df0278e8df400ce\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.794102 kubelet[3333]: I0706 23:30:23.793792 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/149077b10eea7d936df0278e8df400ce-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-a-d392076d12\" (UID: \"149077b10eea7d936df0278e8df400ce\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.794102 kubelet[3333]: I0706 23:30:23.793818 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85691432053ce3b1b48293a24af93e03-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-a-d392076d12\" (UID: \"85691432053ce3b1b48293a24af93e03\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.794102 kubelet[3333]: I0706 23:30:23.793846 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/149077b10eea7d936df0278e8df400ce-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-d392076d12\" (UID: 
\"149077b10eea7d936df0278e8df400ce\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12" Jul 6 23:30:23.794301 kubelet[3333]: I0706 23:30:23.793886 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/149077b10eea7d936df0278e8df400ce-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-a-d392076d12\" (UID: \"149077b10eea7d936df0278e8df400ce\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12" Jul 6 23:30:24.068310 sudo[3368]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:30:24.068699 sudo[3368]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:30:24.460942 kubelet[3333]: I0706 23:30:24.460834 3333 apiserver.go:52] "Watching apiserver" Jul 6 23:30:24.493735 kubelet[3333]: I0706 23:30:24.493676 3333 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:30:24.533455 kubelet[3333]: I0706 23:30:24.533426 3333 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-d392076d12" Jul 6 23:30:24.534098 kubelet[3333]: I0706 23:30:24.534071 3333 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12" Jul 6 23:30:24.546073 kubelet[3333]: W0706 23:30:24.546040 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:30:24.546379 kubelet[3333]: E0706 23:30:24.546352 3333 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-a-d392076d12\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12" Jul 6 23:30:24.547114 kubelet[3333]: W0706 23:30:24.546982 3333 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:30:24.547200 kubelet[3333]: E0706 23:30:24.547067 3333 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-a-d392076d12\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.1-a-d392076d12" Jul 6 23:30:24.567714 kubelet[3333]: I0706 23:30:24.567666 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.1-a-d392076d12" podStartSLOduration=3.567652328 podStartE2EDuration="3.567652328s" podCreationTimestamp="2025-07-06 23:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:30:24.565882214 +0000 UTC m=+1.165556512" watchObservedRunningTime="2025-07-06 23:30:24.567652328 +0000 UTC m=+1.167326526" Jul 6 23:30:24.601514 kubelet[3333]: I0706 23:30:24.600852 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-d392076d12" podStartSLOduration=2.600835102 podStartE2EDuration="2.600835102s" podCreationTimestamp="2025-07-06 23:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:30:24.582548451 +0000 UTC m=+1.182222649" watchObservedRunningTime="2025-07-06 23:30:24.600835102 +0000 UTC m=+1.200509400" Jul 6 23:30:24.619698 sudo[3368]: pam_unix(sudo:session): session closed for user root Jul 6 23:30:26.209885 sudo[2280]: pam_unix(sudo:session): session closed for user root Jul 6 23:30:26.318522 sshd[2279]: Connection closed by 10.200.16.10 port 59644 Jul 6 23:30:26.319427 sshd-session[2277]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:26.323973 systemd[1]: sshd@6-10.200.8.45:22-10.200.16.10:59644.service: Deactivated successfully. 
Jul 6 23:30:26.326772 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:30:26.327041 systemd[1]: session-9.scope: Consumed 5.207s CPU time, 261.9M memory peak. Jul 6 23:30:26.329580 systemd-logind[1698]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:30:26.330729 systemd-logind[1698]: Removed session 9. Jul 6 23:30:28.192227 kubelet[3333]: I0706 23:30:28.192144 3333 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:30:28.192976 kubelet[3333]: I0706 23:30:28.192724 3333 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:30:28.193041 containerd[1714]: time="2025-07-06T23:30:28.192505031Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:30:28.290351 kubelet[3333]: I0706 23:30:28.289688 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.1-a-d392076d12" podStartSLOduration=7.289667787 podStartE2EDuration="7.289667787s" podCreationTimestamp="2025-07-06 23:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:30:24.60174081 +0000 UTC m=+1.201415008" watchObservedRunningTime="2025-07-06 23:30:28.289667787 +0000 UTC m=+4.889341985" Jul 6 23:30:29.109798 systemd[1]: Created slice kubepods-besteffort-podc93bcb87_f909_4810_8297_868ab74cfb72.slice - libcontainer container kubepods-besteffort-podc93bcb87_f909_4810_8297_868ab74cfb72.slice. Jul 6 23:30:29.124200 systemd[1]: Created slice kubepods-burstable-podeb572e4c_8815_4c9c_8842_517f4765f93c.slice - libcontainer container kubepods-burstable-podeb572e4c_8815_4c9c_8842_517f4765f93c.slice. 
Jul 6 23:30:29.129962 kubelet[3333]: I0706 23:30:29.129885 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxlwl\" (UniqueName: \"kubernetes.io/projected/c93bcb87-f909-4810-8297-868ab74cfb72-kube-api-access-rxlwl\") pod \"kube-proxy-xb8cs\" (UID: \"c93bcb87-f909-4810-8297-868ab74cfb72\") " pod="kube-system/kube-proxy-xb8cs" Jul 6 23:30:29.130442 kubelet[3333]: I0706 23:30:29.130134 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-cilium-run\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.130442 kubelet[3333]: I0706 23:30:29.130238 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-cni-path\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.130442 kubelet[3333]: I0706 23:30:29.130266 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-lib-modules\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.130442 kubelet[3333]: I0706 23:30:29.130416 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-xtables-lock\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.131464 kubelet[3333]: I0706 23:30:29.131251 3333 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c93bcb87-f909-4810-8297-868ab74cfb72-xtables-lock\") pod \"kube-proxy-xb8cs\" (UID: \"c93bcb87-f909-4810-8297-868ab74cfb72\") " pod="kube-system/kube-proxy-xb8cs" Jul 6 23:30:29.131464 kubelet[3333]: I0706 23:30:29.131394 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-hostproc\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.131464 kubelet[3333]: I0706 23:30:29.131432 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb572e4c-8815-4c9c-8842-517f4765f93c-hubble-tls\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.131776 kubelet[3333]: I0706 23:30:29.131680 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb572e4c-8815-4c9c-8842-517f4765f93c-cilium-config-path\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.132183 kubelet[3333]: I0706 23:30:29.131710 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c93bcb87-f909-4810-8297-868ab74cfb72-lib-modules\") pod \"kube-proxy-xb8cs\" (UID: \"c93bcb87-f909-4810-8297-868ab74cfb72\") " pod="kube-system/kube-proxy-xb8cs" Jul 6 23:30:29.132183 kubelet[3333]: I0706 23:30:29.131999 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/eb572e4c-8815-4c9c-8842-517f4765f93c-clustermesh-secrets\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.132183 kubelet[3333]: I0706 23:30:29.132027 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-bpf-maps\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.132748 kubelet[3333]: I0706 23:30:29.132573 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l9mn\" (UniqueName: \"kubernetes.io/projected/eb572e4c-8815-4c9c-8842-517f4765f93c-kube-api-access-9l9mn\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.134320 kubelet[3333]: I0706 23:30:29.133256 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c93bcb87-f909-4810-8297-868ab74cfb72-kube-proxy\") pod \"kube-proxy-xb8cs\" (UID: \"c93bcb87-f909-4810-8297-868ab74cfb72\") " pod="kube-system/kube-proxy-xb8cs" Jul 6 23:30:29.134320 kubelet[3333]: I0706 23:30:29.133288 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-host-proc-sys-net\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.134320 kubelet[3333]: I0706 23:30:29.133347 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-etc-cni-netd\") pod \"cilium-bq2xf\" (UID: 
\"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.134320 kubelet[3333]: I0706 23:30:29.133368 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-cilium-cgroup\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.134320 kubelet[3333]: I0706 23:30:29.133419 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-host-proc-sys-kernel\") pod \"cilium-bq2xf\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " pod="kube-system/cilium-bq2xf" Jul 6 23:30:29.335158 kubelet[3333]: I0706 23:30:29.334605 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8lvn\" (UniqueName: \"kubernetes.io/projected/6c8adcbe-6e22-4887-9b83-bc5124ec534e-kube-api-access-v8lvn\") pod \"cilium-operator-6c4d7847fc-rgqsd\" (UID: \"6c8adcbe-6e22-4887-9b83-bc5124ec534e\") " pod="kube-system/cilium-operator-6c4d7847fc-rgqsd" Jul 6 23:30:29.335158 kubelet[3333]: I0706 23:30:29.334652 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c8adcbe-6e22-4887-9b83-bc5124ec534e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rgqsd\" (UID: \"6c8adcbe-6e22-4887-9b83-bc5124ec534e\") " pod="kube-system/cilium-operator-6c4d7847fc-rgqsd" Jul 6 23:30:29.336763 systemd[1]: Created slice kubepods-besteffort-pod6c8adcbe_6e22_4887_9b83_bc5124ec534e.slice - libcontainer container kubepods-besteffort-pod6c8adcbe_6e22_4887_9b83_bc5124ec534e.slice. 
Jul 6 23:30:29.417607 containerd[1714]: time="2025-07-06T23:30:29.417460017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xb8cs,Uid:c93bcb87-f909-4810-8297-868ab74cfb72,Namespace:kube-system,Attempt:0,}" Jul 6 23:30:29.430080 containerd[1714]: time="2025-07-06T23:30:29.430034427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bq2xf,Uid:eb572e4c-8815-4c9c-8842-517f4765f93c,Namespace:kube-system,Attempt:0,}" Jul 6 23:30:29.484053 containerd[1714]: time="2025-07-06T23:30:29.483758700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:30:29.484053 containerd[1714]: time="2025-07-06T23:30:29.483846601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:30:29.484053 containerd[1714]: time="2025-07-06T23:30:29.483869201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:30:29.484053 containerd[1714]: time="2025-07-06T23:30:29.483956002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:30:29.494355 containerd[1714]: time="2025-07-06T23:30:29.494272193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:30:29.494467 containerd[1714]: time="2025-07-06T23:30:29.494398394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:30:29.495029 containerd[1714]: time="2025-07-06T23:30:29.494971299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:30:29.495638 containerd[1714]: time="2025-07-06T23:30:29.495417703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:30:29.511436 systemd[1]: Started cri-containerd-5a7dcb637734e172b636cba6ccc3edb91f7f83839c583cda417843d5f63157ba.scope - libcontainer container 5a7dcb637734e172b636cba6ccc3edb91f7f83839c583cda417843d5f63157ba. Jul 6 23:30:29.528736 systemd[1]: Started cri-containerd-43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381.scope - libcontainer container 43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381. Jul 6 23:30:29.562877 containerd[1714]: time="2025-07-06T23:30:29.562770296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xb8cs,Uid:c93bcb87-f909-4810-8297-868ab74cfb72,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a7dcb637734e172b636cba6ccc3edb91f7f83839c583cda417843d5f63157ba\"" Jul 6 23:30:29.567803 containerd[1714]: time="2025-07-06T23:30:29.567732040Z" level=info msg="CreateContainer within sandbox \"5a7dcb637734e172b636cba6ccc3edb91f7f83839c583cda417843d5f63157ba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:30:29.573109 containerd[1714]: time="2025-07-06T23:30:29.573079787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bq2xf,Uid:eb572e4c-8815-4c9c-8842-517f4765f93c,Namespace:kube-system,Attempt:0,} returns sandbox id \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\"" Jul 6 23:30:29.575388 containerd[1714]: time="2025-07-06T23:30:29.575311207Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:30:29.614817 containerd[1714]: time="2025-07-06T23:30:29.614785954Z" level=info msg="CreateContainer within sandbox \"5a7dcb637734e172b636cba6ccc3edb91f7f83839c583cda417843d5f63157ba\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d00c180968be9027b178af3e1a72106b13b2610c1e7b79f15d8492be79731034\"" Jul 6 23:30:29.615418 containerd[1714]: time="2025-07-06T23:30:29.615271458Z" level=info msg="StartContainer for \"d00c180968be9027b178af3e1a72106b13b2610c1e7b79f15d8492be79731034\"" Jul 6 23:30:29.642523 containerd[1714]: time="2025-07-06T23:30:29.642153395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rgqsd,Uid:6c8adcbe-6e22-4887-9b83-bc5124ec534e,Namespace:kube-system,Attempt:0,}" Jul 6 23:30:29.646392 systemd[1]: Started cri-containerd-d00c180968be9027b178af3e1a72106b13b2610c1e7b79f15d8492be79731034.scope - libcontainer container d00c180968be9027b178af3e1a72106b13b2610c1e7b79f15d8492be79731034. Jul 6 23:30:29.682887 containerd[1714]: time="2025-07-06T23:30:29.681423641Z" level=info msg="StartContainer for \"d00c180968be9027b178af3e1a72106b13b2610c1e7b79f15d8492be79731034\" returns successfully" Jul 6 23:30:29.707894 containerd[1714]: time="2025-07-06T23:30:29.707638372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:30:29.707894 containerd[1714]: time="2025-07-06T23:30:29.707688872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:30:29.707894 containerd[1714]: time="2025-07-06T23:30:29.707703772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:30:29.707894 containerd[1714]: time="2025-07-06T23:30:29.707782273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:30:29.730637 systemd[1]: Started cri-containerd-c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77.scope - libcontainer container c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77. Jul 6 23:30:29.784855 containerd[1714]: time="2025-07-06T23:30:29.784790851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rgqsd,Uid:6c8adcbe-6e22-4887-9b83-bc5124ec534e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77\"" Jul 6 23:30:30.580789 kubelet[3333]: I0706 23:30:30.580084 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xb8cs" podStartSLOduration=1.5800637530000001 podStartE2EDuration="1.580063753s" podCreationTimestamp="2025-07-06 23:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:30:30.568549352 +0000 UTC m=+7.168223650" watchObservedRunningTime="2025-07-06 23:30:30.580063753 +0000 UTC m=+7.179737951" Jul 6 23:30:35.472663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535263869.mount: Deactivated successfully. 
Jul 6 23:30:37.790093 containerd[1714]: time="2025-07-06T23:30:37.789972662Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:37.792420 containerd[1714]: time="2025-07-06T23:30:37.792356784Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 6 23:30:37.796361 containerd[1714]: time="2025-07-06T23:30:37.796309319Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:37.797913 containerd[1714]: time="2025-07-06T23:30:37.797794132Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.222447125s" Jul 6 23:30:37.797913 containerd[1714]: time="2025-07-06T23:30:37.797830832Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 6 23:30:37.799760 containerd[1714]: time="2025-07-06T23:30:37.799562548Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:30:37.800600 containerd[1714]: time="2025-07-06T23:30:37.800438056Z" level=info msg="CreateContainer within sandbox \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:30:37.859846 containerd[1714]: time="2025-07-06T23:30:37.859808885Z" level=info msg="CreateContainer within sandbox \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c\"" Jul 6 23:30:37.860380 containerd[1714]: time="2025-07-06T23:30:37.860346290Z" level=info msg="StartContainer for \"7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c\"" Jul 6 23:30:37.902662 systemd[1]: Started cri-containerd-7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c.scope - libcontainer container 7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c. Jul 6 23:30:37.935293 containerd[1714]: time="2025-07-06T23:30:37.934058447Z" level=info msg="StartContainer for \"7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c\" returns successfully" Jul 6 23:30:37.940071 systemd[1]: cri-containerd-7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c.scope: Deactivated successfully. Jul 6 23:30:38.841096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c-rootfs.mount: Deactivated successfully. 
Jul 6 23:30:41.598218 containerd[1714]: time="2025-07-06T23:30:41.597996430Z" level=info msg="shim disconnected" id=7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c namespace=k8s.io Jul 6 23:30:41.598218 containerd[1714]: time="2025-07-06T23:30:41.598057430Z" level=warning msg="cleaning up after shim disconnected" id=7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c namespace=k8s.io Jul 6 23:30:41.598218 containerd[1714]: time="2025-07-06T23:30:41.598066430Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:42.585966 containerd[1714]: time="2025-07-06T23:30:42.585647639Z" level=info msg="CreateContainer within sandbox \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:30:42.640951 containerd[1714]: time="2025-07-06T23:30:42.640901732Z" level=info msg="CreateContainer within sandbox \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3\"" Jul 6 23:30:42.642493 containerd[1714]: time="2025-07-06T23:30:42.641565838Z" level=info msg="StartContainer for \"9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3\"" Jul 6 23:30:42.677388 systemd[1]: Started cri-containerd-9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3.scope - libcontainer container 9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3. Jul 6 23:30:42.713865 containerd[1714]: time="2025-07-06T23:30:42.713709582Z" level=info msg="StartContainer for \"9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3\" returns successfully" Jul 6 23:30:42.723554 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:30:42.724097 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 6 23:30:42.725321 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:30:42.733701 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:30:42.737537 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:30:42.739850 systemd[1]: cri-containerd-9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3.scope: Deactivated successfully. Jul 6 23:30:42.765620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:30:42.767043 containerd[1714]: time="2025-07-06T23:30:42.766890256Z" level=info msg="shim disconnected" id=9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3 namespace=k8s.io Jul 6 23:30:42.767197 containerd[1714]: time="2025-07-06T23:30:42.767044157Z" level=warning msg="cleaning up after shim disconnected" id=9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3 namespace=k8s.io Jul 6 23:30:42.767197 containerd[1714]: time="2025-07-06T23:30:42.767057758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:42.779049 containerd[1714]: time="2025-07-06T23:30:42.779010164Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:30:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:30:43.592471 containerd[1714]: time="2025-07-06T23:30:43.592425620Z" level=info msg="CreateContainer within sandbox \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:30:43.625419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3-rootfs.mount: Deactivated successfully. 
Jul 6 23:30:43.635300 containerd[1714]: time="2025-07-06T23:30:43.635261402Z" level=info msg="CreateContainer within sandbox \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75\"" Jul 6 23:30:43.636502 containerd[1714]: time="2025-07-06T23:30:43.636305411Z" level=info msg="StartContainer for \"68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75\"" Jul 6 23:30:43.701583 systemd[1]: Started cri-containerd-68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75.scope - libcontainer container 68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75. Jul 6 23:30:43.738713 containerd[1714]: time="2025-07-06T23:30:43.738649624Z" level=info msg="StartContainer for \"68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75\" returns successfully" Jul 6 23:30:43.739334 systemd[1]: cri-containerd-68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75.scope: Deactivated successfully. Jul 6 23:30:43.769059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75-rootfs.mount: Deactivated successfully. 
Jul 6 23:30:43.783311 containerd[1714]: time="2025-07-06T23:30:43.783199522Z" level=info msg="shim disconnected" id=68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75 namespace=k8s.io Jul 6 23:30:43.783564 containerd[1714]: time="2025-07-06T23:30:43.783312823Z" level=warning msg="cleaning up after shim disconnected" id=68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75 namespace=k8s.io Jul 6 23:30:43.783564 containerd[1714]: time="2025-07-06T23:30:43.783324223Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:44.591972 containerd[1714]: time="2025-07-06T23:30:44.591924373Z" level=info msg="CreateContainer within sandbox \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:30:44.663333 containerd[1714]: time="2025-07-06T23:30:44.663292428Z" level=info msg="CreateContainer within sandbox \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208\"" Jul 6 23:30:44.664071 containerd[1714]: time="2025-07-06T23:30:44.663996934Z" level=info msg="StartContainer for \"d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208\"" Jul 6 23:30:44.701639 systemd[1]: Started cri-containerd-d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208.scope - libcontainer container d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208. Jul 6 23:30:44.726462 systemd[1]: cri-containerd-d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208.scope: Deactivated successfully. 
Jul 6 23:30:44.730619 containerd[1714]: time="2025-07-06T23:30:44.730448051Z" level=info msg="StartContainer for \"d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208\" returns successfully" Jul 6 23:30:44.749411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208-rootfs.mount: Deactivated successfully. Jul 6 23:30:44.775880 containerd[1714]: time="2025-07-06T23:30:44.775805804Z" level=info msg="shim disconnected" id=d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208 namespace=k8s.io Jul 6 23:30:44.775880 containerd[1714]: time="2025-07-06T23:30:44.775875304Z" level=warning msg="cleaning up after shim disconnected" id=d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208 namespace=k8s.io Jul 6 23:30:44.776655 containerd[1714]: time="2025-07-06T23:30:44.775889504Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:45.598109 containerd[1714]: time="2025-07-06T23:30:45.597842302Z" level=info msg="CreateContainer within sandbox \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:30:45.645847 containerd[1714]: time="2025-07-06T23:30:45.645810275Z" level=info msg="CreateContainer within sandbox \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\"" Jul 6 23:30:45.646313 containerd[1714]: time="2025-07-06T23:30:45.646284879Z" level=info msg="StartContainer for \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\"" Jul 6 23:30:45.687539 systemd[1]: Started cri-containerd-ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a.scope - libcontainer container ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a. 
Jul 6 23:30:45.728030 containerd[1714]: time="2025-07-06T23:30:45.727962514Z" level=info msg="StartContainer for \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\" returns successfully" Jul 6 23:30:45.761873 systemd[1]: run-containerd-runc-k8s.io-ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a-runc.P6gV9a.mount: Deactivated successfully. Jul 6 23:30:45.841115 kubelet[3333]: I0706 23:30:45.840980 3333 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:30:45.898771 systemd[1]: Created slice kubepods-burstable-pod1d973dc8_9eb8_4300_9b85_abbba7eafac1.slice - libcontainer container kubepods-burstable-pod1d973dc8_9eb8_4300_9b85_abbba7eafac1.slice. Jul 6 23:30:45.912339 systemd[1]: Created slice kubepods-burstable-podbfae587e_37d3_473c_99a9_89ec40a23715.slice - libcontainer container kubepods-burstable-podbfae587e_37d3_473c_99a9_89ec40a23715.slice. Jul 6 23:30:45.950420 kubelet[3333]: I0706 23:30:45.950380 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfae587e-37d3-473c-99a9-89ec40a23715-config-volume\") pod \"coredns-668d6bf9bc-4tkmf\" (UID: \"bfae587e-37d3-473c-99a9-89ec40a23715\") " pod="kube-system/coredns-668d6bf9bc-4tkmf" Jul 6 23:30:45.950547 kubelet[3333]: I0706 23:30:45.950424 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5d2z\" (UniqueName: \"kubernetes.io/projected/1d973dc8-9eb8-4300-9b85-abbba7eafac1-kube-api-access-q5d2z\") pod \"coredns-668d6bf9bc-mmc4p\" (UID: \"1d973dc8-9eb8-4300-9b85-abbba7eafac1\") " pod="kube-system/coredns-668d6bf9bc-mmc4p" Jul 6 23:30:45.950547 kubelet[3333]: I0706 23:30:45.950453 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvsrh\" (UniqueName: 
\"kubernetes.io/projected/bfae587e-37d3-473c-99a9-89ec40a23715-kube-api-access-lvsrh\") pod \"coredns-668d6bf9bc-4tkmf\" (UID: \"bfae587e-37d3-473c-99a9-89ec40a23715\") " pod="kube-system/coredns-668d6bf9bc-4tkmf" Jul 6 23:30:45.950547 kubelet[3333]: I0706 23:30:45.950477 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d973dc8-9eb8-4300-9b85-abbba7eafac1-config-volume\") pod \"coredns-668d6bf9bc-mmc4p\" (UID: \"1d973dc8-9eb8-4300-9b85-abbba7eafac1\") " pod="kube-system/coredns-668d6bf9bc-mmc4p" Jul 6 23:30:46.212892 containerd[1714]: time="2025-07-06T23:30:46.212439785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mmc4p,Uid:1d973dc8-9eb8-4300-9b85-abbba7eafac1,Namespace:kube-system,Attempt:0,}" Jul 6 23:30:46.220450 containerd[1714]: time="2025-07-06T23:30:46.220114645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4tkmf,Uid:bfae587e-37d3-473c-99a9-89ec40a23715,Namespace:kube-system,Attempt:0,}" Jul 6 23:30:46.623923 kubelet[3333]: I0706 23:30:46.623830 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bq2xf" podStartSLOduration=9.399126741 podStartE2EDuration="17.623810186s" podCreationTimestamp="2025-07-06 23:30:29 +0000 UTC" firstStartedPulling="2025-07-06 23:30:29.574142196 +0000 UTC m=+6.173816494" lastFinishedPulling="2025-07-06 23:30:37.798825741 +0000 UTC m=+14.398499939" observedRunningTime="2025-07-06 23:30:46.623400983 +0000 UTC m=+23.223075181" watchObservedRunningTime="2025-07-06 23:30:46.623810186 +0000 UTC m=+23.223484384" Jul 6 23:30:48.315406 containerd[1714]: time="2025-07-06T23:30:48.315347951Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:30:48.317799 
containerd[1714]: time="2025-07-06T23:30:48.317724970Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jul 6 23:30:48.323297 containerd[1714]: time="2025-07-06T23:30:48.323233413Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:30:48.324677 containerd[1714]: time="2025-07-06T23:30:48.324527123Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 10.524924575s"
Jul 6 23:30:48.324677 containerd[1714]: time="2025-07-06T23:30:48.324567823Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 6 23:30:48.326999 containerd[1714]: time="2025-07-06T23:30:48.326944142Z" level=info msg="CreateContainer within sandbox \"c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 6 23:30:48.366171 containerd[1714]: time="2025-07-06T23:30:48.366133547Z" level=info msg="CreateContainer within sandbox \"c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\""
Jul 6 23:30:48.366604 containerd[1714]: time="2025-07-06T23:30:48.366575250Z" level=info msg="StartContainer for \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\""
Jul 6 23:30:48.401987 systemd[1]: run-containerd-runc-k8s.io-c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2-runc.Ld6o69.mount: Deactivated successfully.
Jul 6 23:30:48.411378 systemd[1]: Started cri-containerd-c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2.scope - libcontainer container c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2.
Jul 6 23:30:48.440333 containerd[1714]: time="2025-07-06T23:30:48.440291724Z" level=info msg="StartContainer for \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\" returns successfully"
Jul 6 23:30:48.688540 kubelet[3333]: I0706 23:30:48.687698 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rgqsd" podStartSLOduration=1.14816768 podStartE2EDuration="19.687678449s" podCreationTimestamp="2025-07-06 23:30:29 +0000 UTC" firstStartedPulling="2025-07-06 23:30:29.786037962 +0000 UTC m=+6.385712160" lastFinishedPulling="2025-07-06 23:30:48.325548731 +0000 UTC m=+24.925222929" observedRunningTime="2025-07-06 23:30:48.685923335 +0000 UTC m=+25.285597633" watchObservedRunningTime="2025-07-06 23:30:48.687678449 +0000 UTC m=+25.287352747"
Jul 6 23:30:51.944401 systemd-networkd[1568]: cilium_host: Link UP
Jul 6 23:30:51.945096 systemd-networkd[1568]: cilium_net: Link UP
Jul 6 23:30:51.945500 systemd-networkd[1568]: cilium_net: Gained carrier
Jul 6 23:30:51.945708 systemd-networkd[1568]: cilium_host: Gained carrier
Jul 6 23:30:52.123588 systemd-networkd[1568]: cilium_vxlan: Link UP
Jul 6 23:30:52.123599 systemd-networkd[1568]: cilium_vxlan: Gained carrier
Jul 6 23:30:52.411283 kernel: NET: Registered PF_ALG protocol family
Jul 6 23:30:52.441554 systemd-networkd[1568]: cilium_net: Gained IPv6LL
Jul 6 23:30:52.969392 systemd-networkd[1568]: cilium_host: Gained IPv6LL
Jul 6 23:30:53.338114 systemd-networkd[1568]: lxc_health: Link UP
Jul 6 23:30:53.342676 systemd-networkd[1568]: lxc_health: Gained carrier
Jul 6 23:30:53.354619 systemd-networkd[1568]: cilium_vxlan: Gained IPv6LL
Jul 6 23:30:53.837913 systemd-networkd[1568]: lxc967359692582: Link UP
Jul 6 23:30:53.847327 kernel: eth0: renamed from tmpc93b0
Jul 6 23:30:53.858111 systemd-networkd[1568]: lxc967359692582: Gained carrier
Jul 6 23:30:53.858438 systemd-networkd[1568]: lxc60af0ffbeaea: Link UP
Jul 6 23:30:53.874415 kernel: eth0: renamed from tmp78768
Jul 6 23:30:53.882440 systemd-networkd[1568]: lxc60af0ffbeaea: Gained carrier
Jul 6 23:30:55.081496 systemd-networkd[1568]: lxc_health: Gained IPv6LL
Jul 6 23:30:55.593400 systemd-networkd[1568]: lxc967359692582: Gained IPv6LL
Jul 6 23:30:55.721373 systemd-networkd[1568]: lxc60af0ffbeaea: Gained IPv6LL
Jul 6 23:30:57.649580 containerd[1714]: time="2025-07-06T23:30:57.649486024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:30:57.652316 containerd[1714]: time="2025-07-06T23:30:57.651826642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:30:57.652760 containerd[1714]: time="2025-07-06T23:30:57.652660849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:30:57.653823 containerd[1714]: time="2025-07-06T23:30:57.653687757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:30:57.656164 containerd[1714]: time="2025-07-06T23:30:57.655852573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:30:57.656164 containerd[1714]: time="2025-07-06T23:30:57.655901574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:30:57.656164 containerd[1714]: time="2025-07-06T23:30:57.655936574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:30:57.656651 containerd[1714]: time="2025-07-06T23:30:57.656133776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:30:57.705451 systemd[1]: Started cri-containerd-7876818055e8ff059a9b99d5ea6df7028e3dd0a92225411db0131173bfeac03f.scope - libcontainer container 7876818055e8ff059a9b99d5ea6df7028e3dd0a92225411db0131173bfeac03f.
Jul 6 23:30:57.706955 systemd[1]: Started cri-containerd-c93b09fe4c53eb16788afed7b8ab5df105616534dd89240783ab3af122758913.scope - libcontainer container c93b09fe4c53eb16788afed7b8ab5df105616534dd89240783ab3af122758913.
Jul 6 23:30:57.783810 containerd[1714]: time="2025-07-06T23:30:57.783702462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mmc4p,Uid:1d973dc8-9eb8-4300-9b85-abbba7eafac1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c93b09fe4c53eb16788afed7b8ab5df105616534dd89240783ab3af122758913\""
Jul 6 23:30:57.788433 containerd[1714]: time="2025-07-06T23:30:57.788295698Z" level=info msg="CreateContainer within sandbox \"c93b09fe4c53eb16788afed7b8ab5df105616534dd89240783ab3af122758913\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:30:57.820651 containerd[1714]: time="2025-07-06T23:30:57.819629440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4tkmf,Uid:bfae587e-37d3-473c-99a9-89ec40a23715,Namespace:kube-system,Attempt:0,} returns sandbox id \"7876818055e8ff059a9b99d5ea6df7028e3dd0a92225411db0131173bfeac03f\""
Jul 6 23:30:57.822296 containerd[1714]: time="2025-07-06T23:30:57.822053159Z" level=info msg="CreateContainer within sandbox \"7876818055e8ff059a9b99d5ea6df7028e3dd0a92225411db0131173bfeac03f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:30:57.866294 containerd[1714]: time="2025-07-06T23:30:57.866257701Z" level=info msg="CreateContainer within sandbox \"c93b09fe4c53eb16788afed7b8ab5df105616534dd89240783ab3af122758913\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4c3c2b0a46ddf71c168106eb4b097df9c5daaf96eec293bba44340aee3b348c\""
Jul 6 23:30:57.866959 containerd[1714]: time="2025-07-06T23:30:57.866791405Z" level=info msg="StartContainer for \"d4c3c2b0a46ddf71c168106eb4b097df9c5daaf96eec293bba44340aee3b348c\""
Jul 6 23:30:57.869966 containerd[1714]: time="2025-07-06T23:30:57.869729328Z" level=info msg="CreateContainer within sandbox \"7876818055e8ff059a9b99d5ea6df7028e3dd0a92225411db0131173bfeac03f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d2ed697d9a844a724585f04868b03af1c3c95bd217aed8c49f3692604441460\""
Jul 6 23:30:57.871352 containerd[1714]: time="2025-07-06T23:30:57.870406333Z" level=info msg="StartContainer for \"5d2ed697d9a844a724585f04868b03af1c3c95bd217aed8c49f3692604441460\""
Jul 6 23:30:57.910381 systemd[1]: Started cri-containerd-d4c3c2b0a46ddf71c168106eb4b097df9c5daaf96eec293bba44340aee3b348c.scope - libcontainer container d4c3c2b0a46ddf71c168106eb4b097df9c5daaf96eec293bba44340aee3b348c.
Jul 6 23:30:57.914326 systemd[1]: Started cri-containerd-5d2ed697d9a844a724585f04868b03af1c3c95bd217aed8c49f3692604441460.scope - libcontainer container 5d2ed697d9a844a724585f04868b03af1c3c95bd217aed8c49f3692604441460.
Jul 6 23:30:57.960077 containerd[1714]: time="2025-07-06T23:30:57.959849425Z" level=info msg="StartContainer for \"5d2ed697d9a844a724585f04868b03af1c3c95bd217aed8c49f3692604441460\" returns successfully"
Jul 6 23:30:57.961221 containerd[1714]: time="2025-07-06T23:30:57.960657531Z" level=info msg="StartContainer for \"d4c3c2b0a46ddf71c168106eb4b097df9c5daaf96eec293bba44340aee3b348c\" returns successfully"
Jul 6 23:30:58.648902 kubelet[3333]: I0706 23:30:58.647922 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mmc4p" podStartSLOduration=29.647903948 podStartE2EDuration="29.647903948s" podCreationTimestamp="2025-07-06 23:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:30:58.646618538 +0000 UTC m=+35.246292736" watchObservedRunningTime="2025-07-06 23:30:58.647903948 +0000 UTC m=+35.247578246"
Jul 6 23:30:58.673652 kubelet[3333]: I0706 23:30:58.672003 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4tkmf" podStartSLOduration=29.671981134 podStartE2EDuration="29.671981134s" podCreationTimestamp="2025-07-06 23:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:30:58.671257128 +0000 UTC m=+35.270931326" watchObservedRunningTime="2025-07-06 23:30:58.671981134 +0000 UTC m=+35.271655332"
Jul 6 23:31:31.300527 systemd[1]: Started sshd@7-10.200.8.45:22-10.200.16.10:52058.service - OpenSSH per-connection server daemon (10.200.16.10:52058).
Jul 6 23:31:31.926783 sshd[4711]: Accepted publickey for core from 10.200.16.10 port 52058 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:31:31.928513 sshd-session[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:31:31.933136 systemd-logind[1698]: New session 10 of user core.
Jul 6 23:31:31.941381 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:31:32.443723 sshd[4713]: Connection closed by 10.200.16.10 port 52058
Jul 6 23:31:32.444827 sshd-session[4711]: pam_unix(sshd:session): session closed for user core
Jul 6 23:31:32.449900 systemd[1]: sshd@7-10.200.8.45:22-10.200.16.10:52058.service: Deactivated successfully.
Jul 6 23:31:32.452158 systemd[1]: session-10.scope: Deactivated successfully.
Jul 6 23:31:32.453181 systemd-logind[1698]: Session 10 logged out. Waiting for processes to exit.
Jul 6 23:31:32.454145 systemd-logind[1698]: Removed session 10.
Jul 6 23:31:37.564508 systemd[1]: Started sshd@8-10.200.8.45:22-10.200.16.10:52072.service - OpenSSH per-connection server daemon (10.200.16.10:52072).
Jul 6 23:31:38.191416 sshd[4726]: Accepted publickey for core from 10.200.16.10 port 52072 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:31:38.193335 sshd-session[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:31:38.198701 systemd-logind[1698]: New session 11 of user core.
Jul 6 23:31:38.208355 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 6 23:31:38.708168 sshd[4728]: Connection closed by 10.200.16.10 port 52072
Jul 6 23:31:38.709085 sshd-session[4726]: pam_unix(sshd:session): session closed for user core
Jul 6 23:31:38.712738 systemd[1]: sshd@8-10.200.8.45:22-10.200.16.10:52072.service: Deactivated successfully.
Jul 6 23:31:38.715972 systemd[1]: session-11.scope: Deactivated successfully.
Jul 6 23:31:38.717727 systemd-logind[1698]: Session 11 logged out. Waiting for processes to exit.
Jul 6 23:31:38.718649 systemd-logind[1698]: Removed session 11.
Jul 6 23:31:43.824522 systemd[1]: Started sshd@9-10.200.8.45:22-10.200.16.10:42456.service - OpenSSH per-connection server daemon (10.200.16.10:42456).
Jul 6 23:31:44.450014 sshd[4741]: Accepted publickey for core from 10.200.16.10 port 42456 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:31:44.451574 sshd-session[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:31:44.456240 systemd-logind[1698]: New session 12 of user core.
Jul 6 23:31:44.463390 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:31:44.957537 sshd[4743]: Connection closed by 10.200.16.10 port 42456
Jul 6 23:31:44.958512 sshd-session[4741]: pam_unix(sshd:session): session closed for user core
Jul 6 23:31:44.961906 systemd[1]: sshd@9-10.200.8.45:22-10.200.16.10:42456.service: Deactivated successfully.
Jul 6 23:31:44.964492 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:31:44.966132 systemd-logind[1698]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:31:44.967372 systemd-logind[1698]: Removed session 12.
Jul 6 23:31:50.076536 systemd[1]: Started sshd@10-10.200.8.45:22-10.200.16.10:33320.service - OpenSSH per-connection server daemon (10.200.16.10:33320).
Jul 6 23:31:50.702534 sshd[4756]: Accepted publickey for core from 10.200.16.10 port 33320 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:31:50.704257 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:31:50.709624 systemd-logind[1698]: New session 13 of user core.
Jul 6 23:31:50.719443 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:31:51.217159 sshd[4758]: Connection closed by 10.200.16.10 port 33320
Jul 6 23:31:51.217983 sshd-session[4756]: pam_unix(sshd:session): session closed for user core
Jul 6 23:31:51.223634 systemd[1]: sshd@10-10.200.8.45:22-10.200.16.10:33320.service: Deactivated successfully.
Jul 6 23:31:51.223776 systemd-logind[1698]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:31:51.229457 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:31:51.232449 systemd-logind[1698]: Removed session 13.
Jul 6 23:31:56.335542 systemd[1]: Started sshd@11-10.200.8.45:22-10.200.16.10:33332.service - OpenSSH per-connection server daemon (10.200.16.10:33332).
Jul 6 23:31:56.973670 sshd[4772]: Accepted publickey for core from 10.200.16.10 port 33332 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:31:56.975549 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:31:56.980867 systemd-logind[1698]: New session 14 of user core.
Jul 6 23:31:56.985389 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:31:57.480375 sshd[4774]: Connection closed by 10.200.16.10 port 33332
Jul 6 23:31:57.481564 sshd-session[4772]: pam_unix(sshd:session): session closed for user core
Jul 6 23:31:57.486572 systemd[1]: sshd@11-10.200.8.45:22-10.200.16.10:33332.service: Deactivated successfully.
Jul 6 23:31:57.489171 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:31:57.490455 systemd-logind[1698]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:31:57.491915 systemd-logind[1698]: Removed session 14.
Jul 6 23:31:57.599513 systemd[1]: Started sshd@12-10.200.8.45:22-10.200.16.10:33342.service - OpenSSH per-connection server daemon (10.200.16.10:33342).
Jul 6 23:31:58.225255 sshd[4787]: Accepted publickey for core from 10.200.16.10 port 33342 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:31:58.227810 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:31:58.233350 systemd-logind[1698]: New session 15 of user core.
Jul 6 23:31:58.240391 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:31:58.780457 sshd[4789]: Connection closed by 10.200.16.10 port 33342
Jul 6 23:31:58.781619 sshd-session[4787]: pam_unix(sshd:session): session closed for user core
Jul 6 23:31:58.785837 systemd-logind[1698]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:31:58.786488 systemd[1]: sshd@12-10.200.8.45:22-10.200.16.10:33342.service: Deactivated successfully.
Jul 6 23:31:58.788855 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:31:58.790283 systemd-logind[1698]: Removed session 15.
Jul 6 23:31:58.895550 systemd[1]: Started sshd@13-10.200.8.45:22-10.200.16.10:33356.service - OpenSSH per-connection server daemon (10.200.16.10:33356).
Jul 6 23:31:59.523243 sshd[4799]: Accepted publickey for core from 10.200.16.10 port 33356 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:31:59.524321 sshd-session[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:31:59.530685 systemd-logind[1698]: New session 16 of user core.
Jul 6 23:31:59.535400 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:32:00.035515 sshd[4801]: Connection closed by 10.200.16.10 port 33356
Jul 6 23:32:00.036413 sshd-session[4799]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:00.041099 systemd[1]: sshd@13-10.200.8.45:22-10.200.16.10:33356.service: Deactivated successfully.
Jul 6 23:32:00.043899 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:32:00.045044 systemd-logind[1698]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:32:00.046100 systemd-logind[1698]: Removed session 16.
Jul 6 23:32:05.155543 systemd[1]: Started sshd@14-10.200.8.45:22-10.200.16.10:35820.service - OpenSSH per-connection server daemon (10.200.16.10:35820).
Jul 6 23:32:05.785968 sshd[4815]: Accepted publickey for core from 10.200.16.10 port 35820 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:32:05.787535 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:05.792115 systemd-logind[1698]: New session 17 of user core.
Jul 6 23:32:05.797369 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:32:06.290050 sshd[4817]: Connection closed by 10.200.16.10 port 35820
Jul 6 23:32:06.291034 sshd-session[4815]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:06.295563 systemd[1]: sshd@14-10.200.8.45:22-10.200.16.10:35820.service: Deactivated successfully.
Jul 6 23:32:06.297860 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:32:06.299022 systemd-logind[1698]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:32:06.299989 systemd-logind[1698]: Removed session 17.
Jul 6 23:32:06.407537 systemd[1]: Started sshd@15-10.200.8.45:22-10.200.16.10:35822.service - OpenSSH per-connection server daemon (10.200.16.10:35822).
Jul 6 23:32:07.035017 sshd[4829]: Accepted publickey for core from 10.200.16.10 port 35822 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:32:07.036774 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:07.041095 systemd-logind[1698]: New session 18 of user core.
Jul 6 23:32:07.046510 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:32:07.603122 sshd[4831]: Connection closed by 10.200.16.10 port 35822
Jul 6 23:32:07.603987 sshd-session[4829]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:07.608578 systemd[1]: sshd@15-10.200.8.45:22-10.200.16.10:35822.service: Deactivated successfully.
Jul 6 23:32:07.610838 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:32:07.611994 systemd-logind[1698]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:32:07.613055 systemd-logind[1698]: Removed session 18.
Jul 6 23:32:07.718521 systemd[1]: Started sshd@16-10.200.8.45:22-10.200.16.10:35836.service - OpenSSH per-connection server daemon (10.200.16.10:35836).
Jul 6 23:32:08.345440 sshd[4841]: Accepted publickey for core from 10.200.16.10 port 35836 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:32:08.347065 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:08.351689 systemd-logind[1698]: New session 19 of user core.
Jul 6 23:32:08.356380 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:32:09.678911 sshd[4843]: Connection closed by 10.200.16.10 port 35836
Jul 6 23:32:09.679928 sshd-session[4841]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:09.684935 systemd[1]: sshd@16-10.200.8.45:22-10.200.16.10:35836.service: Deactivated successfully.
Jul 6 23:32:09.687056 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:32:09.688115 systemd-logind[1698]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:32:09.689823 systemd-logind[1698]: Removed session 19.
Jul 6 23:32:09.802516 systemd[1]: Started sshd@17-10.200.8.45:22-10.200.16.10:37776.service - OpenSSH per-connection server daemon (10.200.16.10:37776).
Jul 6 23:32:10.428933 sshd[4860]: Accepted publickey for core from 10.200.16.10 port 37776 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:32:10.430637 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:10.435161 systemd-logind[1698]: New session 20 of user core.
Jul 6 23:32:10.443371 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:32:11.061678 sshd[4862]: Connection closed by 10.200.16.10 port 37776
Jul 6 23:32:11.062778 sshd-session[4860]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:11.066416 systemd[1]: sshd@17-10.200.8.45:22-10.200.16.10:37776.service: Deactivated successfully.
Jul 6 23:32:11.069678 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:32:11.071193 systemd-logind[1698]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:32:11.072422 systemd-logind[1698]: Removed session 20.
Jul 6 23:32:11.173867 systemd[1]: Started sshd@18-10.200.8.45:22-10.200.16.10:37778.service - OpenSSH per-connection server daemon (10.200.16.10:37778).
Jul 6 23:32:11.815986 sshd[4872]: Accepted publickey for core from 10.200.16.10 port 37778 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:32:11.817696 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:11.823052 systemd-logind[1698]: New session 21 of user core.
Jul 6 23:32:11.830422 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:32:12.316572 sshd[4874]: Connection closed by 10.200.16.10 port 37778
Jul 6 23:32:12.318490 sshd-session[4872]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:12.322738 systemd-logind[1698]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:32:12.323419 systemd[1]: sshd@18-10.200.8.45:22-10.200.16.10:37778.service: Deactivated successfully.
Jul 6 23:32:12.325770 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:32:12.327087 systemd-logind[1698]: Removed session 21.
Jul 6 23:32:17.437526 systemd[1]: Started sshd@19-10.200.8.45:22-10.200.16.10:37786.service - OpenSSH per-connection server daemon (10.200.16.10:37786).
Jul 6 23:32:18.061755 sshd[4888]: Accepted publickey for core from 10.200.16.10 port 37786 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:32:18.063365 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:18.067999 systemd-logind[1698]: New session 22 of user core.
Jul 6 23:32:18.070448 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:32:18.575639 sshd[4890]: Connection closed by 10.200.16.10 port 37786
Jul 6 23:32:18.577492 sshd-session[4888]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:18.582677 systemd[1]: sshd@19-10.200.8.45:22-10.200.16.10:37786.service: Deactivated successfully.
Jul 6 23:32:18.585094 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:32:18.586347 systemd-logind[1698]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:32:18.587614 systemd-logind[1698]: Removed session 22.
Jul 6 23:32:23.696516 systemd[1]: Started sshd@20-10.200.8.45:22-10.200.16.10:56978.service - OpenSSH per-connection server daemon (10.200.16.10:56978).
Jul 6 23:32:24.325408 sshd[4904]: Accepted publickey for core from 10.200.16.10 port 56978 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:32:24.326960 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:24.332009 systemd-logind[1698]: New session 23 of user core.
Jul 6 23:32:24.347376 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:32:24.829713 sshd[4906]: Connection closed by 10.200.16.10 port 56978
Jul 6 23:32:24.830566 sshd-session[4904]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:24.834507 systemd[1]: sshd@20-10.200.8.45:22-10.200.16.10:56978.service: Deactivated successfully.
Jul 6 23:32:24.836712 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:32:24.837842 systemd-logind[1698]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:32:24.839068 systemd-logind[1698]: Removed session 23.
Jul 6 23:32:29.948751 systemd[1]: Started sshd@21-10.200.8.45:22-10.200.16.10:56572.service - OpenSSH per-connection server daemon (10.200.16.10:56572).
Jul 6 23:32:30.576446 sshd[4921]: Accepted publickey for core from 10.200.16.10 port 56572 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:32:30.578014 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:30.582547 systemd-logind[1698]: New session 24 of user core.
Jul 6 23:32:30.588371 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 6 23:32:31.088160 sshd[4923]: Connection closed by 10.200.16.10 port 56572
Jul 6 23:32:31.089194 sshd-session[4921]: pam_unix(sshd:session): session closed for user core
Jul 6 23:32:31.093326 systemd[1]: sshd@21-10.200.8.45:22-10.200.16.10:56572.service: Deactivated successfully.
Jul 6 23:32:31.095983 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:32:31.097832 systemd-logind[1698]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:32:31.098975 systemd-logind[1698]: Removed session 24.
Jul 6 23:32:31.213492 systemd[1]: Started sshd@22-10.200.8.45:22-10.200.16.10:56584.service - OpenSSH per-connection server daemon (10.200.16.10:56584).
Jul 6 23:32:31.840325 sshd[4935]: Accepted publickey for core from 10.200.16.10 port 56584 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM
Jul 6 23:32:31.841770 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:32:31.846617 systemd-logind[1698]: New session 25 of user core.
Jul 6 23:32:31.850424 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:32:33.539293 containerd[1714]: time="2025-07-06T23:32:33.539238601Z" level=info msg="StopContainer for \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\" with timeout 30 (s)"
Jul 6 23:32:33.544896 containerd[1714]: time="2025-07-06T23:32:33.544341140Z" level=info msg="Stop container \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\" with signal terminated"
Jul 6 23:32:33.572299 systemd[1]: cri-containerd-c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2.scope: Deactivated successfully.
Jul 6 23:32:33.599029 containerd[1714]: time="2025-07-06T23:32:33.597878752Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:32:33.610495 containerd[1714]: time="2025-07-06T23:32:33.610459548Z" level=info msg="StopContainer for \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\" with timeout 2 (s)"
Jul 6 23:32:33.610764 containerd[1714]: time="2025-07-06T23:32:33.610739150Z" level=info msg="Stop container \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\" with signal terminated"
Jul 6 23:32:33.617601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2-rootfs.mount: Deactivated successfully.
Jul 6 23:32:33.620501 kubelet[3333]: E0706 23:32:33.620465 3333 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:32:33.624788 systemd-networkd[1568]: lxc_health: Link DOWN
Jul 6 23:32:33.624800 systemd-networkd[1568]: lxc_health: Lost carrier
Jul 6 23:32:33.641782 systemd[1]: cri-containerd-ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a.scope: Deactivated successfully.
Jul 6 23:32:33.642143 systemd[1]: cri-containerd-ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a.scope: Consumed 7.274s CPU time, 126.9M memory peak, 120K read from disk, 13.3M written to disk.
Jul 6 23:32:33.669477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a-rootfs.mount: Deactivated successfully.
Jul 6 23:32:33.699273 containerd[1714]: time="2025-07-06T23:32:33.699193529Z" level=info msg="shim disconnected" id=ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a namespace=k8s.io
Jul 6 23:32:33.699273 containerd[1714]: time="2025-07-06T23:32:33.699271030Z" level=warning msg="cleaning up after shim disconnected" id=ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a namespace=k8s.io
Jul 6 23:32:33.699737 containerd[1714]: time="2025-07-06T23:32:33.699284630Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:32:33.700298 containerd[1714]: time="2025-07-06T23:32:33.700134737Z" level=info msg="shim disconnected" id=c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2 namespace=k8s.io
Jul 6 23:32:33.700298 containerd[1714]: time="2025-07-06T23:32:33.700189937Z" level=warning msg="cleaning up after shim disconnected" id=c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2 namespace=k8s.io
Jul 6 23:32:33.700298 containerd[1714]: time="2025-07-06T23:32:33.700231937Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:32:33.722166 containerd[1714]: time="2025-07-06T23:32:33.722117406Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:32:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 6 23:32:33.728981 containerd[1714]: time="2025-07-06T23:32:33.728919358Z" level=info msg="StopContainer for \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\" returns successfully"
Jul 6 23:32:33.728981 containerd[1714]: time="2025-07-06T23:32:33.728945358Z" level=info msg="StopContainer for \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\" returns successfully"
Jul 6 23:32:33.729713 containerd[1714]: time="2025-07-06T23:32:33.729681964Z" level=info msg="StopPodSandbox for \"c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77\""
Jul 6 23:32:33.729805 containerd[1714]: time="2025-07-06T23:32:33.729724464Z" level=info msg="Container to stop \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:32:33.732450 containerd[1714]: time="2025-07-06T23:32:33.732422485Z" level=info msg="StopPodSandbox for \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\""
Jul 6 23:32:33.732550 containerd[1714]: time="2025-07-06T23:32:33.732466785Z" level=info msg="Container to stop \"68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:32:33.732550 containerd[1714]: time="2025-07-06T23:32:33.732507985Z" level=info msg="Container to stop \"d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:32:33.732550 containerd[1714]: time="2025-07-06T23:32:33.732522485Z" level=info msg="Container to stop \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:32:33.732550 containerd[1714]: time="2025-07-06T23:32:33.732533785Z" level=info msg="Container to stop \"7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:32:33.732550 containerd[1714]: time="2025-07-06T23:32:33.732545886Z" level=info msg="Container to stop \"9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:32:33.732903 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77-shm.mount: Deactivated successfully.
Jul 6 23:32:33.744486 systemd[1]: cri-containerd-c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77.scope: Deactivated successfully.
Jul 6 23:32:33.748119 systemd[1]: cri-containerd-43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381.scope: Deactivated successfully.
Jul 6 23:32:33.792995 containerd[1714]: time="2025-07-06T23:32:33.791943542Z" level=info msg="shim disconnected" id=c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77 namespace=k8s.io
Jul 6 23:32:33.792995 containerd[1714]: time="2025-07-06T23:32:33.791991342Z" level=warning msg="cleaning up after shim disconnected" id=c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77 namespace=k8s.io
Jul 6 23:32:33.792995 containerd[1714]: time="2025-07-06T23:32:33.792001942Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:32:33.799187 containerd[1714]: time="2025-07-06T23:32:33.799137097Z" level=info msg="shim disconnected" id=43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381 namespace=k8s.io
Jul 6 23:32:33.799187 containerd[1714]: time="2025-07-06T23:32:33.799184597Z" level=warning msg="cleaning up after shim disconnected" id=43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381 namespace=k8s.io
Jul 6 23:32:33.799509 containerd[1714]: time="2025-07-06T23:32:33.799195997Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:32:33.813379 containerd[1714]: time="2025-07-06T23:32:33.813270205Z" level=info msg="TearDown network for sandbox \"c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77\" successfully"
Jul 6 23:32:33.813703 containerd[1714]: time="2025-07-06T23:32:33.813516807Z" level=info msg="StopPodSandbox for \"c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77\" returns successfully"
Jul 6 23:32:33.819095 containerd[1714]: time="2025-07-06T23:32:33.819067950Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:32:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 6 23:32:33.821129 containerd[1714]: time="2025-07-06T23:32:33.821100166Z" level=info msg="TearDown network for sandbox \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\" successfully"
Jul 6 23:32:33.821129 containerd[1714]: time="2025-07-06T23:32:33.821127766Z" level=info msg="StopPodSandbox for \"43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381\" returns successfully"
Jul 6 23:32:33.830226 kubelet[3333]: I0706 23:32:33.829551 3333 scope.go:117] "RemoveContainer" containerID="c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2"
Jul 6 23:32:33.831845 containerd[1714]: time="2025-07-06T23:32:33.831802648Z" level=info msg="RemoveContainer for \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\""
Jul 6 23:32:33.842855 containerd[1714]: time="2025-07-06T23:32:33.842826032Z" level=info msg="RemoveContainer for \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\" returns successfully"
Jul 6 23:32:33.844105 kubelet[3333]: I0706 23:32:33.843057 3333 scope.go:117] "RemoveContainer" containerID="c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2"
Jul 6 23:32:33.844105 kubelet[3333]: E0706 23:32:33.843446 3333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\": not found" containerID="c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2"
Jul 6 23:32:33.844105 kubelet[3333]: I0706 23:32:33.843474 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2"} err="failed to get container status \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\": not found"
Jul 6 23:32:33.844105 kubelet[3333]: I0706 23:32:33.843559 3333 scope.go:117] "RemoveContainer" containerID="ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a"
Jul 6 23:32:33.844295 containerd[1714]: time="2025-07-06T23:32:33.843331736Z" level=error msg="ContainerStatus for \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c32bdc0ef315b027eb2ed9c8fc6b4edbd2a0a74392c391dd24df79e5d45549a2\": not found"
Jul 6 23:32:33.845385 containerd[1714]: time="2025-07-06T23:32:33.845328852Z" level=info msg="RemoveContainer for \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\""
Jul 6 23:32:33.855523 containerd[1714]: time="2025-07-06T23:32:33.855492430Z" level=info msg="RemoveContainer for \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\" returns successfully"
Jul 6 23:32:33.855672 kubelet[3333]: I0706 23:32:33.855647 3333 scope.go:117] "RemoveContainer" containerID="d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208"
Jul 6 23:32:33.856591 containerd[1714]: time="2025-07-06T23:32:33.856565538Z" level=info msg="RemoveContainer for \"d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208\""
Jul 6 23:32:33.864684 containerd[1714]: time="2025-07-06T23:32:33.864648100Z" level=info msg="RemoveContainer for \"d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208\" returns successfully"
Jul 6 23:32:33.864843 kubelet[3333]: I0706 23:32:33.864820 3333 scope.go:117] "RemoveContainer" containerID="68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75"
Jul 6 23:32:33.865775 containerd[1714]: time="2025-07-06T23:32:33.865736508Z" level=info msg="RemoveContainer for \"68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75\""
Jul 6 23:32:33.875676 containerd[1714]: time="2025-07-06T23:32:33.875638584Z" level=info msg="RemoveContainer for \"68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75\" returns successfully"
Jul 6 23:32:33.875902 kubelet[3333]: I0706
23:32:33.875870 3333 scope.go:117] "RemoveContainer" containerID="9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3" Jul 6 23:32:33.876951 containerd[1714]: time="2025-07-06T23:32:33.876926294Z" level=info msg="RemoveContainer for \"9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3\"" Jul 6 23:32:33.889738 containerd[1714]: time="2025-07-06T23:32:33.889690192Z" level=info msg="RemoveContainer for \"9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3\" returns successfully" Jul 6 23:32:33.889987 kubelet[3333]: I0706 23:32:33.889948 3333 scope.go:117] "RemoveContainer" containerID="7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c" Jul 6 23:32:33.891260 containerd[1714]: time="2025-07-06T23:32:33.891236704Z" level=info msg="RemoveContainer for \"7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c\"" Jul 6 23:32:33.899427 containerd[1714]: time="2025-07-06T23:32:33.899399867Z" level=info msg="RemoveContainer for \"7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c\" returns successfully" Jul 6 23:32:33.899600 kubelet[3333]: I0706 23:32:33.899545 3333 scope.go:117] "RemoveContainer" containerID="ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a" Jul 6 23:32:33.899790 containerd[1714]: time="2025-07-06T23:32:33.899755470Z" level=error msg="ContainerStatus for \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\": not found" Jul 6 23:32:33.900005 kubelet[3333]: E0706 23:32:33.899976 3333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\": not found" containerID="ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a" Jul 6 
23:32:33.900091 kubelet[3333]: I0706 23:32:33.900003 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a"} err="failed to get container status \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac828e1183f0d25f78066777f92d7f662e83babf8734b71073d4c284b796fc2a\": not found" Jul 6 23:32:33.900091 kubelet[3333]: I0706 23:32:33.900032 3333 scope.go:117] "RemoveContainer" containerID="d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208" Jul 6 23:32:33.900298 containerd[1714]: time="2025-07-06T23:32:33.900263573Z" level=error msg="ContainerStatus for \"d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208\": not found" Jul 6 23:32:33.900456 kubelet[3333]: E0706 23:32:33.900431 3333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208\": not found" containerID="d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208" Jul 6 23:32:33.900559 kubelet[3333]: I0706 23:32:33.900463 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208"} err="failed to get container status \"d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8dfab38f46d786d1dffcffa7508a364fe1018e63b6c98315eb8faaa49770208\": not found" Jul 6 23:32:33.900559 kubelet[3333]: I0706 23:32:33.900498 3333 scope.go:117] "RemoveContainer" 
containerID="68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75" Jul 6 23:32:33.900814 containerd[1714]: time="2025-07-06T23:32:33.900780977Z" level=error msg="ContainerStatus for \"68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75\": not found" Jul 6 23:32:33.900940 kubelet[3333]: E0706 23:32:33.900907 3333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75\": not found" containerID="68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75" Jul 6 23:32:33.901000 kubelet[3333]: I0706 23:32:33.900935 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75"} err="failed to get container status \"68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75\": rpc error: code = NotFound desc = an error occurred when try to find container \"68ca208fc16242c636c05b3b71bd79794e894dc26dc35f977516f420f9dbaf75\": not found" Jul 6 23:32:33.901000 kubelet[3333]: I0706 23:32:33.900954 3333 scope.go:117] "RemoveContainer" containerID="9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3" Jul 6 23:32:33.901248 containerd[1714]: time="2025-07-06T23:32:33.901198481Z" level=error msg="ContainerStatus for \"9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3\": not found" Jul 6 23:32:33.901463 kubelet[3333]: E0706 23:32:33.901442 3333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3\": not found" containerID="9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3" Jul 6 23:32:33.901543 kubelet[3333]: I0706 23:32:33.901473 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3"} err="failed to get container status \"9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"9093472500a8264d6db9e9f30112efb27ee13d29879a0ef6813aca8c20fd60f3\": not found" Jul 6 23:32:33.901543 kubelet[3333]: I0706 23:32:33.901494 3333 scope.go:117] "RemoveContainer" containerID="7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c" Jul 6 23:32:33.901738 containerd[1714]: time="2025-07-06T23:32:33.901702384Z" level=error msg="ContainerStatus for \"7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c\": not found" Jul 6 23:32:33.901840 kubelet[3333]: E0706 23:32:33.901816 3333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c\": not found" containerID="7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c" Jul 6 23:32:33.901906 kubelet[3333]: I0706 23:32:33.901846 3333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c"} err="failed to get container status \"7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"7d4d381a8457cf346dbb032b6a414cb0eea9aabb3d85b784e3fd882f7fd5310c\": not found" Jul 6 23:32:33.947314 kubelet[3333]: I0706 23:32:33.947292 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-hostproc\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947402 kubelet[3333]: I0706 23:32:33.947329 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-cilium-cgroup\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947402 kubelet[3333]: I0706 23:32:33.947357 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-cilium-run\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947402 kubelet[3333]: I0706 23:32:33.947384 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb572e4c-8815-4c9c-8842-517f4765f93c-clustermesh-secrets\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947594 kubelet[3333]: I0706 23:32:33.947407 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-cni-path\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947594 kubelet[3333]: I0706 23:32:33.947431 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l9mn\" 
(UniqueName: \"kubernetes.io/projected/eb572e4c-8815-4c9c-8842-517f4765f93c-kube-api-access-9l9mn\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947594 kubelet[3333]: I0706 23:32:33.947453 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-bpf-maps\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947594 kubelet[3333]: I0706 23:32:33.947476 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8lvn\" (UniqueName: \"kubernetes.io/projected/6c8adcbe-6e22-4887-9b83-bc5124ec534e-kube-api-access-v8lvn\") pod \"6c8adcbe-6e22-4887-9b83-bc5124ec534e\" (UID: \"6c8adcbe-6e22-4887-9b83-bc5124ec534e\") " Jul 6 23:32:33.947594 kubelet[3333]: I0706 23:32:33.947497 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb572e4c-8815-4c9c-8842-517f4765f93c-hubble-tls\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947594 kubelet[3333]: I0706 23:32:33.947521 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb572e4c-8815-4c9c-8842-517f4765f93c-cilium-config-path\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947819 kubelet[3333]: I0706 23:32:33.947544 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-etc-cni-netd\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947819 kubelet[3333]: I0706 
23:32:33.947566 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-host-proc-sys-kernel\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947819 kubelet[3333]: I0706 23:32:33.947595 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c8adcbe-6e22-4887-9b83-bc5124ec534e-cilium-config-path\") pod \"6c8adcbe-6e22-4887-9b83-bc5124ec534e\" (UID: \"6c8adcbe-6e22-4887-9b83-bc5124ec534e\") " Jul 6 23:32:33.947819 kubelet[3333]: I0706 23:32:33.947616 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-xtables-lock\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947819 kubelet[3333]: I0706 23:32:33.947638 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-host-proc-sys-net\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.947819 kubelet[3333]: I0706 23:32:33.947660 3333 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-lib-modules\") pod \"eb572e4c-8815-4c9c-8842-517f4765f93c\" (UID: \"eb572e4c-8815-4c9c-8842-517f4765f93c\") " Jul 6 23:32:33.948042 kubelet[3333]: I0706 23:32:33.947737 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod 
"eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:32:33.948042 kubelet[3333]: I0706 23:32:33.947779 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-hostproc" (OuterVolumeSpecName: "hostproc") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:32:33.948042 kubelet[3333]: I0706 23:32:33.947799 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:32:33.948042 kubelet[3333]: I0706 23:32:33.947818 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:32:33.950566 kubelet[3333]: I0706 23:32:33.950541 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:32:33.954695 kubelet[3333]: I0706 23:32:33.950674 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-cni-path" (OuterVolumeSpecName: "cni-path") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:32:33.956438 kubelet[3333]: I0706 23:32:33.951162 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:32:33.956438 kubelet[3333]: I0706 23:32:33.953257 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:32:33.956438 kubelet[3333]: I0706 23:32:33.954597 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c8adcbe-6e22-4887-9b83-bc5124ec534e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c8adcbe-6e22-4887-9b83-bc5124ec534e" (UID: "6c8adcbe-6e22-4887-9b83-bc5124ec534e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:32:33.956606 kubelet[3333]: I0706 23:32:33.954640 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:32:33.956606 kubelet[3333]: I0706 23:32:33.954656 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:32:33.956606 kubelet[3333]: I0706 23:32:33.955539 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb572e4c-8815-4c9c-8842-517f4765f93c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:32:33.956606 kubelet[3333]: I0706 23:32:33.956545 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c8adcbe-6e22-4887-9b83-bc5124ec534e-kube-api-access-v8lvn" (OuterVolumeSpecName: "kube-api-access-v8lvn") pod "6c8adcbe-6e22-4887-9b83-bc5124ec534e" (UID: "6c8adcbe-6e22-4887-9b83-bc5124ec534e"). InnerVolumeSpecName "kube-api-access-v8lvn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:32:33.956774 kubelet[3333]: I0706 23:32:33.956614 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb572e4c-8815-4c9c-8842-517f4765f93c-kube-api-access-9l9mn" (OuterVolumeSpecName: "kube-api-access-9l9mn") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "kube-api-access-9l9mn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:32:33.956774 kubelet[3333]: I0706 23:32:33.956670 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb572e4c-8815-4c9c-8842-517f4765f93c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:32:33.958094 kubelet[3333]: I0706 23:32:33.958064 3333 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb572e4c-8815-4c9c-8842-517f4765f93c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eb572e4c-8815-4c9c-8842-517f4765f93c" (UID: "eb572e4c-8815-4c9c-8842-517f4765f93c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:32:34.049153 kubelet[3333]: I0706 23:32:34.048880 3333 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-bpf-maps\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.049153 kubelet[3333]: I0706 23:32:34.048993 3333 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9l9mn\" (UniqueName: \"kubernetes.io/projected/eb572e4c-8815-4c9c-8842-517f4765f93c-kube-api-access-9l9mn\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.049153 kubelet[3333]: I0706 23:32:34.049044 3333 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb572e4c-8815-4c9c-8842-517f4765f93c-cilium-config-path\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.049153 kubelet[3333]: I0706 23:32:34.049085 3333 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v8lvn\" (UniqueName: \"kubernetes.io/projected/6c8adcbe-6e22-4887-9b83-bc5124ec534e-kube-api-access-v8lvn\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.049153 kubelet[3333]: I0706 23:32:34.049126 3333 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb572e4c-8815-4c9c-8842-517f4765f93c-hubble-tls\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.049805 kubelet[3333]: I0706 23:32:34.049164 3333 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-etc-cni-netd\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.049805 kubelet[3333]: I0706 23:32:34.049198 3333 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-host-proc-sys-kernel\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.049805 kubelet[3333]: I0706 23:32:34.049289 3333 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c8adcbe-6e22-4887-9b83-bc5124ec534e-cilium-config-path\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.049805 kubelet[3333]: I0706 23:32:34.049324 3333 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-lib-modules\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.049805 kubelet[3333]: I0706 23:32:34.049358 3333 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-xtables-lock\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.049805 kubelet[3333]: I0706 23:32:34.049392 3333 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-host-proc-sys-net\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.049805 kubelet[3333]: I0706 23:32:34.049426 3333 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-hostproc\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.049805 kubelet[3333]: I0706 23:32:34.049463 3333 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-cilium-cgroup\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.050254 kubelet[3333]: I0706 23:32:34.049498 3333 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-cilium-run\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.050254 kubelet[3333]: I0706 23:32:34.049531 3333 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb572e4c-8815-4c9c-8842-517f4765f93c-clustermesh-secrets\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.050254 kubelet[3333]: I0706 23:32:34.049565 3333 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb572e4c-8815-4c9c-8842-517f4765f93c-cni-path\") on node \"ci-4230.2.1-a-d392076d12\" DevicePath \"\"" Jul 6 23:32:34.133985 systemd[1]: Removed slice kubepods-besteffort-pod6c8adcbe_6e22_4887_9b83_bc5124ec534e.slice - libcontainer container kubepods-besteffort-pod6c8adcbe_6e22_4887_9b83_bc5124ec534e.slice. Jul 6 23:32:34.553887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0dd2c853558e4317642894be4a982d7b6a118dfd8e960066084cd4c5708ac77-rootfs.mount: Deactivated successfully. Jul 6 23:32:34.554297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381-rootfs.mount: Deactivated successfully. Jul 6 23:32:34.554521 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43aca0f78d45edc786842a2749bb5553d7f9597519e4b43563392e550d60d381-shm.mount: Deactivated successfully. Jul 6 23:32:34.554711 systemd[1]: var-lib-kubelet-pods-6c8adcbe\x2d6e22\x2d4887\x2d9b83\x2dbc5124ec534e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv8lvn.mount: Deactivated successfully. Jul 6 23:32:34.554905 systemd[1]: var-lib-kubelet-pods-eb572e4c\x2d8815\x2d4c9c\x2d8842\x2d517f4765f93c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9l9mn.mount: Deactivated successfully. 
Jul 6 23:32:34.555078 systemd[1]: var-lib-kubelet-pods-eb572e4c\x2d8815\x2d4c9c\x2d8842\x2d517f4765f93c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:32:34.555278 systemd[1]: var-lib-kubelet-pods-eb572e4c\x2d8815\x2d4c9c\x2d8842\x2d517f4765f93c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:32:34.845398 systemd[1]: Removed slice kubepods-burstable-podeb572e4c_8815_4c9c_8842_517f4765f93c.slice - libcontainer container kubepods-burstable-podeb572e4c_8815_4c9c_8842_517f4765f93c.slice. Jul 6 23:32:34.845554 systemd[1]: kubepods-burstable-podeb572e4c_8815_4c9c_8842_517f4765f93c.slice: Consumed 7.359s CPU time, 127.4M memory peak, 120K read from disk, 13.3M written to disk. Jul 6 23:32:35.511674 kubelet[3333]: I0706 23:32:35.510464 3333 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c8adcbe-6e22-4887-9b83-bc5124ec534e" path="/var/lib/kubelet/pods/6c8adcbe-6e22-4887-9b83-bc5124ec534e/volumes" Jul 6 23:32:35.511674 kubelet[3333]: I0706 23:32:35.511015 3333 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb572e4c-8815-4c9c-8842-517f4765f93c" path="/var/lib/kubelet/pods/eb572e4c-8815-4c9c-8842-517f4765f93c/volumes" Jul 6 23:32:35.568656 sshd[4937]: Connection closed by 10.200.16.10 port 56584 Jul 6 23:32:35.569891 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:35.575574 systemd[1]: sshd@22-10.200.8.45:22-10.200.16.10:56584.service: Deactivated successfully. Jul 6 23:32:35.578545 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:32:35.579947 systemd-logind[1698]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:32:35.581739 systemd-logind[1698]: Removed session 25. Jul 6 23:32:35.686509 systemd[1]: Started sshd@23-10.200.8.45:22-10.200.16.10:56590.service - OpenSSH per-connection server daemon (10.200.16.10:56590). 
Jul 6 23:32:36.140237 kubelet[3333]: I0706 23:32:36.139715 3333 setters.go:602] "Node became not ready" node="ci-4230.2.1-a-d392076d12" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:32:36Z","lastTransitionTime":"2025-07-06T23:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 6 23:32:36.309865 sshd[5098]: Accepted publickey for core from 10.200.16.10 port 56590 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM Jul 6 23:32:36.311406 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:36.317162 systemd-logind[1698]: New session 26 of user core. Jul 6 23:32:36.323369 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:32:37.284638 kubelet[3333]: I0706 23:32:37.284561 3333 memory_manager.go:355] "RemoveStaleState removing state" podUID="eb572e4c-8815-4c9c-8842-517f4765f93c" containerName="cilium-agent" Jul 6 23:32:37.284638 kubelet[3333]: I0706 23:32:37.284607 3333 memory_manager.go:355] "RemoveStaleState removing state" podUID="6c8adcbe-6e22-4887-9b83-bc5124ec534e" containerName="cilium-operator" Jul 6 23:32:37.297067 systemd[1]: Created slice kubepods-burstable-pod9f845759_e0f7_49ab_8ca6_f7af52059eea.slice - libcontainer container kubepods-burstable-pod9f845759_e0f7_49ab_8ca6_f7af52059eea.slice. Jul 6 23:32:37.358042 sshd[5100]: Connection closed by 10.200.16.10 port 56590 Jul 6 23:32:37.358895 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:37.363112 systemd[1]: sshd@23-10.200.8.45:22-10.200.16.10:56590.service: Deactivated successfully. Jul 6 23:32:37.365395 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:32:37.366380 systemd-logind[1698]: Session 26 logged out. Waiting for processes to exit. 
Jul 6 23:32:37.367663 systemd-logind[1698]: Removed session 26. Jul 6 23:32:37.370672 kubelet[3333]: I0706 23:32:37.370644 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f845759-e0f7-49ab-8ca6-f7af52059eea-cilium-run\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.370775 kubelet[3333]: I0706 23:32:37.370682 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f845759-e0f7-49ab-8ca6-f7af52059eea-host-proc-sys-net\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.370775 kubelet[3333]: I0706 23:32:37.370714 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f845759-e0f7-49ab-8ca6-f7af52059eea-host-proc-sys-kernel\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.370775 kubelet[3333]: I0706 23:32:37.370741 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f845759-e0f7-49ab-8ca6-f7af52059eea-cni-path\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.370775 kubelet[3333]: I0706 23:32:37.370761 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f845759-e0f7-49ab-8ca6-f7af52059eea-hubble-tls\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.370945 kubelet[3333]: I0706 23:32:37.370796 3333 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f845759-e0f7-49ab-8ca6-f7af52059eea-cilium-cgroup\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.370945 kubelet[3333]: I0706 23:32:37.370822 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f845759-e0f7-49ab-8ca6-f7af52059eea-bpf-maps\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.370945 kubelet[3333]: I0706 23:32:37.370846 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f845759-e0f7-49ab-8ca6-f7af52059eea-lib-modules\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.370945 kubelet[3333]: I0706 23:32:37.370870 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7v2k\" (UniqueName: \"kubernetes.io/projected/9f845759-e0f7-49ab-8ca6-f7af52059eea-kube-api-access-c7v2k\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.370945 kubelet[3333]: I0706 23:32:37.370895 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f845759-e0f7-49ab-8ca6-f7af52059eea-etc-cni-netd\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.370945 kubelet[3333]: I0706 23:32:37.370917 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/9f845759-e0f7-49ab-8ca6-f7af52059eea-xtables-lock\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.371170 kubelet[3333]: I0706 23:32:37.370942 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f845759-e0f7-49ab-8ca6-f7af52059eea-clustermesh-secrets\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.371170 kubelet[3333]: I0706 23:32:37.370964 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f845759-e0f7-49ab-8ca6-f7af52059eea-cilium-config-path\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.371170 kubelet[3333]: I0706 23:32:37.370988 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f845759-e0f7-49ab-8ca6-f7af52059eea-hostproc\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.371170 kubelet[3333]: I0706 23:32:37.371015 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9f845759-e0f7-49ab-8ca6-f7af52059eea-cilium-ipsec-secrets\") pod \"cilium-htzg4\" (UID: \"9f845759-e0f7-49ab-8ca6-f7af52059eea\") " pod="kube-system/cilium-htzg4" Jul 6 23:32:37.473878 systemd[1]: Started sshd@24-10.200.8.45:22-10.200.16.10:56604.service - OpenSSH per-connection server daemon (10.200.16.10:56604). 
Jul 6 23:32:37.602342 containerd[1714]: time="2025-07-06T23:32:37.602175534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-htzg4,Uid:9f845759-e0f7-49ab-8ca6-f7af52059eea,Namespace:kube-system,Attempt:0,}" Jul 6 23:32:37.650260 containerd[1714]: time="2025-07-06T23:32:37.649931499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:32:37.650260 containerd[1714]: time="2025-07-06T23:32:37.650011699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:32:37.650260 containerd[1714]: time="2025-07-06T23:32:37.650043400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:37.650591 containerd[1714]: time="2025-07-06T23:32:37.650194601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:37.671376 systemd[1]: Started cri-containerd-161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0.scope - libcontainer container 161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0. 
Jul 6 23:32:37.695042 containerd[1714]: time="2025-07-06T23:32:37.695001943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-htzg4,Uid:9f845759-e0f7-49ab-8ca6-f7af52059eea,Namespace:kube-system,Attempt:0,} returns sandbox id \"161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0\"" Jul 6 23:32:37.697680 containerd[1714]: time="2025-07-06T23:32:37.697464362Z" level=info msg="CreateContainer within sandbox \"161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:32:37.734002 containerd[1714]: time="2025-07-06T23:32:37.733966741Z" level=info msg="CreateContainer within sandbox \"161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c091b1e553df353654d90f433742bbb7ccb3e079e7a2c1cf198f7f0efdf139ff\"" Jul 6 23:32:37.734645 containerd[1714]: time="2025-07-06T23:32:37.734605145Z" level=info msg="StartContainer for \"c091b1e553df353654d90f433742bbb7ccb3e079e7a2c1cf198f7f0efdf139ff\"" Jul 6 23:32:37.771548 systemd[1]: Started cri-containerd-c091b1e553df353654d90f433742bbb7ccb3e079e7a2c1cf198f7f0efdf139ff.scope - libcontainer container c091b1e553df353654d90f433742bbb7ccb3e079e7a2c1cf198f7f0efdf139ff. Jul 6 23:32:37.803126 containerd[1714]: time="2025-07-06T23:32:37.803086869Z" level=info msg="StartContainer for \"c091b1e553df353654d90f433742bbb7ccb3e079e7a2c1cf198f7f0efdf139ff\" returns successfully" Jul 6 23:32:37.806491 systemd[1]: cri-containerd-c091b1e553df353654d90f433742bbb7ccb3e079e7a2c1cf198f7f0efdf139ff.scope: Deactivated successfully. 
Jul 6 23:32:37.899990 containerd[1714]: time="2025-07-06T23:32:37.899818007Z" level=info msg="shim disconnected" id=c091b1e553df353654d90f433742bbb7ccb3e079e7a2c1cf198f7f0efdf139ff namespace=k8s.io Jul 6 23:32:37.899990 containerd[1714]: time="2025-07-06T23:32:37.899892908Z" level=warning msg="cleaning up after shim disconnected" id=c091b1e553df353654d90f433742bbb7ccb3e079e7a2c1cf198f7f0efdf139ff namespace=k8s.io Jul 6 23:32:37.899990 containerd[1714]: time="2025-07-06T23:32:37.899904108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:32:38.138753 sshd[5111]: Accepted publickey for core from 10.200.16.10 port 56604 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM Jul 6 23:32:38.140764 sshd-session[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:38.147163 systemd-logind[1698]: New session 27 of user core. Jul 6 23:32:38.153393 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 6 23:32:38.584501 sshd[5220]: Connection closed by 10.200.16.10 port 56604 Jul 6 23:32:38.585679 sshd-session[5111]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:38.589304 systemd[1]: sshd@24-10.200.8.45:22-10.200.16.10:56604.service: Deactivated successfully. Jul 6 23:32:38.592282 systemd[1]: session-27.scope: Deactivated successfully. Jul 6 23:32:38.594047 systemd-logind[1698]: Session 27 logged out. Waiting for processes to exit. Jul 6 23:32:38.595126 systemd-logind[1698]: Removed session 27. Jul 6 23:32:38.621909 kubelet[3333]: E0706 23:32:38.621853 3333 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:32:38.700489 systemd[1]: Started sshd@25-10.200.8.45:22-10.200.16.10:56608.service - OpenSSH per-connection server daemon (10.200.16.10:56608). 
Jul 6 23:32:38.853004 containerd[1714]: time="2025-07-06T23:32:38.852757786Z" level=info msg="CreateContainer within sandbox \"161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:32:38.894162 containerd[1714]: time="2025-07-06T23:32:38.894118102Z" level=info msg="CreateContainer within sandbox \"161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"65660cbf19c21200b8fd83dffaf23db737fe18cb3be589010ea3b3ed5bc29957\"" Jul 6 23:32:38.894634 containerd[1714]: time="2025-07-06T23:32:38.894583106Z" level=info msg="StartContainer for \"65660cbf19c21200b8fd83dffaf23db737fe18cb3be589010ea3b3ed5bc29957\"" Jul 6 23:32:38.944365 systemd[1]: Started cri-containerd-65660cbf19c21200b8fd83dffaf23db737fe18cb3be589010ea3b3ed5bc29957.scope - libcontainer container 65660cbf19c21200b8fd83dffaf23db737fe18cb3be589010ea3b3ed5bc29957. Jul 6 23:32:38.972443 containerd[1714]: time="2025-07-06T23:32:38.972317699Z" level=info msg="StartContainer for \"65660cbf19c21200b8fd83dffaf23db737fe18cb3be589010ea3b3ed5bc29957\" returns successfully" Jul 6 23:32:38.977449 systemd[1]: cri-containerd-65660cbf19c21200b8fd83dffaf23db737fe18cb3be589010ea3b3ed5bc29957.scope: Deactivated successfully. 
Jul 6 23:32:39.011941 containerd[1714]: time="2025-07-06T23:32:39.011854101Z" level=info msg="shim disconnected" id=65660cbf19c21200b8fd83dffaf23db737fe18cb3be589010ea3b3ed5bc29957 namespace=k8s.io Jul 6 23:32:39.011941 containerd[1714]: time="2025-07-06T23:32:39.011917902Z" level=warning msg="cleaning up after shim disconnected" id=65660cbf19c21200b8fd83dffaf23db737fe18cb3be589010ea3b3ed5bc29957 namespace=k8s.io Jul 6 23:32:39.011941 containerd[1714]: time="2025-07-06T23:32:39.011930702Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:32:39.330265 sshd[5227]: Accepted publickey for core from 10.200.16.10 port 56608 ssh2: RSA SHA256:CrkEq+GS/CqPhM0mP128HUaLhez9RVr/lxtrGPplanM Jul 6 23:32:39.331516 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:39.336053 systemd-logind[1698]: New session 28 of user core. Jul 6 23:32:39.347364 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 6 23:32:39.479645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65660cbf19c21200b8fd83dffaf23db737fe18cb3be589010ea3b3ed5bc29957-rootfs.mount: Deactivated successfully. 
Jul 6 23:32:39.856899 containerd[1714]: time="2025-07-06T23:32:39.856847656Z" level=info msg="CreateContainer within sandbox \"161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:32:39.906216 containerd[1714]: time="2025-07-06T23:32:39.906171632Z" level=info msg="CreateContainer within sandbox \"161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1b24628cbdc75017f048539c9f871f2e12d196e98cc0eaaef67b365dfa194596\"" Jul 6 23:32:39.906794 containerd[1714]: time="2025-07-06T23:32:39.906760837Z" level=info msg="StartContainer for \"1b24628cbdc75017f048539c9f871f2e12d196e98cc0eaaef67b365dfa194596\"" Jul 6 23:32:39.944379 systemd[1]: Started cri-containerd-1b24628cbdc75017f048539c9f871f2e12d196e98cc0eaaef67b365dfa194596.scope - libcontainer container 1b24628cbdc75017f048539c9f871f2e12d196e98cc0eaaef67b365dfa194596. Jul 6 23:32:39.990480 containerd[1714]: time="2025-07-06T23:32:39.989699870Z" level=info msg="StartContainer for \"1b24628cbdc75017f048539c9f871f2e12d196e98cc0eaaef67b365dfa194596\" returns successfully" Jul 6 23:32:39.989906 systemd[1]: cri-containerd-1b24628cbdc75017f048539c9f871f2e12d196e98cc0eaaef67b365dfa194596.scope: Deactivated successfully. 
Jul 6 23:32:40.038609 containerd[1714]: time="2025-07-06T23:32:40.038522643Z" level=info msg="shim disconnected" id=1b24628cbdc75017f048539c9f871f2e12d196e98cc0eaaef67b365dfa194596 namespace=k8s.io Jul 6 23:32:40.038609 containerd[1714]: time="2025-07-06T23:32:40.038620644Z" level=warning msg="cleaning up after shim disconnected" id=1b24628cbdc75017f048539c9f871f2e12d196e98cc0eaaef67b365dfa194596 namespace=k8s.io Jul 6 23:32:40.038913 containerd[1714]: time="2025-07-06T23:32:40.038634044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:32:40.480432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b24628cbdc75017f048539c9f871f2e12d196e98cc0eaaef67b365dfa194596-rootfs.mount: Deactivated successfully. Jul 6 23:32:40.862041 containerd[1714]: time="2025-07-06T23:32:40.861868132Z" level=info msg="CreateContainer within sandbox \"161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:32:40.902505 containerd[1714]: time="2025-07-06T23:32:40.902464242Z" level=info msg="CreateContainer within sandbox \"161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2160bccf404cfeeedc692f5d37c0680f3246e9b984980006ae216e79d8bea8c3\"" Jul 6 23:32:40.903014 containerd[1714]: time="2025-07-06T23:32:40.902947546Z" level=info msg="StartContainer for \"2160bccf404cfeeedc692f5d37c0680f3246e9b984980006ae216e79d8bea8c3\"" Jul 6 23:32:40.940392 systemd[1]: Started cri-containerd-2160bccf404cfeeedc692f5d37c0680f3246e9b984980006ae216e79d8bea8c3.scope - libcontainer container 2160bccf404cfeeedc692f5d37c0680f3246e9b984980006ae216e79d8bea8c3. Jul 6 23:32:40.973060 systemd[1]: cri-containerd-2160bccf404cfeeedc692f5d37c0680f3246e9b984980006ae216e79d8bea8c3.scope: Deactivated successfully. 
Jul 6 23:32:40.978743 containerd[1714]: time="2025-07-06T23:32:40.978702325Z" level=info msg="StartContainer for \"2160bccf404cfeeedc692f5d37c0680f3246e9b984980006ae216e79d8bea8c3\" returns successfully" Jul 6 23:32:41.012064 containerd[1714]: time="2025-07-06T23:32:41.011993379Z" level=info msg="shim disconnected" id=2160bccf404cfeeedc692f5d37c0680f3246e9b984980006ae216e79d8bea8c3 namespace=k8s.io Jul 6 23:32:41.012064 containerd[1714]: time="2025-07-06T23:32:41.012062380Z" level=warning msg="cleaning up after shim disconnected" id=2160bccf404cfeeedc692f5d37c0680f3246e9b984980006ae216e79d8bea8c3 namespace=k8s.io Jul 6 23:32:41.012350 containerd[1714]: time="2025-07-06T23:32:41.012076880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:32:41.480710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2160bccf404cfeeedc692f5d37c0680f3246e9b984980006ae216e79d8bea8c3-rootfs.mount: Deactivated successfully. Jul 6 23:32:41.867641 containerd[1714]: time="2025-07-06T23:32:41.867585314Z" level=info msg="CreateContainer within sandbox \"161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:32:41.917229 containerd[1714]: time="2025-07-06T23:32:41.917171393Z" level=info msg="CreateContainer within sandbox \"161617f3337556296f54a28baef98332ca3991e3a15999e96eb7eab2fbb812a0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b2337b14431cfda1e4f31dc6eb8de980370c8d971a70448fd96d614075f8174b\"" Jul 6 23:32:41.918800 containerd[1714]: time="2025-07-06T23:32:41.917787398Z" level=info msg="StartContainer for \"b2337b14431cfda1e4f31dc6eb8de980370c8d971a70448fd96d614075f8174b\"" Jul 6 23:32:41.959366 systemd[1]: Started cri-containerd-b2337b14431cfda1e4f31dc6eb8de980370c8d971a70448fd96d614075f8174b.scope - libcontainer container b2337b14431cfda1e4f31dc6eb8de980370c8d971a70448fd96d614075f8174b. 
Jul 6 23:32:41.994608 containerd[1714]: time="2025-07-06T23:32:41.994539484Z" level=info msg="StartContainer for \"b2337b14431cfda1e4f31dc6eb8de980370c8d971a70448fd96d614075f8174b\" returns successfully" Jul 6 23:32:42.394265 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 6 23:32:42.480484 systemd[1]: run-containerd-runc-k8s.io-b2337b14431cfda1e4f31dc6eb8de980370c8d971a70448fd96d614075f8174b-runc.2v8bbn.mount: Deactivated successfully. Jul 6 23:32:42.888101 kubelet[3333]: I0706 23:32:42.888029 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-htzg4" podStartSLOduration=5.888010008 podStartE2EDuration="5.888010008s" podCreationTimestamp="2025-07-06 23:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:32:42.887640206 +0000 UTC m=+139.487314404" watchObservedRunningTime="2025-07-06 23:32:42.888010008 +0000 UTC m=+139.487684206" Jul 6 23:32:45.315542 systemd-networkd[1568]: lxc_health: Link UP Jul 6 23:32:45.320609 systemd-networkd[1568]: lxc_health: Gained carrier Jul 6 23:32:47.209468 systemd-networkd[1568]: lxc_health: Gained IPv6LL Jul 6 23:32:50.569044 sshd[5290]: Connection closed by 10.200.16.10 port 56608 Jul 6 23:32:50.569952 sshd-session[5227]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:50.573292 systemd[1]: sshd@25-10.200.8.45:22-10.200.16.10:56608.service: Deactivated successfully. Jul 6 23:32:50.575908 systemd[1]: session-28.scope: Deactivated successfully. Jul 6 23:32:50.577661 systemd-logind[1698]: Session 28 logged out. Waiting for processes to exit. Jul 6 23:32:50.579252 systemd-logind[1698]: Removed session 28.