Aug 12 23:51:29.223882 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025
Aug 12 23:51:29.223927 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 12 23:51:29.223943 kernel: BIOS-provided physical RAM map:
Aug 12 23:51:29.223952 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 12 23:51:29.223961 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 12 23:51:29.223970 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 12 23:51:29.223981 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 12 23:51:29.223990 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 12 23:51:29.224000 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Aug 12 23:51:29.224009 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Aug 12 23:51:29.224018 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Aug 12 23:51:29.224030 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Aug 12 23:51:29.224043 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Aug 12 23:51:29.224095 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Aug 12 23:51:29.224110 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Aug 12 23:51:29.224120 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 12 23:51:29.224142 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Aug 12 23:51:29.224152 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Aug 12 23:51:29.224162 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Aug 12 23:51:29.224171 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Aug 12 23:51:29.224181 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Aug 12 23:51:29.224191 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 12 23:51:29.224200 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Aug 12 23:51:29.224210 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 12 23:51:29.224219 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Aug 12 23:51:29.224229 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 12 23:51:29.224238 kernel: NX (Execute Disable) protection: active
Aug 12 23:51:29.224252 kernel: APIC: Static calls initialized
Aug 12 23:51:29.224262 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Aug 12 23:51:29.224272 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Aug 12 23:51:29.224282 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Aug 12 23:51:29.224291 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Aug 12 23:51:29.224301 kernel: extended physical RAM map:
Aug 12 23:51:29.224311 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 12 23:51:29.224320 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 12 23:51:29.224330 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 12 23:51:29.224354 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 12 23:51:29.224364 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 12 23:51:29.224374 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Aug 12 23:51:29.224389 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Aug 12 23:51:29.224405 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Aug 12 23:51:29.224415 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Aug 12 23:51:29.224425 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Aug 12 23:51:29.224435 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Aug 12 23:51:29.224445 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Aug 12 23:51:29.224462 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Aug 12 23:51:29.224473 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Aug 12 23:51:29.224483 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Aug 12 23:51:29.224493 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Aug 12 23:51:29.224504 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 12 23:51:29.224514 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Aug 12 23:51:29.224524 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Aug 12 23:51:29.224534 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Aug 12 23:51:29.224545 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Aug 12 23:51:29.224559 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Aug 12 23:51:29.224569 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 12 23:51:29.224579 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Aug 12 23:51:29.224589 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 12 23:51:29.224603 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Aug 12 23:51:29.224613 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 12 23:51:29.224623 kernel: efi: EFI v2.7 by EDK II
Aug 12 23:51:29.224633 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Aug 12 23:51:29.224643 kernel: random: crng init done
Aug 12 23:51:29.224654 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Aug 12 23:51:29.224664 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Aug 12 23:51:29.224677 kernel: secureboot: Secure boot disabled
Aug 12 23:51:29.224691 kernel: SMBIOS 2.8 present.
Aug 12 23:51:29.224701 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Aug 12 23:51:29.224711 kernel: Hypervisor detected: KVM
Aug 12 23:51:29.224721 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 12 23:51:29.224732 kernel: kvm-clock: using sched offset of 4537467930 cycles
Aug 12 23:51:29.224743 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 12 23:51:29.224754 kernel: tsc: Detected 2794.750 MHz processor
Aug 12 23:51:29.224764 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 12 23:51:29.224775 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 12 23:51:29.224786 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Aug 12 23:51:29.224800 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 12 23:51:29.224810 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 12 23:51:29.224821 kernel: Using GB pages for direct mapping
Aug 12 23:51:29.224831 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:51:29.224842 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Aug 12 23:51:29.224853 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Aug 12 23:51:29.224863 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:29.224874 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:29.224884 kernel: ACPI: FACS 0x000000009CBDD000 000040
Aug 12 23:51:29.224898 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:29.224909 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:29.224919 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:29.224930 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:51:29.224940 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 12 23:51:29.224951 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Aug 12 23:51:29.224962 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Aug 12 23:51:29.224972 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Aug 12 23:51:29.224986 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Aug 12 23:51:29.224996 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Aug 12 23:51:29.225006 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Aug 12 23:51:29.225017 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Aug 12 23:51:29.225027 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Aug 12 23:51:29.225038 kernel: No NUMA configuration found
Aug 12 23:51:29.225076 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Aug 12 23:51:29.225087 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Aug 12 23:51:29.225098 kernel: Zone ranges:
Aug 12 23:51:29.225109 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 12 23:51:29.225123 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Aug 12 23:51:29.225144 kernel: Normal empty
Aug 12 23:51:29.225158 kernel: Movable zone start for each node
Aug 12 23:51:29.225168 kernel: Early memory node ranges
Aug 12 23:51:29.225179 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 12 23:51:29.225189 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Aug 12 23:51:29.225199 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Aug 12 23:51:29.225210 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Aug 12 23:51:29.225220 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Aug 12 23:51:29.225235 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Aug 12 23:51:29.225245 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Aug 12 23:51:29.225255 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Aug 12 23:51:29.225266 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Aug 12 23:51:29.225276 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 12 23:51:29.225287 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 12 23:51:29.225309 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Aug 12 23:51:29.225322 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 12 23:51:29.225333 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Aug 12 23:51:29.225344 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Aug 12 23:51:29.225355 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Aug 12 23:51:29.225369 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Aug 12 23:51:29.225383 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Aug 12 23:51:29.225394 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 12 23:51:29.225405 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 12 23:51:29.225416 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 12 23:51:29.225427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 12 23:51:29.225442 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 12 23:51:29.225453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 12 23:51:29.225463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 12 23:51:29.225474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 12 23:51:29.225485 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 12 23:51:29.225496 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 12 23:51:29.225507 kernel: TSC deadline timer available
Aug 12 23:51:29.225518 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 12 23:51:29.225529 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 12 23:51:29.225543 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 12 23:51:29.225554 kernel: kvm-guest: setup PV sched yield
Aug 12 23:51:29.225565 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Aug 12 23:51:29.225576 kernel: Booting paravirtualized kernel on KVM
Aug 12 23:51:29.225587 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 12 23:51:29.225599 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 12 23:51:29.225610 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Aug 12 23:51:29.225621 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Aug 12 23:51:29.225632 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 12 23:51:29.225646 kernel: kvm-guest: PV spinlocks enabled
Aug 12 23:51:29.225657 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 12 23:51:29.225669 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 12 23:51:29.225681 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:51:29.225692 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 12 23:51:29.225706 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:51:29.225717 kernel: Fallback order for Node 0: 0
Aug 12 23:51:29.225728 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Aug 12 23:51:29.225742 kernel: Policy zone: DMA32
Aug 12 23:51:29.225754 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:51:29.225765 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 177824K reserved, 0K cma-reserved)
Aug 12 23:51:29.225776 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 12 23:51:29.225787 kernel: ftrace: allocating 37942 entries in 149 pages
Aug 12 23:51:29.225798 kernel: ftrace: allocated 149 pages with 4 groups
Aug 12 23:51:29.225809 kernel: Dynamic Preempt: voluntary
Aug 12 23:51:29.225820 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:51:29.225832 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:51:29.225846 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 12 23:51:29.225857 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:51:29.225869 kernel: Rude variant of Tasks RCU enabled.
Aug 12 23:51:29.225879 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:51:29.225891 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:51:29.225902 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 12 23:51:29.225912 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 12 23:51:29.225924 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 12 23:51:29.225934 kernel: Console: colour dummy device 80x25
Aug 12 23:51:29.225949 kernel: printk: console [ttyS0] enabled
Aug 12 23:51:29.225960 kernel: ACPI: Core revision 20230628
Aug 12 23:51:29.225971 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 12 23:51:29.225982 kernel: APIC: Switch to symmetric I/O mode setup
Aug 12 23:51:29.225993 kernel: x2apic enabled
Aug 12 23:51:29.226004 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 12 23:51:29.226018 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 12 23:51:29.226030 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 12 23:51:29.226041 kernel: kvm-guest: setup PV IPIs
Aug 12 23:51:29.226070 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 12 23:51:29.226081 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 12 23:51:29.226092 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 12 23:51:29.226103 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 12 23:51:29.226114 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 12 23:51:29.226125 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 12 23:51:29.226144 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 12 23:51:29.226155 kernel: Spectre V2 : Mitigation: Retpolines
Aug 12 23:51:29.226167 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 12 23:51:29.226182 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 12 23:51:29.226193 kernel: RETBleed: Mitigation: untrained return thunk
Aug 12 23:51:29.226204 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 12 23:51:29.226215 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 12 23:51:29.226226 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 12 23:51:29.226238 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 12 23:51:29.226249 kernel: x86/bugs: return thunk changed
Aug 12 23:51:29.226264 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 12 23:51:29.226275 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 12 23:51:29.226289 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 12 23:51:29.226300 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 12 23:51:29.226311 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 12 23:51:29.226322 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 12 23:51:29.226333 kernel: Freeing SMP alternatives memory: 32K
Aug 12 23:51:29.226344 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:51:29.226355 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 12 23:51:29.226366 kernel: landlock: Up and running.
Aug 12 23:51:29.226381 kernel: SELinux: Initializing.
Aug 12 23:51:29.226392 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:51:29.226424 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:51:29.226459 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 12 23:51:29.226489 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:51:29.226502 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:51:29.226513 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:51:29.226524 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 12 23:51:29.226535 kernel: ... version: 0
Aug 12 23:51:29.226550 kernel: ... bit width: 48
Aug 12 23:51:29.226561 kernel: ... generic registers: 6
Aug 12 23:51:29.226572 kernel: ... value mask: 0000ffffffffffff
Aug 12 23:51:29.226583 kernel: ... max period: 00007fffffffffff
Aug 12 23:51:29.226594 kernel: ... fixed-purpose events: 0
Aug 12 23:51:29.226611 kernel: ... event mask: 000000000000003f
Aug 12 23:51:29.226622 kernel: signal: max sigframe size: 1776
Aug 12 23:51:29.226632 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:51:29.226644 kernel: rcu: Max phase no-delay instances is 400.
Aug 12 23:51:29.226659 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:51:29.226670 kernel: smpboot: x86: Booting SMP configuration:
Aug 12 23:51:29.226681 kernel: .... node #0, CPUs: #1 #2 #3
Aug 12 23:51:29.226692 kernel: smp: Brought up 1 node, 4 CPUs
Aug 12 23:51:29.226703 kernel: smpboot: Max logical packages: 1
Aug 12 23:51:29.226714 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 12 23:51:29.226725 kernel: devtmpfs: initialized
Aug 12 23:51:29.226736 kernel: x86/mm: Memory block size: 128MB
Aug 12 23:51:29.226747 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Aug 12 23:51:29.226758 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Aug 12 23:51:29.226773 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Aug 12 23:51:29.226784 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Aug 12 23:51:29.226795 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Aug 12 23:51:29.226806 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Aug 12 23:51:29.226817 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:51:29.226828 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 12 23:51:29.226840 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:51:29.226851 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:51:29.226865 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:51:29.226876 kernel: audit: type=2000 audit(1755042686.931:1): state=initialized audit_enabled=0 res=1
Aug 12 23:51:29.226887 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:51:29.226898 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 12 23:51:29.226909 kernel: cpuidle: using governor menu
Aug 12 23:51:29.226920 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:51:29.226931 kernel: dca service started, version 1.12.1
Aug 12 23:51:29.226942 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Aug 12 23:51:29.226953 kernel: PCI: Using configuration type 1 for base access
Aug 12 23:51:29.226968 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 12 23:51:29.226979 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 12 23:51:29.226990 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 12 23:51:29.227001 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:51:29.227012 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 12 23:51:29.227023 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:51:29.227034 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:51:29.227045 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:51:29.227072 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:51:29.227087 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 12 23:51:29.227098 kernel: ACPI: Interpreter enabled
Aug 12 23:51:29.227109 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 12 23:51:29.227120 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 12 23:51:29.227138 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 12 23:51:29.227149 kernel: PCI: Using E820 reservations for host bridge windows
Aug 12 23:51:29.227160 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 12 23:51:29.227171 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 12 23:51:29.227497 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:51:29.227782 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 12 23:51:29.227967 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 12 23:51:29.227983 kernel: PCI host bridge to bus 0000:00
Aug 12 23:51:29.229185 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 12 23:51:29.229343 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 12 23:51:29.229493 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 12 23:51:29.229651 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Aug 12 23:51:29.229803 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Aug 12 23:51:29.229954 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Aug 12 23:51:29.230168 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 12 23:51:29.231079 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 12 23:51:29.231290 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 12 23:51:29.231457 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Aug 12 23:51:29.231630 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Aug 12 23:51:29.231795 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 12 23:51:29.231961 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Aug 12 23:51:29.234210 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 12 23:51:29.234411 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 12 23:51:29.234579 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Aug 12 23:51:29.234742 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Aug 12 23:51:29.234915 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Aug 12 23:51:29.235171 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 12 23:51:29.235342 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Aug 12 23:51:29.235507 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Aug 12 23:51:29.235672 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Aug 12 23:51:29.235856 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 12 23:51:29.236034 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Aug 12 23:51:29.236232 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Aug 12 23:51:29.236400 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Aug 12 23:51:29.236567 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Aug 12 23:51:29.236748 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 12 23:51:29.236914 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 12 23:51:29.237160 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 12 23:51:29.237338 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Aug 12 23:51:29.237502 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Aug 12 23:51:29.237691 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 12 23:51:29.237858 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Aug 12 23:51:29.237874 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 12 23:51:29.237886 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 12 23:51:29.237897 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 12 23:51:29.237908 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 12 23:51:29.237924 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 12 23:51:29.237935 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 12 23:51:29.237946 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 12 23:51:29.237957 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 12 23:51:29.237968 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 12 23:51:29.237979 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 12 23:51:29.237990 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 12 23:51:29.238002 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 12 23:51:29.238013 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 12 23:51:29.238028 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 12 23:51:29.238039 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 12 23:51:29.238065 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 12 23:51:29.238076 kernel: iommu: Default domain type: Translated
Aug 12 23:51:29.238088 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 12 23:51:29.238099 kernel: efivars: Registered efivars operations
Aug 12 23:51:29.238109 kernel: PCI: Using ACPI for IRQ routing
Aug 12 23:51:29.238121 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 12 23:51:29.238141 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Aug 12 23:51:29.238157 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Aug 12 23:51:29.238168 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Aug 12 23:51:29.238179 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Aug 12 23:51:29.238190 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Aug 12 23:51:29.238201 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Aug 12 23:51:29.238212 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Aug 12 23:51:29.238223 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Aug 12 23:51:29.238392 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 12 23:51:29.238563 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 12 23:51:29.238728 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 12 23:51:29.238743 kernel: vgaarb: loaded
Aug 12 23:51:29.238755 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 12 23:51:29.238766 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 12 23:51:29.238777 kernel: clocksource: Switched to clocksource kvm-clock
Aug 12 23:51:29.238789 kernel: VFS: Disk quotas dquot_6.6.0
Aug 12 23:51:29.238800 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 12 23:51:29.238811 kernel: pnp: PnP ACPI init
Aug 12 23:51:29.239155 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Aug 12 23:51:29.239231 kernel: pnp: PnP ACPI: found 6 devices
Aug 12 23:51:29.239247 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 12 23:51:29.239259 kernel: NET: Registered PF_INET protocol family
Aug 12 23:51:29.239270 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 12 23:51:29.239311 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 12 23:51:29.239325 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 12 23:51:29.239337 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 12 23:51:29.239352 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 12 23:51:29.239363 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 12 23:51:29.239375 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:51:29.239386 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:51:29.239398 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 12 23:51:29.239410 kernel: NET: Registered PF_XDP protocol family
Aug 12 23:51:29.239592 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Aug 12 23:51:29.239759 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Aug 12 23:51:29.239918 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 12 23:51:29.240619 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 12 23:51:29.240776 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 12 23:51:29.240932 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Aug 12 23:51:29.243268 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Aug 12 23:51:29.243427 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Aug 12 23:51:29.243443 kernel: PCI: CLS 0 bytes, default 64
Aug 12 23:51:29.243456 kernel: Initialise system trusted keyrings
Aug 12 23:51:29.243475 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 12 23:51:29.243486 kernel: Key type asymmetric registered
Aug 12 23:51:29.243498 kernel: Asymmetric key parser 'x509' registered
Aug 12 23:51:29.243510 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 12 23:51:29.243522 kernel: io scheduler mq-deadline registered
Aug 12 23:51:29.243534 kernel: io scheduler kyber registered
Aug 12 23:51:29.243545 kernel: io scheduler bfq registered
Aug 12 23:51:29.243557 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 12 23:51:29.243569 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 12 23:51:29.243581 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 12 23:51:29.243596 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 12 23:51:29.243609 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 12 23:51:29.243624 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 12 23:51:29.243635 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 12 23:51:29.243647 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 12 23:51:29.243662 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 12 23:51:29.243844 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 12 23:51:29.244010 kernel: rtc_cmos 00:04: registered as rtc0
Aug 12 23:51:29.244196 kernel: rtc_cmos 00:04: setting system clock to 2025-08-12T23:51:28 UTC (1755042688)
Aug 12 23:51:29.244358 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Aug 12 23:51:29.244375 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 12 23:51:29.244388 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 12 23:51:29.244400 kernel: efifb: probing for efifb
Aug 12 23:51:29.244418 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Aug 12 23:51:29.244429 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Aug 12 23:51:29.244441 kernel: efifb: scrolling: redraw
Aug 12 23:51:29.244453 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 12 23:51:29.244465 kernel: Console: switching to colour frame buffer device 160x50
Aug 12 23:51:29.244476 kernel: fb0: EFI VGA frame buffer device
Aug 12 23:51:29.244488 kernel: pstore: Using crash dump compression: deflate
Aug 12 23:51:29.244500 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 12 23:51:29.244511 kernel: NET: Registered PF_INET6 protocol family
Aug 12 23:51:29.244527 kernel: Segment Routing with IPv6
Aug 12 23:51:29.244538 kernel: In-situ OAM (IOAM) with IPv6
Aug 12 23:51:29.244550 kernel: NET: Registered PF_PACKET protocol family
Aug 12 23:51:29.244561 kernel: Key type dns_resolver registered
Aug 12 23:51:29.244573 kernel: IPI shorthand broadcast: enabled
Aug 12 23:51:29.244585 kernel: sched_clock: Marking stable (1470005745, 171661079)->(1732181801, -90514977)
Aug 12 23:51:29.244597 kernel: registered taskstats version 1
Aug 12 23:51:29.244608 kernel: Loading compiled-in X.509 certificates
Aug 12 23:51:29.244620 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb'
Aug 12 23:51:29.244635 kernel: Key type .fscrypt registered
Aug 12 23:51:29.244646 kernel: Key type fscrypt-provisioning registered
Aug 12 23:51:29.244657 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 12 23:51:29.244669 kernel: ima: Allocated hash algorithm: sha1
Aug 12 23:51:29.244680 kernel: ima: No architecture policies found
Aug 12 23:51:29.244692 kernel: clk: Disabling unused clocks
Aug 12 23:51:29.244703 kernel: Freeing unused kernel image (initmem) memory: 43504K
Aug 12 23:51:29.244715 kernel: Write protecting the kernel read-only data: 38912k
Aug 12 23:51:29.244727 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Aug 12 23:51:29.244742 kernel: Run /init as init process
Aug 12 23:51:29.244753 kernel: with arguments:
Aug 12 23:51:29.244765 kernel: /init
Aug 12 23:51:29.244776 kernel: with environment:
Aug 12 23:51:29.244787 kernel: HOME=/
Aug 12 23:51:29.244799 kernel: TERM=linux
Aug 12 23:51:29.244810 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 12 23:51:29.244829 systemd[1]: Successfully made /usr/ read-only.
Aug 12 23:51:29.244845 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:51:29.244862 systemd[1]: Detected virtualization kvm.
Aug 12 23:51:29.244875 systemd[1]: Detected architecture x86-64.
Aug 12 23:51:29.244887 systemd[1]: Running in initrd.
Aug 12 23:51:29.244898 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:51:29.244911 systemd[1]: Hostname set to .
Aug 12 23:51:29.244923 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:51:29.244935 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:51:29.244951 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:51:29.244964 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:51:29.244977 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 12 23:51:29.244989 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:51:29.245002 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 12 23:51:29.245016 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 12 23:51:29.245030 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 12 23:51:29.245061 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 12 23:51:29.245074 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:51:29.245086 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:51:29.245098 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:51:29.245110 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:51:29.245123 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:51:29.245143 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:51:29.245156 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:51:29.245172 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:51:29.245185 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 12 23:51:29.245197 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 12 23:51:29.245209 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:51:29.245221 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:51:29.245234 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:51:29.245246 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:51:29.245259 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 12 23:51:29.245271 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:51:29.245287 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 12 23:51:29.245299 systemd[1]: Starting systemd-fsck-usr.service...
Aug 12 23:51:29.245346 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:51:29.245360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:51:29.245371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:51:29.245383 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 12 23:51:29.245394 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:51:29.245412 systemd[1]: Finished systemd-fsck-usr.service.
Aug 12 23:51:29.245424 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:51:29.245514 systemd-journald[194]: Collecting audit messages is disabled.
Aug 12 23:51:29.245552 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:51:29.245566 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:51:29.245578 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:51:29.245590 systemd-journald[194]: Journal started
Aug 12 23:51:29.245618 systemd-journald[194]: Runtime Journal (/run/log/journal/05ff83d34fef41e893960bdd4aa46b74) is 6M, max 48.2M, 42.2M free.
Aug 12 23:51:29.222788 systemd-modules-load[195]: Inserted module 'overlay'
Aug 12 23:51:29.253178 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:51:29.265288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:51:29.270536 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:51:29.274497 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:51:29.279531 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 12 23:51:29.291079 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 12 23:51:29.295077 kernel: Bridge firewalling registered
Aug 12 23:51:29.295199 systemd-modules-load[195]: Inserted module 'br_netfilter'
Aug 12 23:51:29.297962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:51:29.305894 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:51:29.309546 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:51:29.315904 dracut-cmdline[221]: dracut-dracut-053
Aug 12 23:51:29.317109 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 12 23:51:29.330350 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:51:29.342284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:51:29.357310 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:51:29.427836 systemd-resolved[260]: Positive Trust Anchors:
Aug 12 23:51:29.428270 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:51:29.428312 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:51:29.431987 systemd-resolved[260]: Defaulting to hostname 'linux'.
Aug 12 23:51:29.433728 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:51:29.455382 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:51:29.485095 kernel: SCSI subsystem initialized
Aug 12 23:51:29.498092 kernel: Loading iSCSI transport class v2.0-870.
Aug 12 23:51:29.522170 kernel: iscsi: registered transport (tcp)
Aug 12 23:51:29.560097 kernel: iscsi: registered transport (qla4xxx)
Aug 12 23:51:29.560212 kernel: QLogic iSCSI HBA Driver
Aug 12 23:51:29.636510 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:51:29.651385 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 12 23:51:29.723161 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 12 23:51:29.723244 kernel: device-mapper: uevent: version 1.0.3
Aug 12 23:51:29.725167 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 12 23:51:29.796162 kernel: raid6: avx2x4 gen() 18868 MB/s
Aug 12 23:51:29.813141 kernel: raid6: avx2x2 gen() 16214 MB/s
Aug 12 23:51:29.830340 kernel: raid6: avx2x1 gen() 13741 MB/s
Aug 12 23:51:29.830372 kernel: raid6: using algorithm avx2x4 gen() 18868 MB/s
Aug 12 23:51:29.853177 kernel: raid6: .... xor() 5972 MB/s, rmw enabled
Aug 12 23:51:29.853217 kernel: raid6: using avx2x2 recovery algorithm
Aug 12 23:51:29.894091 kernel: xor: automatically using best checksumming function avx
Aug 12 23:51:30.200132 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 12 23:51:30.229448 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:51:30.244646 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:51:30.276606 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Aug 12 23:51:30.286006 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:51:30.304581 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 12 23:51:30.322821 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Aug 12 23:51:30.391153 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:51:30.399351 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:51:30.500503 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:51:30.519385 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 12 23:51:30.559173 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:51:30.562569 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:51:30.565225 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:51:30.568316 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:51:30.585882 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 12 23:51:30.601078 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:51:30.664015 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:51:30.664218 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:51:30.669597 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:51:30.672285 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:51:30.672924 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:51:30.678007 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:51:30.712140 kernel: libata version 3.00 loaded.
Aug 12 23:51:30.712235 kernel: cryptd: max_cpu_qlen set to 1000
Aug 12 23:51:30.719144 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 12 23:51:30.723350 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 12 23:51:30.720861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:51:30.728506 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:51:30.746142 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 12 23:51:30.746213 kernel: GPT:9289727 != 19775487
Aug 12 23:51:30.746228 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 12 23:51:30.746255 kernel: GPT:9289727 != 19775487
Aug 12 23:51:30.746267 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 12 23:51:30.746280 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:51:30.749803 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:51:30.750521 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:51:30.756309 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:51:30.763334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:51:30.830176 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (478)
Aug 12 23:51:30.842127 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:51:30.870131 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/vda3 scanned by (udev-worker) (468)
Aug 12 23:51:30.902105 kernel: ahci 0000:00:1f.2: version 3.0
Aug 12 23:51:30.902508 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 12 23:51:30.905219 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 12 23:51:30.905589 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 12 23:51:30.905771 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 12 23:51:30.908045 kernel: AES CTR mode by8 optimization enabled
Aug 12 23:51:30.908957 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 12 23:51:30.913075 kernel: scsi host0: ahci
Aug 12 23:51:30.916348 kernel: scsi host1: ahci
Aug 12 23:51:30.916693 kernel: scsi host2: ahci
Aug 12 23:51:30.916903 kernel: scsi host3: ahci
Aug 12 23:51:30.918079 kernel: scsi host4: ahci
Aug 12 23:51:30.919887 kernel: scsi host5: ahci
Aug 12 23:51:30.920191 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Aug 12 23:51:30.920209 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Aug 12 23:51:30.921653 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Aug 12 23:51:30.921681 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Aug 12 23:51:30.922499 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Aug 12 23:51:30.922903 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 12 23:51:30.927349 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Aug 12 23:51:30.935372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:51:30.946708 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 12 23:51:30.946873 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 12 23:51:30.962372 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 12 23:51:30.965006 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:51:30.973358 disk-uuid[562]: Primary Header is updated.
Aug 12 23:51:30.973358 disk-uuid[562]: Secondary Entries is updated.
Aug 12 23:51:30.973358 disk-uuid[562]: Secondary Header is updated.
Aug 12 23:51:30.977129 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:51:30.996293 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:51:31.238536 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 12 23:51:31.238628 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 12 23:51:31.238640 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 12 23:51:31.238655 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 12 23:51:31.240109 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 12 23:51:31.240158 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 12 23:51:31.241099 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 12 23:51:31.242607 kernel: ata3.00: applying bridge limits
Aug 12 23:51:31.242634 kernel: ata3.00: configured for UDMA/100
Aug 12 23:51:31.243101 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 12 23:51:31.300146 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 12 23:51:31.300427 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 12 23:51:31.313114 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 12 23:51:31.992790 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:51:31.995360 disk-uuid[564]: The operation has completed successfully.
Aug 12 23:51:32.040581 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 12 23:51:32.040773 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 12 23:51:32.089199 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 12 23:51:32.093253 sh[599]: Success
Aug 12 23:51:32.109093 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 12 23:51:32.198188 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 12 23:51:32.211137 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 12 23:51:32.212592 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 12 23:51:32.260287 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7
Aug 12 23:51:32.260361 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 12 23:51:32.260388 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 12 23:51:32.261466 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 12 23:51:32.263116 kernel: BTRFS info (device dm-0): using free space tree
Aug 12 23:51:32.300021 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 12 23:51:32.310256 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 12 23:51:32.331939 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 12 23:51:32.340752 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 12 23:51:32.382850 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:51:32.382919 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 12 23:51:32.382980 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:51:32.394829 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:51:32.403105 kernel: BTRFS info (device vda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:51:32.425353 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 12 23:51:32.441638 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 12 23:51:32.549417 ignition[688]: Ignition 2.20.0
Aug 12 23:51:32.549429 ignition[688]: Stage: fetch-offline
Aug 12 23:51:32.549469 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:51:32.549481 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:51:32.549592 ignition[688]: parsed url from cmdline: ""
Aug 12 23:51:32.549596 ignition[688]: no config URL provided
Aug 12 23:51:32.549603 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Aug 12 23:51:32.549615 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Aug 12 23:51:32.549645 ignition[688]: op(1): [started] loading QEMU firmware config module
Aug 12 23:51:32.549651 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 12 23:51:32.561091 ignition[688]: op(1): [finished] loading QEMU firmware config module
Aug 12 23:51:32.573100 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 12 23:51:32.598333 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:51:32.616227 ignition[688]: parsing config with SHA512: 29aa1aadbcecc7fc314f1e9bda6160f0463eb07a6a9e215cc61bf235da70b27e244a54e60e40c96b7b63c7037fd8ebf70ae19ae26145c502f59802d911f86754
Aug 12 23:51:32.626513 unknown[688]: fetched base config from "system"
Aug 12 23:51:32.626761 unknown[688]: fetched user config from "qemu"
Aug 12 23:51:32.627334 ignition[688]: fetch-offline: fetch-offline passed
Aug 12 23:51:32.627423 ignition[688]: Ignition finished successfully
Aug 12 23:51:32.630684 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 12 23:51:32.688651 systemd-networkd[785]: lo: Link UP
Aug 12 23:51:32.692072 systemd-networkd[785]: lo: Gained carrier
Aug 12 23:51:32.699643 systemd-networkd[785]: Enumeration completed
Aug 12 23:51:32.700921 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:51:32.701914 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:51:32.701920 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:51:32.708900 systemd-networkd[785]: eth0: Link UP
Aug 12 23:51:32.708906 systemd-networkd[785]: eth0: Gained carrier
Aug 12 23:51:32.708921 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:51:32.725620 systemd[1]: Reached target network.target - Network.
Aug 12 23:51:32.725738 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 12 23:51:32.746297 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 12 23:51:32.759279 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:51:32.810596 ignition[790]: Ignition 2.20.0
Aug 12 23:51:32.810612 ignition[790]: Stage: kargs
Aug 12 23:51:32.810827 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:51:32.810843 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:51:32.811998 ignition[790]: kargs: kargs passed
Aug 12 23:51:32.812092 ignition[790]: Ignition finished successfully
Aug 12 23:51:32.834871 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 12 23:51:32.856408 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 12 23:51:32.891369 ignition[800]: Ignition 2.20.0
Aug 12 23:51:32.891386 ignition[800]: Stage: disks
Aug 12 23:51:32.891593 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:51:32.891609 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:51:32.892750 ignition[800]: disks: disks passed
Aug 12 23:51:32.892814 ignition[800]: Ignition finished successfully
Aug 12 23:51:32.919319 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 12 23:51:32.926489 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 12 23:51:32.932558 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 12 23:51:32.943194 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:51:32.946301 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:51:32.947549 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:51:32.964608 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 12 23:51:32.992252 systemd-resolved[260]: Detected conflict on linux IN A 10.0.0.30
Aug 12 23:51:32.992460 systemd-resolved[260]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Aug 12 23:51:32.993474 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 12 23:51:33.010203 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 12 23:51:33.022427 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 12 23:51:33.231408 kernel: EXT4-fs (vda9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none.
Aug 12 23:51:33.233777 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 12 23:51:33.237149 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:51:33.256357 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:51:33.261504 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 12 23:51:33.262074 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 12 23:51:33.262143 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 12 23:51:33.262184 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:51:33.298510 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 12 23:51:33.303615 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (820)
Aug 12 23:51:33.303709 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:51:33.308911 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 12 23:51:33.308961 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:51:33.316336 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 12 23:51:33.336687 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:51:33.328255 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:51:33.477451 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Aug 12 23:51:33.483241 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Aug 12 23:51:33.489139 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Aug 12 23:51:33.496360 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 12 23:51:33.712916 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 12 23:51:33.729326 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 12 23:51:33.738331 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 12 23:51:33.752945 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 12 23:51:33.764847 kernel: BTRFS info (device vda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:51:33.834508 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 12 23:51:33.843977 ignition[933]: INFO : Ignition 2.20.0
Aug 12 23:51:33.843977 ignition[933]: INFO : Stage: mount
Aug 12 23:51:33.848911 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:51:33.848911 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:51:33.856376 ignition[933]: INFO : mount: mount passed
Aug 12 23:51:33.857467 ignition[933]: INFO : Ignition finished successfully
Aug 12 23:51:33.863506 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 12 23:51:33.879378 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 12 23:51:33.898918 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:51:33.916193 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (948)
Aug 12 23:51:33.918990 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:51:33.919131 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 12 23:51:33.919169 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:51:33.935112 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:51:33.938366 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:51:34.082095 ignition[965]: INFO : Ignition 2.20.0 Aug 12 23:51:34.082095 ignition[965]: INFO : Stage: files Aug 12 23:51:34.089651 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:51:34.089651 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:51:34.089651 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Aug 12 23:51:34.089651 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 12 23:51:34.089651 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 12 23:51:34.138089 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 12 23:51:34.138089 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 12 23:51:34.138089 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 12 23:51:34.126209 unknown[965]: wrote ssh authorized keys file for user: core Aug 12 23:51:34.144847 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 12 23:51:34.144847 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 12 23:51:34.199091 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 12 23:51:34.389450 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 12 23:51:34.389450 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 12 23:51:34.394304 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 12 23:51:34.470144 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 12 23:51:34.565927 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 12 23:51:34.565927 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 12 23:51:34.570068 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 12 23:51:34.655223 systemd-networkd[785]: eth0: Gained IPv6LL Aug 12 23:51:34.868183 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 12 23:51:35.487188 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 12 23:51:35.487188 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 12 23:51:35.491473 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:51:35.494039 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:51:35.494039 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 12 23:51:35.494039 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 12 23:51:35.499031 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 12 23:51:35.501238 ignition[965]: INFO : files: op(e): op(f): [finished] 
writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 12 23:51:35.501238 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 12 23:51:35.501238 ignition[965]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Aug 12 23:51:35.525704 ignition[965]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 12 23:51:35.531515 ignition[965]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 12 23:51:35.533298 ignition[965]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Aug 12 23:51:35.533298 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 12 23:51:35.533298 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 12 23:51:35.533298 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:51:35.533298 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:51:35.533298 ignition[965]: INFO : files: files passed Aug 12 23:51:35.533298 ignition[965]: INFO : Ignition finished successfully Aug 12 23:51:35.547914 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 12 23:51:35.562251 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 12 23:51:35.565507 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 12 23:51:35.569017 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 12 23:51:35.569242 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Aug 12 23:51:35.595478 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory Aug 12 23:51:35.601206 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:51:35.601206 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:51:35.605237 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:51:35.609265 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:51:35.609644 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 12 23:51:35.622216 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 12 23:51:35.651243 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 12 23:51:35.651393 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 12 23:51:35.653974 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 12 23:51:35.656160 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 12 23:51:35.656458 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 12 23:51:35.657487 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 12 23:51:35.678641 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:51:35.692270 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 12 23:51:35.705740 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:51:35.705970 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:51:35.709371 systemd[1]: Stopped target timers.target - Timer Units. 
Aug 12 23:51:35.710517 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 12 23:51:35.710686 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:51:35.715350 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 12 23:51:35.717538 systemd[1]: Stopped target basic.target - Basic System. Aug 12 23:51:35.719524 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 12 23:51:35.721906 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:51:35.724436 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 12 23:51:35.724615 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 12 23:51:35.726712 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:51:35.727116 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 12 23:51:35.727460 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 12 23:51:35.727835 systemd[1]: Stopped target swap.target - Swaps. Aug 12 23:51:35.728376 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 12 23:51:35.728527 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 12 23:51:35.738700 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 12 23:51:35.738885 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:51:35.740864 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 12 23:51:35.741096 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:51:35.744340 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 12 23:51:35.744494 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 12 23:51:35.748559 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Aug 12 23:51:35.748719 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 12 23:51:35.750963 systemd[1]: Stopped target paths.target - Path Units. Aug 12 23:51:35.751930 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 12 23:51:35.755107 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:51:35.755714 systemd[1]: Stopped target slices.target - Slice Units. Aug 12 23:51:35.758289 systemd[1]: Stopped target sockets.target - Socket Units. Aug 12 23:51:35.761711 systemd[1]: iscsid.socket: Deactivated successfully. Aug 12 23:51:35.761840 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 12 23:51:35.762807 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 12 23:51:35.762920 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 12 23:51:35.764705 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 12 23:51:35.764856 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:51:35.769032 systemd[1]: ignition-files.service: Deactivated successfully. Aug 12 23:51:35.769231 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 12 23:51:35.781237 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 12 23:51:35.783635 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 12 23:51:35.785837 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 12 23:51:35.787093 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:51:35.789827 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 12 23:51:35.790999 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 12 23:51:35.798942 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Aug 12 23:51:35.800226 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 12 23:51:35.815753 ignition[1019]: INFO : Ignition 2.20.0 Aug 12 23:51:35.815753 ignition[1019]: INFO : Stage: umount Aug 12 23:51:35.817900 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:51:35.817900 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:51:35.817900 ignition[1019]: INFO : umount: umount passed Aug 12 23:51:35.817900 ignition[1019]: INFO : Ignition finished successfully Aug 12 23:51:35.819280 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 12 23:51:35.825663 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 12 23:51:35.825798 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 12 23:51:35.828111 systemd[1]: Stopped target network.target - Network. Aug 12 23:51:35.829080 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 12 23:51:35.829158 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 12 23:51:35.830735 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 12 23:51:35.830806 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 12 23:51:35.834524 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 12 23:51:35.834594 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 12 23:51:35.835549 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 12 23:51:35.835613 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 12 23:51:35.836069 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 12 23:51:35.840642 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 12 23:51:35.847828 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 12 23:51:35.847979 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Aug 12 23:51:35.851772 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 12 23:51:35.852651 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 12 23:51:35.852705 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:51:35.859421 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 12 23:51:35.859736 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 12 23:51:35.859872 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 12 23:51:35.864299 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 12 23:51:35.865222 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 12 23:51:35.865323 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:51:35.877260 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 12 23:51:35.878245 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 12 23:51:35.878349 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:51:35.880554 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:51:35.880629 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:51:35.883780 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 12 23:51:35.883854 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 12 23:51:35.884739 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 12 23:51:35.886601 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 12 23:51:35.901545 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 12 23:51:35.901748 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Aug 12 23:51:35.910306 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 12 23:51:35.910569 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 12 23:51:35.912888 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 12 23:51:35.912970 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 12 23:51:35.914932 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 12 23:51:35.915005 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 12 23:51:35.916911 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 12 23:51:35.916995 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 12 23:51:35.919300 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 12 23:51:35.919371 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 12 23:51:35.921182 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 12 23:51:35.921253 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:51:35.932334 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 12 23:51:35.933472 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 12 23:51:35.933551 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 12 23:51:35.936820 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 12 23:51:35.936880 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 12 23:51:35.939321 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 12 23:51:35.939380 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:51:35.941608 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 12 23:51:35.941674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:51:35.944748 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 12 23:51:35.944870 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 12 23:51:36.063268 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 12 23:51:36.063454 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 12 23:51:36.066179 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 12 23:51:36.068186 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 12 23:51:36.068315 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 12 23:51:36.080376 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 12 23:51:36.089715 systemd[1]: Switching root. Aug 12 23:51:36.127246 systemd-journald[194]: Journal stopped Aug 12 23:51:38.348464 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Aug 12 23:51:38.348549 kernel: SELinux: policy capability network_peer_controls=1 Aug 12 23:51:38.348571 kernel: SELinux: policy capability open_perms=1 Aug 12 23:51:38.348582 kernel: SELinux: policy capability extended_socket_class=1 Aug 12 23:51:38.348594 kernel: SELinux: policy capability always_check_network=0 Aug 12 23:51:38.348607 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 12 23:51:38.348619 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 12 23:51:38.348630 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 12 23:51:38.348647 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 12 23:51:38.348664 kernel: audit: type=1403 audit(1755042697.291:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 12 23:51:38.348677 systemd[1]: Successfully loaded SELinux policy in 63.698ms. Aug 12 23:51:38.348704 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.873ms. 
Aug 12 23:51:38.348717 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 12 23:51:38.348731 systemd[1]: Detected virtualization kvm. Aug 12 23:51:38.348744 systemd[1]: Detected architecture x86-64. Aug 12 23:51:38.348756 systemd[1]: Detected first boot. Aug 12 23:51:38.348769 systemd[1]: Initializing machine ID from VM UUID. Aug 12 23:51:38.348784 zram_generator::config[1067]: No configuration found. Aug 12 23:51:38.348798 kernel: Guest personality initialized and is inactive Aug 12 23:51:38.348810 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 12 23:51:38.348823 kernel: Initialized host personality Aug 12 23:51:38.348835 kernel: NET: Registered PF_VSOCK protocol family Aug 12 23:51:38.348848 systemd[1]: Populated /etc with preset unit settings. Aug 12 23:51:38.348861 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 12 23:51:38.348883 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 12 23:51:38.348895 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 12 23:51:38.348912 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 12 23:51:38.348931 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 12 23:51:38.348944 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 12 23:51:38.348956 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 12 23:51:38.348968 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 12 23:51:38.348981 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Aug 12 23:51:38.348994 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 12 23:51:38.349007 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 12 23:51:38.349022 systemd[1]: Created slice user.slice - User and Session Slice. Aug 12 23:51:38.349034 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:51:38.349116 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:51:38.349132 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 12 23:51:38.349144 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 12 23:51:38.349157 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 12 23:51:38.349171 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 12 23:51:38.349184 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 12 23:51:38.349201 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:51:38.349213 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 12 23:51:38.349227 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 12 23:51:38.349239 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 12 23:51:38.349252 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 12 23:51:38.349270 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:51:38.349290 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 12 23:51:38.349303 systemd[1]: Reached target slices.target - Slice Units. Aug 12 23:51:38.349315 systemd[1]: Reached target swap.target - Swaps. 
Aug 12 23:51:38.349330 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 12 23:51:38.349343 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 12 23:51:38.349356 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 12 23:51:38.349369 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:51:38.349381 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 12 23:51:38.349394 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 12 23:51:38.349407 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 12 23:51:38.349421 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 12 23:51:38.349434 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 12 23:51:38.349454 systemd[1]: Mounting media.mount - External Media Directory... Aug 12 23:51:38.349467 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:51:38.349479 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 12 23:51:38.349492 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 12 23:51:38.349505 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 12 23:51:38.349518 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 12 23:51:38.349531 systemd[1]: Reached target machines.target - Containers. Aug 12 23:51:38.349544 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 12 23:51:38.349560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Aug 12 23:51:38.349573 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 12 23:51:38.349586 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 12 23:51:38.349599 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 12 23:51:38.349611 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 12 23:51:38.349624 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 12 23:51:38.349639 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 12 23:51:38.349651 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 12 23:51:38.349665 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 12 23:51:38.349684 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 12 23:51:38.349696 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 12 23:51:38.349709 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 12 23:51:38.349722 systemd[1]: Stopped systemd-fsck-usr.service. Aug 12 23:51:38.349735 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 12 23:51:38.349747 kernel: loop: module loaded Aug 12 23:51:38.349759 kernel: fuse: init (API version 7.39) Aug 12 23:51:38.349772 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 12 23:51:38.349787 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 12 23:51:38.349800 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Aug 12 23:51:38.349813 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 12 23:51:38.349825 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 12 23:51:38.349838 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 12 23:51:38.349854 systemd[1]: verity-setup.service: Deactivated successfully. Aug 12 23:51:38.349866 systemd[1]: Stopped verity-setup.service. Aug 12 23:51:38.349887 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:51:38.349900 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 12 23:51:38.349913 kernel: ACPI: bus type drm_connector registered Aug 12 23:51:38.349927 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 12 23:51:38.349940 systemd[1]: Mounted media.mount - External Media Directory. Aug 12 23:51:38.349952 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 12 23:51:38.349969 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 12 23:51:38.349988 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 12 23:51:38.350022 systemd-journald[1138]: Collecting audit messages is disabled. Aug 12 23:51:38.350059 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:51:38.350072 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 12 23:51:38.350089 systemd-journald[1138]: Journal started Aug 12 23:51:38.350113 systemd-journald[1138]: Runtime Journal (/run/log/journal/05ff83d34fef41e893960bdd4aa46b74) is 6M, max 48.2M, 42.2M free. Aug 12 23:51:38.098246 systemd[1]: Queued start job for default target multi-user.target. Aug 12 23:51:38.112437 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Aug 12 23:51:38.113083 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 12 23:51:38.351585 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 12 23:51:38.354077 systemd[1]: Started systemd-journald.service - Journal Service. Aug 12 23:51:38.356176 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 12 23:51:38.357660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:51:38.357918 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 12 23:51:38.359387 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 12 23:51:38.359617 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 12 23:51:38.360990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:51:38.361239 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 12 23:51:38.362979 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 12 23:51:38.363265 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 12 23:51:38.364697 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:51:38.364936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 12 23:51:38.366386 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 12 23:51:38.367837 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 12 23:51:38.369576 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 12 23:51:38.372608 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 12 23:51:38.388619 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 12 23:51:38.398262 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Aug 12 23:51:38.400764 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 12 23:51:38.401909 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 12 23:51:38.401947 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:51:38.404023 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 12 23:51:38.406465 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 12 23:51:38.412210 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 12 23:51:38.413504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:51:38.416544 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 12 23:51:38.422305 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 12 23:51:38.424210 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:51:38.426265 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 12 23:51:38.427404 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:51:38.428565 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:51:38.434555 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 12 23:51:38.438267 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:51:38.443222 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 12 23:51:38.445493 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 12 23:51:38.452146 systemd-journald[1138]: Time spent on flushing to /var/log/journal/05ff83d34fef41e893960bdd4aa46b74 is 23.268ms for 1063 entries.
Aug 12 23:51:38.452146 systemd-journald[1138]: System Journal (/var/log/journal/05ff83d34fef41e893960bdd4aa46b74) is 8M, max 195.6M, 187.6M free.
Aug 12 23:51:38.501026 systemd-journald[1138]: Received client request to flush runtime journal.
Aug 12 23:51:38.501106 kernel: loop0: detected capacity change from 0 to 224512
Aug 12 23:51:38.454387 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 12 23:51:38.461789 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 12 23:51:38.467821 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 12 23:51:38.482203 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 12 23:51:38.485639 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:51:38.503391 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 12 23:51:38.506269 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Aug 12 23:51:38.506286 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Aug 12 23:51:38.511148 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 12 23:51:38.511462 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:51:38.513294 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:51:38.522397 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 12 23:51:38.525604 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
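The journald entries above report 23.268ms spent flushing 1063 entries to persistent storage. Turning that into a per-entry cost is plain arithmetic on the logged figures (illustrative only, not journald output):

```python
# Figures taken from the journald flush statistics logged above.
flush_ms = 23.268
entries = 1063

# Average cost per flushed entry, in microseconds.
per_entry_us = flush_ms * 1000 / entries
print(round(per_entry_us, 1))   # ≈ 21.9 µs per entry
```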
Aug 12 23:51:38.532805 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 12 23:51:38.543337 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 12 23:51:38.550083 kernel: loop1: detected capacity change from 0 to 147912
Aug 12 23:51:38.561168 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 12 23:51:38.583747 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:51:38.607077 kernel: loop2: detected capacity change from 0 to 138176
Aug 12 23:51:38.613584 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Aug 12 23:51:38.613994 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Aug 12 23:51:38.620168 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:51:38.660088 kernel: loop3: detected capacity change from 0 to 224512
Aug 12 23:51:38.676355 kernel: loop4: detected capacity change from 0 to 147912
Aug 12 23:51:38.691340 kernel: loop5: detected capacity change from 0 to 138176
Aug 12 23:51:38.705614 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 12 23:51:38.706314 (sd-merge)[1215]: Merged extensions into '/usr'.
Aug 12 23:51:38.712010 systemd[1]: Reload requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 12 23:51:38.712030 systemd[1]: Reloading...
Aug 12 23:51:38.838088 zram_generator::config[1252]: No configuration found.
Aug 12 23:51:38.888385 ldconfig[1182]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 12 23:51:38.960564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:51:39.035813 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 12 23:51:39.036376 systemd[1]: Reloading finished in 323 ms.
Aug 12 23:51:39.067341 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 12 23:51:39.068985 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 12 23:51:39.095202 systemd[1]: Starting ensure-sysext.service...
Aug 12 23:51:39.097456 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:51:39.127190 systemd[1]: Reload requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)...
Aug 12 23:51:39.127211 systemd[1]: Reloading...
Aug 12 23:51:39.183300 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 12 23:51:39.183822 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 12 23:51:39.185467 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 12 23:51:39.185932 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Aug 12 23:51:39.186093 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Aug 12 23:51:39.194636 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:51:39.194657 systemd-tmpfiles[1281]: Skipping /boot
Aug 12 23:51:39.214111 zram_generator::config[1310]: No configuration found.
Aug 12 23:51:39.226959 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:51:39.226983 systemd-tmpfiles[1281]: Skipping /boot
Aug 12 23:51:39.363735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:51:39.431497 systemd[1]: Reloading finished in 303 ms.
Aug 12 23:51:39.447271 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 12 23:51:39.467294 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:51:39.478356 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:51:39.481458 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 12 23:51:39.484684 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 12 23:51:39.489581 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:51:39.496200 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:51:39.499577 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 12 23:51:39.505716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:51:39.505985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:51:39.508099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:51:39.512513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:51:39.517387 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:51:39.520280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:51:39.520462 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:51:39.523175 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 12 23:51:39.524428 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:51:39.526253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:51:39.526770 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:51:39.535139 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 12 23:51:39.552379 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:51:39.552623 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:51:39.554748 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:51:39.554981 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:51:39.559108 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:51:39.559481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:51:39.559716 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Aug 12 23:51:39.572682 augenrules[1383]: No rules
Aug 12 23:51:39.574939 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:51:39.576400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:51:39.577067 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:51:39.577217 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:51:39.579472 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 12 23:51:39.580793 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:51:39.582416 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:51:39.582746 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:51:39.585552 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 12 23:51:39.589347 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 12 23:51:39.591998 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:51:39.592335 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:51:39.594302 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:51:39.599319 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 12 23:51:39.608885 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 12 23:51:39.629762 systemd[1]: Finished ensure-sysext.service.
Aug 12 23:51:39.635781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:51:39.642798 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:51:39.643922 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:51:39.645651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:51:39.656262 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:51:39.659434 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:51:39.662940 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:51:39.665232 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:51:39.665272 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:51:39.667260 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:51:39.681245 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 12 23:51:39.682405 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:51:39.682436 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:51:39.683187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:51:39.683422 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:51:39.685002 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:51:39.685232 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:51:39.687297 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:51:39.687687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:51:39.696312 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 12 23:51:39.699942 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:51:39.700293 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:51:39.703945 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:51:39.705445 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:51:39.717242 augenrules[1418]: /sbin/augenrules: No change
Aug 12 23:51:39.736876 augenrules[1449]: No rules
Aug 12 23:51:39.739333 systemd-resolved[1352]: Positive Trust Anchors:
Aug 12 23:51:39.739368 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:51:39.739402 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:51:39.752928 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:51:39.754622 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:51:39.772506 systemd-resolved[1352]: Defaulting to hostname 'linux'.
Aug 12 23:51:39.775634 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:51:39.778429 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
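The negative trust anchors that systemd-resolved logs above are names exempted from DNSSEC validation: a lookup is covered if it equals an anchor or falls under one as a subdomain. A minimal sketch of that suffix-on-label-boundaries check (an illustration, not resolved's actual code, and using only part of the logged list):

```python
# Subset of the negative trust anchors from the log above.
NEGATIVE_ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
    "d.f.ip6.arpa", "ipv4only.arpa", "resolver.arpa",
    "corp", "home", "internal", "intranet", "lan", "local",
    "private", "test",
}

def under_negative_anchor(name: str) -> bool:
    """True if `name` equals an anchor or is a subdomain of one."""
    labels = name.rstrip(".").lower().split(".")
    # Check the name itself and every parent domain against the set.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in NEGATIVE_ANCHORS:
            return True
    return False

print(under_negative_anchor("printer.lan"))             # True (under "lan")
print(under_negative_anchor("30.0.0.10.in-addr.arpa"))  # True (under "10.in-addr.arpa")
print(under_negative_anchor("example.org"))             # False
```

Matching on whole labels (rather than raw string suffixes) matters: it keeps `myhome.arpa` from being swallowed by the `home.arpa` anchor.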
Aug 12 23:51:39.833105 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (1413)
Aug 12 23:51:39.854076 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 12 23:51:39.854858 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 12 23:51:39.856377 systemd[1]: Reached target time-set.target - System Time Set.
Aug 12 23:51:39.860073 kernel: ACPI: button: Power Button [PWRF]
Aug 12 23:51:39.862481 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Aug 12 23:51:39.863623 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 12 23:51:39.863806 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Aug 12 23:51:39.864014 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 12 23:51:39.874888 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:51:39.877234 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Aug 12 23:51:39.889030 systemd-networkd[1428]: lo: Link UP
Aug 12 23:51:39.889041 systemd-networkd[1428]: lo: Gained carrier
Aug 12 23:51:39.896363 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 12 23:51:39.906254 systemd-networkd[1428]: Enumeration completed
Aug 12 23:51:39.906656 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:51:39.906661 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:51:39.907124 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:51:39.907623 systemd-networkd[1428]: eth0: Link UP
Aug 12 23:51:39.907636 systemd-networkd[1428]: eth0: Gained carrier
Aug 12 23:51:39.907649 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:51:39.911549 systemd[1]: Reached target network.target - Network.
Aug 12 23:51:39.920549 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 12 23:51:39.921106 systemd-networkd[1428]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:51:39.921847 systemd-timesyncd[1431]: Network configuration changed, trying to establish connection.
Aug 12 23:51:39.923352 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 12 23:51:39.923517 systemd-timesyncd[1431]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 12 23:51:39.923578 systemd-timesyncd[1431]: Initial clock synchronization to Tue 2025-08-12 23:51:39.749040 UTC.
Aug 12 23:51:39.961346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:51:39.963926 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 12 23:51:39.990693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:51:39.991083 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:51:40.020437 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 12 23:51:40.039104 kernel: mousedev: PS/2 mouse device common for all mice
Aug 12 23:51:40.045247 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
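The DHCPv4 entry above assigns eth0 the address 10.0.0.30/16 with gateway 10.0.0.1, which is also the DHCP and NTP server. A quick way to sanity-check such a lease offline (an illustration using Python's standard `ipaddress` module, not anything systemd-networkd does) is to confirm the gateway is on-link for the acquired prefix:

```python
import ipaddress

# Values taken from the DHCPv4 log entry above.
iface = ipaddress.ip_interface("10.0.0.30/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                # 10.0.0.0/16
print(gateway in iface.network)     # True: gateway is reachable on-link
print(iface.network.num_addresses)  # 65536 addresses in a /16
```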
Aug 12 23:51:40.052153 kernel: kvm_amd: TSC scaling supported
Aug 12 23:51:40.052208 kernel: kvm_amd: Nested Virtualization enabled
Aug 12 23:51:40.052222 kernel: kvm_amd: Nested Paging enabled
Aug 12 23:51:40.053091 kernel: kvm_amd: LBR virtualization supported
Aug 12 23:51:40.053107 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Aug 12 23:51:40.054066 kernel: kvm_amd: Virtual GIF supported
Aug 12 23:51:40.075091 kernel: EDAC MC: Ver: 3.0.0
Aug 12 23:51:40.109008 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 12 23:51:40.110907 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:51:40.128536 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 12 23:51:40.139100 lvm[1486]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:51:40.170737 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 12 23:51:40.172437 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:51:40.173585 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:51:40.174835 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 12 23:51:40.176147 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 12 23:51:40.177653 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 12 23:51:40.178892 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 12 23:51:40.180133 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 12 23:51:40.181378 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 12 23:51:40.181416 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:51:40.182335 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:51:40.184540 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 12 23:51:40.188288 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 12 23:51:40.192924 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 12 23:51:40.194328 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 12 23:51:40.195543 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 12 23:51:40.203239 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 12 23:51:40.204876 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 12 23:51:40.207515 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 12 23:51:40.209213 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 12 23:51:40.210390 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:51:40.211377 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:51:40.212363 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:51:40.212392 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:51:40.213520 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 12 23:51:40.215666 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 12 23:51:40.218120 lvm[1490]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:51:40.220183 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 12 23:51:40.222416 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 12 23:51:40.223547 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 12 23:51:40.227211 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 12 23:51:40.227492 jq[1493]: false
Aug 12 23:51:40.234177 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 12 23:51:40.237254 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 12 23:51:40.239526 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 12 23:51:40.247276 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 12 23:51:40.249034 extend-filesystems[1494]: Found loop3
Aug 12 23:51:40.250056 extend-filesystems[1494]: Found loop4
Aug 12 23:51:40.250056 extend-filesystems[1494]: Found loop5
Aug 12 23:51:40.250056 extend-filesystems[1494]: Found sr0
Aug 12 23:51:40.250056 extend-filesystems[1494]: Found vda
Aug 12 23:51:40.250056 extend-filesystems[1494]: Found vda1
Aug 12 23:51:40.250056 extend-filesystems[1494]: Found vda2
Aug 12 23:51:40.250056 extend-filesystems[1494]: Found vda3
Aug 12 23:51:40.250056 extend-filesystems[1494]: Found usr
Aug 12 23:51:40.250056 extend-filesystems[1494]: Found vda4
Aug 12 23:51:40.250056 extend-filesystems[1494]: Found vda6
Aug 12 23:51:40.250056 extend-filesystems[1494]: Found vda7
Aug 12 23:51:40.250056 extend-filesystems[1494]: Found vda9
Aug 12 23:51:40.250056 extend-filesystems[1494]: Checking size of /dev/vda9
Aug 12 23:51:40.249991 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 12 23:51:40.250844 dbus-daemon[1492]: [system] SELinux support is enabled
Aug 12 23:51:40.253643 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 12 23:51:40.254942 systemd[1]: Starting update-engine.service - Update Engine...
Aug 12 23:51:40.261713 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 12 23:51:40.263744 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 12 23:51:40.267534 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 12 23:51:40.272509 jq[1511]: true
Aug 12 23:51:40.273187 extend-filesystems[1494]: Resized partition /dev/vda9
Aug 12 23:51:40.274491 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 12 23:51:40.274812 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 12 23:51:40.276007 systemd[1]: motdgen.service: Deactivated successfully.
Aug 12 23:51:40.276334 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 12 23:51:40.278665 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 12 23:51:40.280233 update_engine[1508]: I20250812 23:51:40.279922 1508 main.cc:92] Flatcar Update Engine starting
Aug 12 23:51:40.279490 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 12 23:51:40.283939 extend-filesystems[1517]: resize2fs 1.47.1 (20-May-2024)
Aug 12 23:51:40.290127 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 12 23:51:40.294350 update_engine[1508]: I20250812 23:51:40.294286 1508 update_check_scheduler.cc:74] Next update check in 8m22s
Aug 12 23:51:40.294571 (ntainerd)[1520]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 12 23:51:40.298255 jq[1518]: true
Aug 12 23:51:40.322677 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 12 23:51:40.322731 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 12 23:51:40.326082 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 12 23:51:40.330422 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 12 23:51:40.330450 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 12 23:51:40.349312 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (1390)
Aug 12 23:51:40.331972 systemd[1]: Started update-engine.service - Update Engine.
Aug 12 23:51:40.349449 tar[1516]: linux-amd64/LICENSE
Aug 12 23:51:40.346313 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 12 23:51:40.350513 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 12 23:51:40.350513 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 12 23:51:40.350513 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
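The resize entries above grow the root filesystem on /dev/vda9 online from 553472 to 1864699 blocks of 4 KiB. Converting the logged block counts into byte sizes (plain arithmetic, nothing beyond the figures in the log) shows the filesystem went from roughly 2.1 GiB to about 7.1 GiB:

```python
# 4 KiB blocks, as reported by resize2fs in the log above.
BLOCK = 4096

old_bytes = 553472 * BLOCK
new_bytes = 1864699 * BLOCK

print(round(old_bytes / 2**30, 2))  # ≈ 2.11 GiB before the resize
print(round(new_bytes / 2**30, 2))  # ≈ 7.11 GiB after
```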
Aug 12 23:51:40.355095 tar[1516]: linux-amd64/helm
Aug 12 23:51:40.356715 extend-filesystems[1494]: Resized filesystem in /dev/vda9
Aug 12 23:51:40.357505 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 12 23:51:40.357774 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 12 23:51:40.379793 bash[1546]: Updated "/home/core/.ssh/authorized_keys"
Aug 12 23:51:40.381126 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 12 23:51:40.386370 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 12 23:51:40.433197 systemd-logind[1505]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 12 23:51:40.433226 systemd-logind[1505]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 12 23:51:40.435429 systemd-logind[1505]: New seat seat0.
Aug 12 23:51:40.436901 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 12 23:51:40.486736 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 12 23:51:40.488498 locksmithd[1544]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 12 23:51:40.554797 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 12 23:51:40.565608 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 12 23:51:40.579390 systemd[1]: issuegen.service: Deactivated successfully.
Aug 12 23:51:40.579931 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 12 23:51:40.588388 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 12 23:51:40.608718 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 12 23:51:40.621629 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 12 23:51:40.625529 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 12 23:51:40.626923 systemd[1]: Reached target getty.target - Login Prompts.
Aug 12 23:51:40.662139 containerd[1520]: time="2025-08-12T23:51:40.661735864Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Aug 12 23:51:40.751697 containerd[1520]: time="2025-08-12T23:51:40.751610874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:51:40.753857 containerd[1520]: time="2025-08-12T23:51:40.753804631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:51:40.753857 containerd[1520]: time="2025-08-12T23:51:40.753839344Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 12 23:51:40.753857 containerd[1520]: time="2025-08-12T23:51:40.753855731Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 12 23:51:40.754162 containerd[1520]: time="2025-08-12T23:51:40.754135051Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 12 23:51:40.754162 containerd[1520]: time="2025-08-12T23:51:40.754160091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 12 23:51:40.754263 containerd[1520]: time="2025-08-12T23:51:40.754244139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:51:40.754263 containerd[1520]: time="2025-08-12T23:51:40.754260594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:51:40.754569 containerd[1520]: time="2025-08-12T23:51:40.754540866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:51:40.754569 containerd[1520]: time="2025-08-12T23:51:40.754561877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 12 23:51:40.754610 containerd[1520]: time="2025-08-12T23:51:40.754574980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:51:40.754610 containerd[1520]: time="2025-08-12T23:51:40.754584849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 12 23:51:40.754717 containerd[1520]: time="2025-08-12T23:51:40.754699877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:51:40.754994 containerd[1520]: time="2025-08-12T23:51:40.754959666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:51:40.755189 containerd[1520]: time="2025-08-12T23:51:40.755162651Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:51:40.755189 containerd[1520]: time="2025-08-12T23:51:40.755180812Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 12 23:51:40.755348 containerd[1520]: time="2025-08-12T23:51:40.755323711Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 12 23:51:40.755419 containerd[1520]: time="2025-08-12T23:51:40.755402790Z" level=info msg="metadata content store policy set" policy=shared
Aug 12 23:51:40.761071 containerd[1520]: time="2025-08-12T23:51:40.760533899Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 12 23:51:40.761071 containerd[1520]: time="2025-08-12T23:51:40.760598033Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 12 23:51:40.761071 containerd[1520]: time="2025-08-12T23:51:40.760614115Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 12 23:51:40.761071 containerd[1520]: time="2025-08-12T23:51:40.760630442Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 12 23:51:40.761071 containerd[1520]: time="2025-08-12T23:51:40.760645105Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 12 23:51:40.761071 containerd[1520]: time="2025-08-12T23:51:40.760836722Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 12 23:51:40.761334 containerd[1520]: time="2025-08-12T23:51:40.761287157Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 12 23:51:40.761520 containerd[1520]: time="2025-08-12T23:51:40.761494641Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 12 23:51:40.761520 containerd[1520]: time="2025-08-12T23:51:40.761516016Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..."
type=io.containerd.sandbox.store.v1 Aug 12 23:51:40.761558 containerd[1520]: time="2025-08-12T23:51:40.761531235Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 12 23:51:40.761558 containerd[1520]: time="2025-08-12T23:51:40.761545711Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 12 23:51:40.761594 containerd[1520]: time="2025-08-12T23:51:40.761558745Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 12 23:51:40.761622 containerd[1520]: time="2025-08-12T23:51:40.761571446Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 12 23:51:40.761643 containerd[1520]: time="2025-08-12T23:51:40.761621056Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 12 23:51:40.761643 containerd[1520]: time="2025-08-12T23:51:40.761636511Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 12 23:51:40.761678 containerd[1520]: time="2025-08-12T23:51:40.761650497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 12 23:51:40.761678 containerd[1520]: time="2025-08-12T23:51:40.761663629Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 12 23:51:40.761678 containerd[1520]: time="2025-08-12T23:51:40.761676056Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 12 23:51:40.761736 containerd[1520]: time="2025-08-12T23:51:40.761701135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Aug 12 23:51:40.761736 containerd[1520]: time="2025-08-12T23:51:40.761714973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.761736 containerd[1520]: time="2025-08-12T23:51:40.761726978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.761794 containerd[1520]: time="2025-08-12T23:51:40.761739572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.761794 containerd[1520]: time="2025-08-12T23:51:40.761751656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.761794 containerd[1520]: time="2025-08-12T23:51:40.761764220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.761794 containerd[1520]: time="2025-08-12T23:51:40.761775197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.761885 containerd[1520]: time="2025-08-12T23:51:40.761796933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.761885 containerd[1520]: time="2025-08-12T23:51:40.761810360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.761885 containerd[1520]: time="2025-08-12T23:51:40.761825590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.761885 containerd[1520]: time="2025-08-12T23:51:40.761848739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.761885 containerd[1520]: time="2025-08-12T23:51:40.761859597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Aug 12 23:51:40.761885 containerd[1520]: time="2025-08-12T23:51:40.761871044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.761885 containerd[1520]: time="2025-08-12T23:51:40.761883981Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 12 23:51:40.762012 containerd[1520]: time="2025-08-12T23:51:40.761906669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.762012 containerd[1520]: time="2025-08-12T23:51:40.761919624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.762012 containerd[1520]: time="2025-08-12T23:51:40.761929749Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 12 23:51:40.762012 containerd[1520]: time="2025-08-12T23:51:40.761990060Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 12 23:51:40.762012 containerd[1520]: time="2025-08-12T23:51:40.762006378Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 12 23:51:40.762158 containerd[1520]: time="2025-08-12T23:51:40.762016708Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 12 23:51:40.762158 containerd[1520]: time="2025-08-12T23:51:40.762037710Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 12 23:51:40.762158 containerd[1520]: time="2025-08-12T23:51:40.762079684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Aug 12 23:51:40.762158 containerd[1520]: time="2025-08-12T23:51:40.762093455Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 12 23:51:40.762158 containerd[1520]: time="2025-08-12T23:51:40.762103500Z" level=info msg="NRI interface is disabled by configuration." Aug 12 23:51:40.762158 containerd[1520]: time="2025-08-12T23:51:40.762112879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 12 23:51:40.762509 containerd[1520]: time="2025-08-12T23:51:40.762441946Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 12 23:51:40.762509 containerd[1520]: time="2025-08-12T23:51:40.762505315Z" level=info msg="Connect containerd service" Aug 12 23:51:40.762663 containerd[1520]: time="2025-08-12T23:51:40.762555434Z" level=info msg="using legacy CRI server" Aug 12 23:51:40.762663 containerd[1520]: time="2025-08-12T23:51:40.762565147Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 12 23:51:40.762663 containerd[1520]: time="2025-08-12T23:51:40.762657221Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 12 23:51:40.763334 containerd[1520]: time="2025-08-12T23:51:40.763302498Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Aug 12 23:51:40.763766 containerd[1520]: time="2025-08-12T23:51:40.763470457Z" level=info msg="Start subscribing containerd event" Aug 12 23:51:40.763766 containerd[1520]: time="2025-08-12T23:51:40.763529201Z" level=info msg="Start recovering state" Aug 12 23:51:40.763766 containerd[1520]: time="2025-08-12T23:51:40.763593511Z" level=info msg="Start event monitor" Aug 12 23:51:40.763766 containerd[1520]: time="2025-08-12T23:51:40.763615836Z" level=info msg="Start snapshots syncer" Aug 12 23:51:40.763766 containerd[1520]: time="2025-08-12T23:51:40.763625440Z" level=info msg="Start cni network conf syncer for default" Aug 12 23:51:40.763766 containerd[1520]: time="2025-08-12T23:51:40.763632977Z" level=info msg="Start streaming server" Aug 12 23:51:40.763766 containerd[1520]: time="2025-08-12T23:51:40.763728785Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 12 23:51:40.764080 containerd[1520]: time="2025-08-12T23:51:40.763797398Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 12 23:51:40.763981 systemd[1]: Started containerd.service - containerd container runtime. Aug 12 23:51:40.764307 containerd[1520]: time="2025-08-12T23:51:40.764263494Z" level=info msg="containerd successfully booted in 0.104927s" Aug 12 23:51:40.989855 tar[1516]: linux-amd64/README.md Aug 12 23:51:41.009177 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 12 23:51:41.438256 systemd-networkd[1428]: eth0: Gained IPv6LL Aug 12 23:51:41.441798 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 12 23:51:41.443713 systemd[1]: Reached target network-online.target - Network is Online. Aug 12 23:51:41.453372 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 12 23:51:41.455881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
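The `failed to load cni during init` error above is expected on a first boot: containerd finds nothing in /etc/cni/net.d until a network plugin installs its config, and the cni conf syncer started a few lines later picks the file up once it appears. A hedged sketch of the kind of conflist that clears the error — the network name, bridge device, and subnet here are illustrative assumptions, not taken from this log:

```shell
# Write a minimal CNI conflist of the shape containerd looks for in /etc/cni/net.d.
# A temp dir stands in for /etc/cni/net.d; all values below are assumed examples.
write_demo_conflist() {
    dir="$1"
    cat > "$dir/10-demo-net.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
    echo "$dir/10-demo-net.conflist"
}

demo_dir=$(mktemp -d)
conf=$(write_demo_conflist "$demo_dir")
echo "wrote $conf"
```

On a real node the CNI plugin (flannel, calico, and so on) drops this file itself when it is deployed; writing one by hand is only for experiments.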
Aug 12 23:51:41.465778 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 12 23:51:41.485895 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 12 23:51:41.486279 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 12 23:51:41.487949 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 12 23:51:41.492134 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 12 23:51:42.708432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:51:42.736628 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 12 23:51:42.736845 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:51:42.753192 systemd[1]: Startup finished in 1.698s (kernel) + 8.412s (initrd) + 5.523s (userspace) = 15.634s. Aug 12 23:51:43.350145 kubelet[1606]: E0812 23:51:43.350023 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:51:43.354665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:51:43.354884 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:51:43.355269 systemd[1]: kubelet.service: Consumed 1.717s CPU time, 265.4M memory peak. Aug 12 23:51:43.508949 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 12 23:51:43.518386 systemd[1]: Started sshd@0-10.0.0.30:22-10.0.0.1:60856.service - OpenSSH per-connection server daemon (10.0.0.1:60856). 
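The kubelet failure above (status=1/FAILURE, `open /var/lib/kubelet/config.yaml: no such file or directory`) is the normal pre-bootstrap state: that file is written by `kubeadm init` or `kubeadm join`, so until one of them runs, every scheduled restart fails the same way, as the later restart attempts in this log show. A minimal sketch of the precondition check, with the path taken from the error message:

```shell
# Report whether the kubeadm-generated kubelet config exists yet.
# Until it does, kubelet exits 1 exactly as in the log above.
check_kubelet_config() {
    if [ -f "$1" ]; then
        echo "present"
    else
        echo "missing"
    fi
}

# Path taken from the kubelet error message in the log.
check_kubelet_config /var/lib/kubelet/config.yaml
```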
Aug 12 23:51:43.567868 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 60856 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:51:43.569846 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:51:43.581804 systemd-logind[1505]: New session 1 of user core. Aug 12 23:51:43.583489 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 12 23:51:43.597375 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 12 23:51:43.610854 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 12 23:51:43.614493 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 12 23:51:43.623721 (systemd)[1623]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:51:43.626852 systemd-logind[1505]: New session c1 of user core. Aug 12 23:51:43.787553 systemd[1623]: Queued start job for default target default.target. Aug 12 23:51:43.802397 systemd[1623]: Created slice app.slice - User Application Slice. Aug 12 23:51:43.802421 systemd[1623]: Reached target paths.target - Paths. Aug 12 23:51:43.802462 systemd[1623]: Reached target timers.target - Timers. Aug 12 23:51:43.804138 systemd[1623]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 12 23:51:43.815759 systemd[1623]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 12 23:51:43.815885 systemd[1623]: Reached target sockets.target - Sockets. Aug 12 23:51:43.815937 systemd[1623]: Reached target basic.target - Basic System. Aug 12 23:51:43.815980 systemd[1623]: Reached target default.target - Main User Target. Aug 12 23:51:43.816012 systemd[1623]: Startup finished in 181ms. Aug 12 23:51:43.816740 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 12 23:51:43.818713 systemd[1]: Started session-1.scope - Session 1 of User core. 
Aug 12 23:51:43.879928 systemd[1]: Started sshd@1-10.0.0.30:22-10.0.0.1:60864.service - OpenSSH per-connection server daemon (10.0.0.1:60864). Aug 12 23:51:43.919685 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 60864 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:51:43.921697 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:51:43.927067 systemd-logind[1505]: New session 2 of user core. Aug 12 23:51:43.937235 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 12 23:51:43.990647 sshd[1636]: Connection closed by 10.0.0.1 port 60864 Aug 12 23:51:43.991068 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Aug 12 23:51:44.003994 systemd[1]: sshd@1-10.0.0.30:22-10.0.0.1:60864.service: Deactivated successfully. Aug 12 23:51:44.006399 systemd[1]: session-2.scope: Deactivated successfully. Aug 12 23:51:44.008616 systemd-logind[1505]: Session 2 logged out. Waiting for processes to exit. Aug 12 23:51:44.026327 systemd[1]: Started sshd@2-10.0.0.30:22-10.0.0.1:60870.service - OpenSSH per-connection server daemon (10.0.0.1:60870). Aug 12 23:51:44.027378 systemd-logind[1505]: Removed session 2. Aug 12 23:51:44.060628 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 60870 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:51:44.062361 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:51:44.067299 systemd-logind[1505]: New session 3 of user core. Aug 12 23:51:44.077203 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 12 23:51:44.126704 sshd[1644]: Connection closed by 10.0.0.1 port 60870 Aug 12 23:51:44.127167 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Aug 12 23:51:44.143350 systemd[1]: sshd@2-10.0.0.30:22-10.0.0.1:60870.service: Deactivated successfully. 
Aug 12 23:51:44.145498 systemd[1]: session-3.scope: Deactivated successfully. Aug 12 23:51:44.147375 systemd-logind[1505]: Session 3 logged out. Waiting for processes to exit. Aug 12 23:51:44.160313 systemd[1]: Started sshd@3-10.0.0.30:22-10.0.0.1:60884.service - OpenSSH per-connection server daemon (10.0.0.1:60884). Aug 12 23:51:44.161346 systemd-logind[1505]: Removed session 3. Aug 12 23:51:44.196966 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 60884 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:51:44.198844 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:51:44.203501 systemd-logind[1505]: New session 4 of user core. Aug 12 23:51:44.225183 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 12 23:51:44.279421 sshd[1652]: Connection closed by 10.0.0.1 port 60884 Aug 12 23:51:44.279852 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Aug 12 23:51:44.293211 systemd[1]: sshd@3-10.0.0.30:22-10.0.0.1:60884.service: Deactivated successfully. Aug 12 23:51:44.295389 systemd[1]: session-4.scope: Deactivated successfully. Aug 12 23:51:44.296891 systemd-logind[1505]: Session 4 logged out. Waiting for processes to exit. Aug 12 23:51:44.310331 systemd[1]: Started sshd@4-10.0.0.30:22-10.0.0.1:60898.service - OpenSSH per-connection server daemon (10.0.0.1:60898). Aug 12 23:51:44.311361 systemd-logind[1505]: Removed session 4. Aug 12 23:51:44.344425 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 60898 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:51:44.345880 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:51:44.350584 systemd-logind[1505]: New session 5 of user core. Aug 12 23:51:44.360186 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 12 23:51:44.421678 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 12 23:51:44.422077 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:51:44.441858 sudo[1661]: pam_unix(sudo:session): session closed for user root Aug 12 23:51:44.443672 sshd[1660]: Connection closed by 10.0.0.1 port 60898 Aug 12 23:51:44.444284 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Aug 12 23:51:44.458247 systemd[1]: sshd@4-10.0.0.30:22-10.0.0.1:60898.service: Deactivated successfully. Aug 12 23:51:44.460513 systemd[1]: session-5.scope: Deactivated successfully. Aug 12 23:51:44.466902 systemd-logind[1505]: Session 5 logged out. Waiting for processes to exit. Aug 12 23:51:44.470462 systemd[1]: Started sshd@5-10.0.0.30:22-10.0.0.1:60904.service - OpenSSH per-connection server daemon (10.0.0.1:60904). Aug 12 23:51:44.471526 systemd-logind[1505]: Removed session 5. Aug 12 23:51:44.505398 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 60904 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:51:44.507592 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:51:44.513757 systemd-logind[1505]: New session 6 of user core. Aug 12 23:51:44.522264 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 12 23:51:44.582118 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 12 23:51:44.582464 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:51:44.588105 sudo[1671]: pam_unix(sudo:session): session closed for user root Aug 12 23:51:44.595683 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 12 23:51:44.596148 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:51:44.621726 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 12 23:51:44.660458 augenrules[1693]: No rules Aug 12 23:51:44.662537 systemd[1]: audit-rules.service: Deactivated successfully. Aug 12 23:51:44.662865 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 12 23:51:44.664333 sudo[1670]: pam_unix(sudo:session): session closed for user root Aug 12 23:51:44.666409 sshd[1669]: Connection closed by 10.0.0.1 port 60904 Aug 12 23:51:44.666892 sshd-session[1666]: pam_unix(sshd:session): session closed for user core Aug 12 23:51:44.681614 systemd[1]: sshd@5-10.0.0.30:22-10.0.0.1:60904.service: Deactivated successfully. Aug 12 23:51:44.684070 systemd[1]: session-6.scope: Deactivated successfully. Aug 12 23:51:44.686201 systemd-logind[1505]: Session 6 logged out. Waiting for processes to exit. Aug 12 23:51:44.698681 systemd[1]: Started sshd@6-10.0.0.30:22-10.0.0.1:60914.service - OpenSSH per-connection server daemon (10.0.0.1:60914). Aug 12 23:51:44.699960 systemd-logind[1505]: Removed session 6. Aug 12 23:51:44.738194 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 60914 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:51:44.741160 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:51:44.747355 systemd-logind[1505]: New session 7 of user core. 
Aug 12 23:51:44.757365 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 12 23:51:44.813350 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 12 23:51:44.813779 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:51:45.320279 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 12 23:51:45.320651 (dockerd)[1724]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 12 23:51:46.317934 dockerd[1724]: time="2025-08-12T23:51:46.317859657Z" level=info msg="Starting up" Aug 12 23:51:46.884818 dockerd[1724]: time="2025-08-12T23:51:46.884733022Z" level=info msg="Loading containers: start." Aug 12 23:51:47.590084 kernel: Initializing XFRM netlink socket Aug 12 23:51:47.683564 systemd-networkd[1428]: docker0: Link UP Aug 12 23:51:47.761264 dockerd[1724]: time="2025-08-12T23:51:47.761193251Z" level=info msg="Loading containers: done." Aug 12 23:51:47.781541 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3376320135-merged.mount: Deactivated successfully. 
Aug 12 23:51:47.796998 dockerd[1724]: time="2025-08-12T23:51:47.796925056Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 12 23:51:47.797130 dockerd[1724]: time="2025-08-12T23:51:47.797104953Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Aug 12 23:51:47.797308 dockerd[1724]: time="2025-08-12T23:51:47.797273517Z" level=info msg="Daemon has completed initialization" Aug 12 23:51:47.952988 dockerd[1724]: time="2025-08-12T23:51:47.952269648Z" level=info msg="API listen on /run/docker.sock" Aug 12 23:51:47.952538 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 12 23:51:48.902118 containerd[1520]: time="2025-08-12T23:51:48.902038279Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 12 23:51:49.937998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount830858631.mount: Deactivated successfully. Aug 12 23:51:53.400916 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 12 23:51:53.435361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:51:53.675348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
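The dockerd warning above about not using native diff refers to the kernel's CONFIG_OVERLAY_FS_REDIRECT_DIR option: when it is enabled, docker falls back to a slower diff path when building images. A sketch of how one might inspect that option where a kernel config file is exposed — the /boot/config path is a common convention and an assumption here, since Flatcar may not ship one at all:

```shell
# Look up CONFIG_OVERLAY_FS_REDIRECT_DIR in the running kernel's config, if readable.
# /boot/config-$(uname -r) is a conventional location, not something shown in this log.
check_redirect_dir() {
    cfg="/boot/config-$(uname -r)"
    if [ -r "$cfg" ]; then
        grep 'CONFIG_OVERLAY_FS_REDIRECT_DIR' "$cfg" || echo "option not listed in $cfg"
    else
        echo "kernel config not readable at $cfg"
    fi
}

check_redirect_dir
```

Either branch prints a diagnostic line, so the check is safe to run on hosts without a readable kernel config.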
Aug 12 23:51:53.680423 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 12 23:51:53.792978 kubelet[1981]: E0812 23:51:53.792628 1981 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:51:53.799909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:51:53.800169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:51:53.800576 systemd[1]: kubelet.service: Consumed 277ms CPU time, 110.6M memory peak.
Aug 12 23:51:54.082545 containerd[1520]: time="2025-08-12T23:51:54.082361775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:51:54.084181 containerd[1520]: time="2025-08-12T23:51:54.084104556Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994"
Aug 12 23:51:54.116864 containerd[1520]: time="2025-08-12T23:51:54.116749511Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:51:54.135303 containerd[1520]: time="2025-08-12T23:51:54.135245232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:51:54.136415 containerd[1520]: time="2025-08-12T23:51:54.136371342Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 5.234251817s"
Aug 12 23:51:54.136415 containerd[1520]: time="2025-08-12T23:51:54.136424233Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\""
Aug 12 23:51:54.137279 containerd[1520]: time="2025-08-12T23:51:54.137129531Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Aug 12 23:51:57.794141 containerd[1520]: time="2025-08-12T23:51:57.794031980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:51:57.795034 containerd[1520]: time="2025-08-12T23:51:57.794772001Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636"
Aug 12 23:51:57.796029 containerd[1520]: time="2025-08-12T23:51:57.795983183Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:51:57.799277 containerd[1520]: time="2025-08-12T23:51:57.799216111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:51:57.800363 containerd[1520]: time="2025-08-12T23:51:57.800326102Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 3.663168711s"
Aug 12 23:51:57.800363 containerd[1520]: time="2025-08-12T23:51:57.800357950Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\""
Aug 12 23:51:57.800876 containerd[1520]: time="2025-08-12T23:51:57.800839378Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 12 23:52:00.196287 containerd[1520]: time="2025-08-12T23:52:00.196194522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:00.197271 containerd[1520]: time="2025-08-12T23:52:00.197201801Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921"
Aug 12 23:52:00.198838 containerd[1520]: time="2025-08-12T23:52:00.198790914Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:00.202557 containerd[1520]: time="2025-08-12T23:52:00.202486320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:00.204092 containerd[1520]: time="2025-08-12T23:52:00.203997834Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 2.40312244s"
Aug 12 23:52:00.204092 containerd[1520]: time="2025-08-12T23:52:00.204086986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\""
Aug 12 23:52:00.205019 containerd[1520]: time="2025-08-12T23:52:00.204980895Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Aug 12 23:52:03.143621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663913217.mount: Deactivated successfully.
Aug 12 23:52:03.901221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 12 23:52:03.918390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:52:04.212296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:04.227262 (kubelet)[2013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 12 23:52:04.447512 kubelet[2013]: E0812 23:52:04.447438 2013 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:52:04.454167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:52:04.454455 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:52:04.455293 systemd[1]: kubelet.service: Consumed 413ms CPU time, 111.1M memory peak.
Aug 12 23:52:05.491916 containerd[1520]: time="2025-08-12T23:52:05.490818412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:05.493971 containerd[1520]: time="2025-08-12T23:52:05.493903962Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380"
Aug 12 23:52:05.495952 containerd[1520]: time="2025-08-12T23:52:05.495891663Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:05.499579 containerd[1520]: time="2025-08-12T23:52:05.499491900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:05.500250 containerd[1520]: time="2025-08-12T23:52:05.500201502Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 5.295184695s"
Aug 12 23:52:05.500250 containerd[1520]: time="2025-08-12T23:52:05.500240856Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\""
Aug 12 23:52:05.501095 containerd[1520]: time="2025-08-12T23:52:05.500748385Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 12 23:52:06.390558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819894398.mount: Deactivated successfully.
Aug 12 23:52:08.596721 containerd[1520]: time="2025-08-12T23:52:08.596572698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:08.599299 containerd[1520]: time="2025-08-12T23:52:08.599227361Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Aug 12 23:52:08.600308 containerd[1520]: time="2025-08-12T23:52:08.600239555Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:08.607261 containerd[1520]: time="2025-08-12T23:52:08.607069445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:08.609405 containerd[1520]: time="2025-08-12T23:52:08.609278924Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.108494975s"
Aug 12 23:52:08.609405 containerd[1520]: time="2025-08-12T23:52:08.609362978Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 12 23:52:08.610918 containerd[1520]: time="2025-08-12T23:52:08.610528270Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 12 23:52:09.705851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308653303.mount: Deactivated successfully.
Aug 12 23:52:09.720710 containerd[1520]: time="2025-08-12T23:52:09.720591719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:09.722752 containerd[1520]: time="2025-08-12T23:52:09.722621265Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Aug 12 23:52:09.724757 containerd[1520]: time="2025-08-12T23:52:09.724472328Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:09.729732 containerd[1520]: time="2025-08-12T23:52:09.729414485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:09.730333 containerd[1520]: time="2025-08-12T23:52:09.730237616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.119655834s"
Aug 12 23:52:09.730333 containerd[1520]: time="2025-08-12T23:52:09.730280948Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 12 23:52:09.732831 containerd[1520]: time="2025-08-12T23:52:09.730893188Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 12 23:52:11.488812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4012131964.mount: Deactivated successfully.
Aug 12 23:52:13.297640 containerd[1520]: time="2025-08-12T23:52:13.297556685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:13.301626 containerd[1520]: time="2025-08-12T23:52:13.301534952Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
Aug 12 23:52:13.675441 containerd[1520]: time="2025-08-12T23:52:13.675373938Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:13.679140 containerd[1520]: time="2025-08-12T23:52:13.679092627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:13.680165 containerd[1520]: time="2025-08-12T23:52:13.680134334Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.949210548s"
Aug 12 23:52:13.680214 containerd[1520]: time="2025-08-12T23:52:13.680166155Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Aug 12 23:52:14.563100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 12 23:52:14.575256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:52:14.744593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:14.749473 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 12 23:52:14.788103 kubelet[2164]: E0812 23:52:14.787951 2164 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 12 23:52:14.792156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 12 23:52:14.792402 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 12 23:52:14.792787 systemd[1]: kubelet.service: Consumed 212ms CPU time, 111.9M memory peak.
Aug 12 23:52:16.068878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:16.069119 systemd[1]: kubelet.service: Consumed 212ms CPU time, 111.9M memory peak.
Aug 12 23:52:16.087351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:52:16.118992 systemd[1]: Reload requested from client PID 2179 ('systemctl') (unit session-7.scope)...
Aug 12 23:52:16.119014 systemd[1]: Reloading...
Aug 12 23:52:16.256070 zram_generator::config[2224]: No configuration found.
Aug 12 23:52:17.761431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:52:17.867928 systemd[1]: Reloading finished in 1748 ms.
Aug 12 23:52:17.919725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:17.923952 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:52:17.925295 systemd[1]: kubelet.service: Deactivated successfully.
Aug 12 23:52:17.925586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:17.925635 systemd[1]: kubelet.service: Consumed 173ms CPU time, 98.2M memory peak.
Aug 12 23:52:17.927434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:52:18.099912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:18.104377 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 12 23:52:18.144295 kubelet[2273]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:52:18.144295 kubelet[2273]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 12 23:52:18.144295 kubelet[2273]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:52:18.144769 kubelet[2273]: I0812 23:52:18.144335 2273 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 12 23:52:18.378738 kubelet[2273]: I0812 23:52:18.378590 2273 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 12 23:52:18.378738 kubelet[2273]: I0812 23:52:18.378628 2273 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 12 23:52:18.378910 kubelet[2273]: I0812 23:52:18.378893 2273 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 12 23:52:18.689215 kubelet[2273]: E0812 23:52:18.689146 2273 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:52:18.690615 kubelet[2273]: I0812 23:52:18.690566 2273 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 12 23:52:18.697023 kubelet[2273]: E0812 23:52:18.696985 2273 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 12 23:52:18.697023 kubelet[2273]: I0812 23:52:18.697015 2273 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 12 23:52:18.703399 kubelet[2273]: I0812 23:52:18.703361 2273 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 12 23:52:18.890118 kubelet[2273]: I0812 23:52:18.889975 2273 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 12 23:52:18.890330 kubelet[2273]: I0812 23:52:18.890102 2273 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 12 23:52:18.890460 kubelet[2273]: I0812 23:52:18.890346 2273 topology_manager.go:138] "Creating topology manager with none policy"
Aug 12 23:52:18.890460 kubelet[2273]: I0812 23:52:18.890357 2273 container_manager_linux.go:304] "Creating device plugin manager"
Aug 12 23:52:18.890567 kubelet[2273]: I0812 23:52:18.890538 2273 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:52:18.897252 kubelet[2273]: I0812 23:52:18.897213 2273 kubelet.go:446] "Attempting to sync node with API server"
Aug 12 23:52:18.897252 kubelet[2273]: I0812 23:52:18.897254 2273 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 12 23:52:18.897332 kubelet[2273]: I0812 23:52:18.897284 2273 kubelet.go:352] "Adding apiserver pod source"
Aug 12 23:52:18.897332 kubelet[2273]: I0812 23:52:18.897301 2273 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 12 23:52:18.901671 kubelet[2273]: I0812 23:52:18.901619 2273 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Aug 12 23:52:18.902041 kubelet[2273]: I0812 23:52:18.902018 2273 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 12 23:52:18.902133 kubelet[2273]: W0812 23:52:18.902112 2273 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 12 23:52:18.908283 kubelet[2273]: I0812 23:52:18.908243 2273 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 12 23:52:18.908345 kubelet[2273]: I0812 23:52:18.908292 2273 server.go:1287] "Started kubelet"
Aug 12 23:52:18.909734 kubelet[2273]: W0812 23:52:18.908965 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:52:18.909734 kubelet[2273]: E0812 23:52:18.909027 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:52:18.910295 kubelet[2273]: I0812 23:52:18.910268 2273 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 12 23:52:18.910502 kubelet[2273]: W0812 23:52:18.910464 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:52:18.910540 kubelet[2273]: E0812 23:52:18.910510 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:52:18.910601 kubelet[2273]: I0812 23:52:18.910547 2273 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 12 23:52:18.911486 kubelet[2273]: I0812 23:52:18.911438 2273 server.go:479] "Adding debug handlers to kubelet server"
Aug 12 23:52:18.911664 kubelet[2273]: I0812 23:52:18.911472 2273 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 12 23:52:18.913461 kubelet[2273]: I0812 23:52:18.911766 2273 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 12 23:52:18.913461 kubelet[2273]: I0812 23:52:18.911937 2273 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 12 23:52:18.913461 kubelet[2273]: E0812 23:52:18.912400 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:52:18.913461 kubelet[2273]: I0812 23:52:18.912448 2273 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 12 23:52:18.913461 kubelet[2273]: I0812 23:52:18.912609 2273 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 12 23:52:18.913461 kubelet[2273]: I0812 23:52:18.912655 2273 reconciler.go:26] "Reconciler: start to sync state"
Aug 12 23:52:18.913461 kubelet[2273]: W0812 23:52:18.912916 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:52:18.913461 kubelet[2273]: E0812 23:52:18.912946 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:52:18.913461 kubelet[2273]: E0812 23:52:18.913396 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="200ms"
Aug 12 23:52:18.916897 kubelet[2273]: I0812 23:52:18.916864 2273 factory.go:221] Registration of the systemd container factory successfully
Aug 12 23:52:18.918464 kubelet[2273]: I0812 23:52:18.918438 2273 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 12 23:52:18.920529 kubelet[2273]: I0812 23:52:18.920184 2273 factory.go:221] Registration of the containerd container factory successfully
Aug 12 23:52:18.920740 kubelet[2273]: E0812 23:52:18.918179 2273 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.30:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.30:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2a128a418a81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:52:18.908269185 +0000 UTC m=+0.798919352,LastTimestamp:2025-08-12 23:52:18.908269185 +0000 UTC m=+0.798919352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 12 23:52:18.921170 kubelet[2273]: E0812 23:52:18.921015 2273 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 12 23:52:18.937348 kubelet[2273]: I0812 23:52:18.937121 2273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 12 23:52:18.941847 kubelet[2273]: I0812 23:52:18.941168 2273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 12 23:52:18.941847 kubelet[2273]: I0812 23:52:18.941209 2273 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 12 23:52:18.941847 kubelet[2273]: I0812 23:52:18.941238 2273 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 12 23:52:18.941847 kubelet[2273]: I0812 23:52:18.941246 2273 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 12 23:52:18.941847 kubelet[2273]: E0812 23:52:18.941296 2273 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 12 23:52:18.942334 kubelet[2273]: I0812 23:52:18.942087 2273 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 12 23:52:18.942334 kubelet[2273]: I0812 23:52:18.942105 2273 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 12 23:52:18.942334 kubelet[2273]: I0812 23:52:18.942132 2273 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:52:18.943038 kubelet[2273]: W0812 23:52:18.942583 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:52:18.943038 kubelet[2273]: E0812 23:52:18.942631 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:52:19.013452 kubelet[2273]: E0812 23:52:19.013357 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:52:19.041706 kubelet[2273]: E0812 23:52:19.041624 2273 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 12 23:52:19.083861 kubelet[2273]: I0812 23:52:19.083787 2273 policy_none.go:49] "None policy: Start"
Aug 12 23:52:19.083861 kubelet[2273]: I0812 23:52:19.083843 2273 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 12 23:52:19.083861 kubelet[2273]: I0812 23:52:19.083861 2273 state_mem.go:35] "Initializing new in-memory state store"
Aug 12 23:52:19.091943 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 12 23:52:19.110480 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 12 23:52:19.113564 kubelet[2273]: E0812 23:52:19.113509 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:52:19.113907 kubelet[2273]: E0812 23:52:19.113867 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="400ms"
Aug 12 23:52:19.114930 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 12 23:52:19.124094 kubelet[2273]: I0812 23:52:19.124034 2273 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 12 23:52:19.124359 kubelet[2273]: I0812 23:52:19.124325 2273 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 12 23:52:19.124359 kubelet[2273]: I0812 23:52:19.124344 2273 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 12 23:52:19.124645 kubelet[2273]: I0812 23:52:19.124628 2273 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 12 23:52:19.125494 kubelet[2273]: E0812 23:52:19.125465 2273 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 12 23:52:19.125592 kubelet[2273]: E0812 23:52:19.125526 2273 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 12 23:52:19.227092 kubelet[2273]: I0812 23:52:19.226930 2273 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 12 23:52:19.227879 kubelet[2273]: E0812 23:52:19.227831 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Aug 12 23:52:19.250667 systemd[1]: Created slice kubepods-burstable-pod5d82f190b5723cac0a961d98e8c7d928.slice - libcontainer container kubepods-burstable-pod5d82f190b5723cac0a961d98e8c7d928.slice.
Aug 12 23:52:19.266004 kubelet[2273]: E0812 23:52:19.265959 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:52:19.269344 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice - libcontainer container kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice.
Aug 12 23:52:19.271173 kubelet[2273]: E0812 23:52:19.271145 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:52:19.273476 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice - libcontainer container kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice.
Aug 12 23:52:19.275026 kubelet[2273]: E0812 23:52:19.274993 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:52:19.314420 kubelet[2273]: I0812 23:52:19.314372 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost"
Aug 12 23:52:19.314420 kubelet[2273]: I0812 23:52:19.314418 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d82f190b5723cac0a961d98e8c7d928-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d82f190b5723cac0a961d98e8c7d928\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:52:19.314546 kubelet[2273]: I0812 23:52:19.314441 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d82f190b5723cac0a961d98e8c7d928-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5d82f190b5723cac0a961d98e8c7d928\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:52:19.314546 kubelet[2273]: I0812 23:52:19.314459 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:52:19.314546 kubelet[2273]: I0812 23:52:19.314478 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:52:19.314546 kubelet[2273]: I0812 23:52:19.314495 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:52:19.314546 kubelet[2273]: I0812 23:52:19.314513 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:52:19.314730 kubelet[2273]: I0812 23:52:19.314529 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d82f190b5723cac0a961d98e8c7d928-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d82f190b5723cac0a961d98e8c7d928\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:52:19.314730 kubelet[2273]: I0812 23:52:19.314586 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:52:19.430404 kubelet[2273]: I0812 23:52:19.430350 2273 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 12 23:52:19.430856 kubelet[2273]: E0812 23:52:19.430808 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Aug 12 23:52:19.514731 kubelet[2273]: E0812 23:52:19.514607 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="800ms"
Aug 12 23:52:19.567877 containerd[1520]: time="2025-08-12T23:52:19.567813708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5d82f190b5723cac0a961d98e8c7d928,Namespace:kube-system,Attempt:0,}"
Aug 12 23:52:19.572759 containerd[1520]: time="2025-08-12T23:52:19.572466341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}"
Aug 12 23:52:19.578301 containerd[1520]: time="2025-08-12T23:52:19.578233007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}"
Aug 12 23:52:19.770337 kubelet[2273]: W0812 23:52:19.770153 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:52:19.770337 kubelet[2273]: E0812 23:52:19.770248 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:52:19.832400 kubelet[2273]: I0812 23:52:19.832352 2273 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 12 23:52:19.832755 kubelet[2273]: E0812 23:52:19.832711 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Aug 12 23:52:19.891758 kubelet[2273]: W0812 23:52:19.891623 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:52:19.891758 kubelet[2273]: E0812 23:52:19.891727 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:52:20.106330 kubelet[2273]: W0812 23:52:20.106171 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:52:20.106330 kubelet[2273]: E0812 23:52:20.106221 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:52:20.173963 kubelet[2273]: W0812 23:52:20.173889 2273 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Aug 12 23:52:20.173963 kubelet[2273]: E0812 23:52:20.173960 2273 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:52:20.315628 kubelet[2273]: E0812 23:52:20.315546 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="1.6s"
Aug 12 23:52:20.635013 kubelet[2273]: I0812 23:52:20.634967 2273 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 12 23:52:20.635381 kubelet[2273]: E0812 23:52:20.635348 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Aug 12 23:52:20.743019 kubelet[2273]: E0812 23:52:20.742974 2273 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:52:20.973324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806788810.mount: Deactivated successfully.
Aug 12 23:52:20.982203 containerd[1520]: time="2025-08-12T23:52:20.982137844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:52:20.985966 containerd[1520]: time="2025-08-12T23:52:20.985912319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Aug 12 23:52:20.987161 containerd[1520]: time="2025-08-12T23:52:20.987129646Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:52:20.989163 containerd[1520]: time="2025-08-12T23:52:20.989129824Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:52:20.990147 containerd[1520]: time="2025-08-12T23:52:20.990005411Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 12 23:52:20.991172 containerd[1520]: time="2025-08-12T23:52:20.991128228Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:52:20.992121 containerd[1520]: time="2025-08-12T23:52:20.991999145Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 12 23:52:20.993125 containerd[1520]: time="2025-08-12T23:52:20.993085493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 12 23:52:20.993804 containerd[1520]: time="2025-08-12T23:52:20.993776919Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.421230916s"
Aug 12 23:52:20.996239 containerd[1520]: time="2025-08-12T23:52:20.996210321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.428251506s"
Aug 12 23:52:20.998969 containerd[1520]: time="2025-08-12T23:52:20.998921271Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.420560612s"
Aug 12 23:52:21.164829 containerd[1520]: time="2025-08-12T23:52:21.164721147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:52:21.164829 containerd[1520]: time="2025-08-12T23:52:21.164812932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:52:21.164829 containerd[1520]: time="2025-08-12T23:52:21.164827118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:52:21.165245 containerd[1520]: time="2025-08-12T23:52:21.164914114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:52:21.168720 containerd[1520]: time="2025-08-12T23:52:21.168617769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:52:21.168980 containerd[1520]: time="2025-08-12T23:52:21.168700175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:52:21.168980 containerd[1520]: time="2025-08-12T23:52:21.168715193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:52:21.168980 containerd[1520]: time="2025-08-12T23:52:21.168784956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:52:21.174935 containerd[1520]: time="2025-08-12T23:52:21.171859514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:52:21.174935 containerd[1520]: time="2025-08-12T23:52:21.172006964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:52:21.174935 containerd[1520]: time="2025-08-12T23:52:21.172093760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:52:21.174935 containerd[1520]: time="2025-08-12T23:52:21.172395654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:52:21.191227 systemd[1]: Started cri-containerd-60b27f697bbb1c07fd82cf1139173508d6f8df3d6adf17a0bfb03a3d38031e0d.scope - libcontainer container 60b27f697bbb1c07fd82cf1139173508d6f8df3d6adf17a0bfb03a3d38031e0d.
Aug 12 23:52:21.210095 systemd[1]: Started cri-containerd-3afd05e7c57138a660aefaf5fa90e4a61dfb45fa231e707ba47e62766c8cfc9b.scope - libcontainer container 3afd05e7c57138a660aefaf5fa90e4a61dfb45fa231e707ba47e62766c8cfc9b.
Aug 12 23:52:21.212518 systemd[1]: Started cri-containerd-cf51e39d79cb4bdff4a50bb67c73591866450ece05b07362c1680847266f76c2.scope - libcontainer container cf51e39d79cb4bdff4a50bb67c73591866450ece05b07362c1680847266f76c2.
Aug 12 23:52:21.271228 containerd[1520]: time="2025-08-12T23:52:21.271103337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"60b27f697bbb1c07fd82cf1139173508d6f8df3d6adf17a0bfb03a3d38031e0d\""
Aug 12 23:52:21.279369 containerd[1520]: time="2025-08-12T23:52:21.279330404Z" level=info msg="CreateContainer within sandbox \"60b27f697bbb1c07fd82cf1139173508d6f8df3d6adf17a0bfb03a3d38031e0d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 12 23:52:21.279688 containerd[1520]: time="2025-08-12T23:52:21.279518622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf51e39d79cb4bdff4a50bb67c73591866450ece05b07362c1680847266f76c2\""
Aug 12 23:52:21.282634 containerd[1520]: time="2025-08-12T23:52:21.282535531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5d82f190b5723cac0a961d98e8c7d928,Namespace:kube-system,Attempt:0,} returns sandbox id \"3afd05e7c57138a660aefaf5fa90e4a61dfb45fa231e707ba47e62766c8cfc9b\""
Aug 12 23:52:21.283283 containerd[1520]: time="2025-08-12T23:52:21.283261111Z" level=info msg="CreateContainer within sandbox \"cf51e39d79cb4bdff4a50bb67c73591866450ece05b07362c1680847266f76c2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 12 23:52:21.285481 containerd[1520]: time="2025-08-12T23:52:21.285273087Z" level=info msg="CreateContainer within sandbox \"3afd05e7c57138a660aefaf5fa90e4a61dfb45fa231e707ba47e62766c8cfc9b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 12 23:52:21.298292 containerd[1520]: time="2025-08-12T23:52:21.298251422Z" level=info msg="CreateContainer within sandbox \"60b27f697bbb1c07fd82cf1139173508d6f8df3d6adf17a0bfb03a3d38031e0d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2a0fe0aa4da1547f56d7afeb37395fac9b206b4542f73054016b3bc692311384\""
Aug 12 23:52:21.299071 containerd[1520]: time="2025-08-12T23:52:21.299011779Z" level=info msg="StartContainer for \"2a0fe0aa4da1547f56d7afeb37395fac9b206b4542f73054016b3bc692311384\""
Aug 12 23:52:21.313210 containerd[1520]: time="2025-08-12T23:52:21.313115273Z" level=info msg="CreateContainer within sandbox \"3afd05e7c57138a660aefaf5fa90e4a61dfb45fa231e707ba47e62766c8cfc9b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"834815df8cfb0b179028e9264fce0614bfebadf7ae3a7858295c1936d73ced32\""
Aug 12 23:52:21.314911 containerd[1520]: time="2025-08-12T23:52:21.314861145Z" level=info msg="CreateContainer within sandbox \"cf51e39d79cb4bdff4a50bb67c73591866450ece05b07362c1680847266f76c2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a46ad0e98e837411d2cf46399f8fc574a835902b354d9600f35aa21755a24bbc\""
Aug 12 23:52:21.315658 containerd[1520]: time="2025-08-12T23:52:21.315618965Z" level=info msg="StartContainer for \"a46ad0e98e837411d2cf46399f8fc574a835902b354d9600f35aa21755a24bbc\""
Aug 12 23:52:21.316908 containerd[1520]: time="2025-08-12T23:52:21.315672307Z" level=info msg="StartContainer for \"834815df8cfb0b179028e9264fce0614bfebadf7ae3a7858295c1936d73ced32\""
Aug 12 23:52:21.333467 systemd[1]: Started cri-containerd-2a0fe0aa4da1547f56d7afeb37395fac9b206b4542f73054016b3bc692311384.scope - libcontainer container 2a0fe0aa4da1547f56d7afeb37395fac9b206b4542f73054016b3bc692311384.
Aug 12 23:52:21.348221 systemd[1]: Started cri-containerd-a46ad0e98e837411d2cf46399f8fc574a835902b354d9600f35aa21755a24bbc.scope - libcontainer container a46ad0e98e837411d2cf46399f8fc574a835902b354d9600f35aa21755a24bbc.
Aug 12 23:52:21.358256 systemd[1]: Started cri-containerd-834815df8cfb0b179028e9264fce0614bfebadf7ae3a7858295c1936d73ced32.scope - libcontainer container 834815df8cfb0b179028e9264fce0614bfebadf7ae3a7858295c1936d73ced32.
Aug 12 23:52:21.395827 containerd[1520]: time="2025-08-12T23:52:21.395762339Z" level=info msg="StartContainer for \"2a0fe0aa4da1547f56d7afeb37395fac9b206b4542f73054016b3bc692311384\" returns successfully"
Aug 12 23:52:21.402710 containerd[1520]: time="2025-08-12T23:52:21.402653305Z" level=info msg="StartContainer for \"a46ad0e98e837411d2cf46399f8fc574a835902b354d9600f35aa21755a24bbc\" returns successfully"
Aug 12 23:52:21.414779 containerd[1520]: time="2025-08-12T23:52:21.414706080Z" level=info msg="StartContainer for \"834815df8cfb0b179028e9264fce0614bfebadf7ae3a7858295c1936d73ced32\" returns successfully"
Aug 12 23:52:21.951848 kubelet[2273]: E0812 23:52:21.951796 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:52:21.953947 kubelet[2273]: E0812 23:52:21.953914 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:52:21.956520 kubelet[2273]: E0812 23:52:21.956443 2273 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 12 23:52:22.237837 kubelet[2273]: I0812 23:52:22.237668 2273 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 12 23:52:22.758256 kubelet[2273]: E0812 23:52:22.758217 2273 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Aug 12 23:52:22.910345 kubelet[2273]: I0812 23:52:22.910296 2273 apiserver.go:52] "Watching apiserver"
Aug 12 23:52:22.913223 kubelet[2273]: I0812 23:52:22.913180 2273 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 12 23:52:22.923477 kubelet[2273]: I0812 23:52:22.923416 2273 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Aug 12 23:52:22.923477 kubelet[2273]: E0812 23:52:22.923459 2273 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Aug 12 23:52:22.957345 kubelet[2273]: I0812 23:52:22.957312 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:52:22.957808 kubelet[2273]: I0812 23:52:22.957366 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:52:22.979289 kubelet[2273]: E0812 23:52:22.979233 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:52:22.979489 kubelet[2273]: E0812 23:52:22.979337 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:52:23.014518 kubelet[2273]: I0812 23:52:23.014338 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:52:23.017810 kubelet[2273]: E0812 23:52:23.017264 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:52:23.017810 kubelet[2273]: I0812 23:52:23.017303 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:52:23.019019 kubelet[2273]: E0812 23:52:23.018979 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:52:23.019019 kubelet[2273]: I0812 23:52:23.019002 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:52:23.021123 kubelet[2273]: E0812 23:52:23.021092 2273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:52:23.959647 kubelet[2273]: I0812 23:52:23.959600 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:52:24.687733 kubelet[2273]: I0812 23:52:24.687686 2273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:52:25.055501 systemd[1]: Reload requested from client PID 2554 ('systemctl') (unit session-7.scope)...
Aug 12 23:52:25.055516 systemd[1]: Reloading...
Aug 12 23:52:25.125393 update_engine[1508]: I20250812 23:52:25.123157 1508 update_attempter.cc:509] Updating boot flags...
Aug 12 23:52:25.200139 zram_generator::config[2614]: No configuration found.
Aug 12 23:52:25.209135 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (2607)
Aug 12 23:52:25.233168 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (2612)
Aug 12 23:52:25.344909 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:52:25.476100 systemd[1]: Reloading finished in 420 ms.
Aug 12 23:52:25.547343 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:52:25.565159 systemd[1]: kubelet.service: Deactivated successfully.
Aug 12 23:52:25.565559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:25.565629 systemd[1]: kubelet.service: Consumed 900ms CPU time, 137.9M memory peak.
Aug 12 23:52:25.577721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 12 23:52:25.779565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 12 23:52:25.784820 (kubelet)[2657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 12 23:52:25.825909 kubelet[2657]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:52:25.825909 kubelet[2657]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 12 23:52:25.825909 kubelet[2657]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:52:25.826516 kubelet[2657]: I0812 23:52:25.826135 2657 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 12 23:52:25.833245 kubelet[2657]: I0812 23:52:25.833199 2657 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 12 23:52:25.833245 kubelet[2657]: I0812 23:52:25.833229 2657 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 12 23:52:25.833490 kubelet[2657]: I0812 23:52:25.833466 2657 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 12 23:52:25.834618 kubelet[2657]: I0812 23:52:25.834593 2657 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 12 23:52:25.836954 kubelet[2657]: I0812 23:52:25.836909 2657 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 12 23:52:25.841629 kubelet[2657]: E0812 23:52:25.841575 2657 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 12 23:52:25.841629 kubelet[2657]: I0812 23:52:25.841616 2657 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 12 23:52:25.848516 kubelet[2657]: I0812 23:52:25.848461 2657 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 12 23:52:25.848815 kubelet[2657]: I0812 23:52:25.848763 2657 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 12 23:52:25.849017 kubelet[2657]: I0812 23:52:25.848801 2657 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 12 23:52:25.849125 kubelet[2657]: I0812 23:52:25.849024 2657 topology_manager.go:138] "Creating topology manager with none policy"
Aug 12 23:52:25.849125 kubelet[2657]: I0812 23:52:25.849037 2657 container_manager_linux.go:304] "Creating device plugin manager"
Aug 12 23:52:25.849169 kubelet[2657]: I0812 23:52:25.849126 2657 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:52:25.849336 kubelet[2657]: I0812 23:52:25.849308 2657 kubelet.go:446] "Attempting to sync node with API server"
Aug 12 23:52:25.849366 kubelet[2657]: I0812 23:52:25.849336 2657 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 12 23:52:25.849366 kubelet[2657]: I0812 23:52:25.849355 2657 kubelet.go:352] "Adding apiserver pod source"
Aug 12 23:52:25.849366 kubelet[2657]: I0812 23:52:25.849366 2657 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 12 23:52:25.853075 kubelet[2657]: I0812 23:52:25.850494 2657 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Aug 12 23:52:25.853075 kubelet[2657]: I0812 23:52:25.850887 2657 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 12 23:52:25.853075 kubelet[2657]: I0812 23:52:25.851404 2657 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 12 23:52:25.853075 kubelet[2657]: I0812 23:52:25.851454 2657 server.go:1287] "Started kubelet"
Aug 12 23:52:25.853075 kubelet[2657]: I0812 23:52:25.851784 2657 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 12 23:52:25.853075 kubelet[2657]: I0812 23:52:25.851889 2657 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 12 23:52:25.853075 kubelet[2657]: I0812 23:52:25.852266 2657 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 12 23:52:25.853075 kubelet[2657]: I0812 23:52:25.852746 2657 server.go:479] "Adding debug handlers to kubelet server"
Aug 12 23:52:25.854906 kubelet[2657]: I0812 23:52:25.854880 2657 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 12 23:52:25.855024 kubelet[2657]: I0812 23:52:25.854995 2657 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 12 23:52:25.858716 kubelet[2657]: E0812 23:52:25.858679 2657 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 12 23:52:25.859318 kubelet[2657]: I0812 23:52:25.859291 2657 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 12 23:52:25.861974 kubelet[2657]: I0812 23:52:25.861953 2657 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 12 23:52:25.862162 kubelet[2657]: I0812 23:52:25.862133 2657 factory.go:221] Registration of the systemd container factory successfully
Aug 12 23:52:25.862379 kubelet[2657]: I0812 23:52:25.862350 2657 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 12 23:52:25.864635 kubelet[2657]: I0812 23:52:25.864608 2657 reconciler.go:26] "Reconciler: start to sync state"
Aug 12 23:52:25.865321 kubelet[2657]: I0812 23:52:25.865292 2657 factory.go:221] Registration of the containerd container factory successfully
Aug 12 23:52:25.872535 kubelet[2657]: E0812 23:52:25.872498 2657 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 12 23:52:25.878951 kubelet[2657]: I0812 23:52:25.878912 2657 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 12 23:52:25.880751 kubelet[2657]: I0812 23:52:25.880713 2657 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 12 23:52:25.880804 kubelet[2657]: I0812 23:52:25.880756 2657 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 12 23:52:25.880804 kubelet[2657]: I0812 23:52:25.880782 2657 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 12 23:52:25.880804 kubelet[2657]: I0812 23:52:25.880792 2657 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 12 23:52:25.880872 kubelet[2657]: E0812 23:52:25.880850 2657 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 12 23:52:25.903624 kubelet[2657]: I0812 23:52:25.903583 2657 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 12 23:52:25.903624 kubelet[2657]: I0812 23:52:25.903604 2657 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 12 23:52:25.903624 kubelet[2657]: I0812 23:52:25.903635 2657 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:52:25.903832 kubelet[2657]: I0812 23:52:25.903810 2657 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 12 23:52:25.903832 kubelet[2657]: I0812 23:52:25.903820 2657 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 12 23:52:25.903883 kubelet[2657]: I0812 23:52:25.903842 2657 policy_none.go:49] "None policy: Start"
Aug 12 23:52:25.903883 kubelet[2657]: I0812 23:52:25.903852 2657 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 12 23:52:25.903883 kubelet[2657]: I0812 23:52:25.903862 2657 state_mem.go:35] "Initializing new in-memory state store"
Aug 12 23:52:25.903978 kubelet[2657]: I0812 23:52:25.903963 2657 state_mem.go:75] "Updated machine memory state"
Aug 12 23:52:25.908572 kubelet[2657]: I0812 23:52:25.908511 2657 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 12 23:52:25.908770 kubelet[2657]: I0812 23:52:25.908744 2657 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 12 23:52:25.908824 kubelet[2657]: I0812 23:52:25.908763 2657 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 12 23:52:25.909448 kubelet[2657]: I0812 23:52:25.909425 2657 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 12 23:52:25.910418 kubelet[2657]: E0812 23:52:25.910387 2657 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 12 23:52:25.985372 kubelet[2657]: I0812 23:52:25.984641 2657 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 12 23:52:25.985372 kubelet[2657]: I0812 23:52:25.985218 2657 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:52:25.985372 kubelet[2657]: I0812 23:52:25.985310 2657 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:52:25.997987 kubelet[2657]: E0812 23:52:25.997936 2657 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:52:26.002040 kubelet[2657]: E0812 23:52:26.002006 2657 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Aug 12 23:52:26.012354 kubelet[2657]: I0812 23:52:26.012151 2657 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 12 23:52:26.022979 kubelet[2657]: I0812 23:52:26.022924 2657 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Aug 12 23:52:26.023231 kubelet[2657]: I0812 23:52:26.023085 2657 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Aug 12 23:52:26.057807
sudo[2693]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 12 23:52:26.058215 sudo[2693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 12 23:52:26.065823 kubelet[2657]: I0812 23:52:26.065772 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:52:26.065823 kubelet[2657]: I0812 23:52:26.065817 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d82f190b5723cac0a961d98e8c7d928-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5d82f190b5723cac0a961d98e8c7d928\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:26.065823 kubelet[2657]: I0812 23:52:26.065837 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:26.065823 kubelet[2657]: I0812 23:52:26.065854 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:26.065823 kubelet[2657]: I0812 23:52:26.065874 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:26.066390 kubelet[2657]: I0812 23:52:26.065962 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d82f190b5723cac0a961d98e8c7d928-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d82f190b5723cac0a961d98e8c7d928\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:26.066390 kubelet[2657]: I0812 23:52:26.066040 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d82f190b5723cac0a961d98e8c7d928-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5d82f190b5723cac0a961d98e8c7d928\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:26.066390 kubelet[2657]: I0812 23:52:26.066077 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:26.066390 kubelet[2657]: I0812 23:52:26.066092 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:26.588905 sudo[2693]: pam_unix(sudo:session): session closed for user root Aug 12 23:52:26.850702 kubelet[2657]: I0812 23:52:26.850484 2657 
apiserver.go:52] "Watching apiserver" Aug 12 23:52:26.863274 kubelet[2657]: I0812 23:52:26.863150 2657 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 12 23:52:26.891290 kubelet[2657]: I0812 23:52:26.891251 2657 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:26.891883 kubelet[2657]: I0812 23:52:26.891831 2657 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 12 23:52:26.892283 kubelet[2657]: I0812 23:52:26.892243 2657 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:26.900451 kubelet[2657]: E0812 23:52:26.900402 2657 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 12 23:52:26.900718 kubelet[2657]: E0812 23:52:26.900694 2657 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 12 23:52:26.900881 kubelet[2657]: E0812 23:52:26.900861 2657 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:52:26.925749 kubelet[2657]: I0812 23:52:26.925664 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.925623085 podStartE2EDuration="1.925623085s" podCreationTimestamp="2025-08-12 23:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:52:26.914951443 +0000 UTC m=+1.125997229" watchObservedRunningTime="2025-08-12 23:52:26.925623085 +0000 UTC m=+1.136668850" Aug 12 23:52:26.925937 kubelet[2657]: I0812 23:52:26.925825 2657 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.925819287 podStartE2EDuration="2.925819287s" podCreationTimestamp="2025-08-12 23:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:52:26.925772599 +0000 UTC m=+1.136818364" watchObservedRunningTime="2025-08-12 23:52:26.925819287 +0000 UTC m=+1.136865052" Aug 12 23:52:26.934709 kubelet[2657]: I0812 23:52:26.934584 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.9345071689999997 podStartE2EDuration="3.934507169s" podCreationTimestamp="2025-08-12 23:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:52:26.934233279 +0000 UTC m=+1.145279044" watchObservedRunningTime="2025-08-12 23:52:26.934507169 +0000 UTC m=+1.145552934" Aug 12 23:52:28.749298 sudo[1705]: pam_unix(sudo:session): session closed for user root Aug 12 23:52:28.751134 sshd[1704]: Connection closed by 10.0.0.1 port 60914 Aug 12 23:52:28.752120 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Aug 12 23:52:28.757447 systemd[1]: sshd@6-10.0.0.30:22-10.0.0.1:60914.service: Deactivated successfully. Aug 12 23:52:28.760757 systemd[1]: session-7.scope: Deactivated successfully. Aug 12 23:52:28.761379 systemd[1]: session-7.scope: Consumed 5.546s CPU time, 250.8M memory peak. Aug 12 23:52:28.762979 systemd-logind[1505]: Session 7 logged out. Waiting for processes to exit. Aug 12 23:52:28.764069 systemd-logind[1505]: Removed session 7. 
Aug 12 23:52:30.499574 kubelet[2657]: I0812 23:52:30.499525 2657 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 12 23:52:30.500144 containerd[1520]: time="2025-08-12T23:52:30.499934054Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 12 23:52:30.500467 kubelet[2657]: I0812 23:52:30.500186 2657 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 12 23:52:32.031090 systemd[1]: Created slice kubepods-besteffort-poda8f9b42d_af8c_4e18_a75e_9c1b2067dc08.slice - libcontainer container kubepods-besteffort-poda8f9b42d_af8c_4e18_a75e_9c1b2067dc08.slice.
Aug 12 23:52:32.040343 kubelet[2657]: I0812 23:52:32.040300 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8f9b42d-af8c-4e18-a75e-9c1b2067dc08-xtables-lock\") pod \"kube-proxy-hrvlw\" (UID: \"a8f9b42d-af8c-4e18-a75e-9c1b2067dc08\") " pod="kube-system/kube-proxy-hrvlw"
Aug 12 23:52:32.040963 kubelet[2657]: I0812 23:52:32.040361 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cilium-cgroup\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.040963 kubelet[2657]: I0812 23:52:32.040398 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-hostproc\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.040963 kubelet[2657]: I0812 23:52:32.040426 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-etc-cni-netd\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.040963 kubelet[2657]: I0812 23:52:32.040501 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx689\" (UniqueName: \"kubernetes.io/projected/a8f9b42d-af8c-4e18-a75e-9c1b2067dc08-kube-api-access-vx689\") pod \"kube-proxy-hrvlw\" (UID: \"a8f9b42d-af8c-4e18-a75e-9c1b2067dc08\") " pod="kube-system/kube-proxy-hrvlw"
Aug 12 23:52:32.040963 kubelet[2657]: I0812 23:52:32.040539 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cilium-config-path\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.041472 kubelet[2657]: I0812 23:52:32.040604 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztsxn\" (UniqueName: \"kubernetes.io/projected/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-kube-api-access-ztsxn\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.041472 kubelet[2657]: I0812 23:52:32.040654 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8f9b42d-af8c-4e18-a75e-9c1b2067dc08-lib-modules\") pod \"kube-proxy-hrvlw\" (UID: \"a8f9b42d-af8c-4e18-a75e-9c1b2067dc08\") " pod="kube-system/kube-proxy-hrvlw"
Aug 12 23:52:32.041472 kubelet[2657]: I0812 23:52:32.040691 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-lib-modules\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.041472 kubelet[2657]: I0812 23:52:32.040730 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-clustermesh-secrets\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.041472 kubelet[2657]: I0812 23:52:32.040767 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a8f9b42d-af8c-4e18-a75e-9c1b2067dc08-kube-proxy\") pod \"kube-proxy-hrvlw\" (UID: \"a8f9b42d-af8c-4e18-a75e-9c1b2067dc08\") " pod="kube-system/kube-proxy-hrvlw"
Aug 12 23:52:32.041472 kubelet[2657]: I0812 23:52:32.040829 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-hubble-tls\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.042366 kubelet[2657]: I0812 23:52:32.040902 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cni-path\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.042366 kubelet[2657]: I0812 23:52:32.040949 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-xtables-lock\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.042366 kubelet[2657]: I0812 23:52:32.040977 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-host-proc-sys-net\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.042366 kubelet[2657]: I0812 23:52:32.041004 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-bpf-maps\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.042366 kubelet[2657]: I0812 23:52:32.041031 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-host-proc-sys-kernel\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.042366 kubelet[2657]: I0812 23:52:32.041105 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cilium-run\") pod \"cilium-fkd7j\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " pod="kube-system/cilium-fkd7j"
Aug 12 23:52:32.057170 systemd[1]: Created slice kubepods-burstable-pod3df84f06_a8ea_430a_85a8_c86ae28ab4fa.slice - libcontainer container kubepods-burstable-pod3df84f06_a8ea_430a_85a8_c86ae28ab4fa.slice.
Aug 12 23:52:32.326641 systemd[1]: Created slice kubepods-besteffort-pode02750d6_1b41_46be_929d_8c800796b280.slice - libcontainer container kubepods-besteffort-pode02750d6_1b41_46be_929d_8c800796b280.slice.
Aug 12 23:52:32.343037 kubelet[2657]: I0812 23:52:32.342978 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e02750d6-1b41-46be-929d-8c800796b280-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rzsql\" (UID: \"e02750d6-1b41-46be-929d-8c800796b280\") " pod="kube-system/cilium-operator-6c4d7847fc-rzsql"
Aug 12 23:52:32.343037 kubelet[2657]: I0812 23:52:32.343032 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sqdc\" (UniqueName: \"kubernetes.io/projected/e02750d6-1b41-46be-929d-8c800796b280-kube-api-access-9sqdc\") pod \"cilium-operator-6c4d7847fc-rzsql\" (UID: \"e02750d6-1b41-46be-929d-8c800796b280\") " pod="kube-system/cilium-operator-6c4d7847fc-rzsql"
Aug 12 23:52:32.354874 containerd[1520]: time="2025-08-12T23:52:32.354684439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrvlw,Uid:a8f9b42d-af8c-4e18-a75e-9c1b2067dc08,Namespace:kube-system,Attempt:0,}"
Aug 12 23:52:32.362232 containerd[1520]: time="2025-08-12T23:52:32.362146587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fkd7j,Uid:3df84f06-a8ea-430a-85a8-c86ae28ab4fa,Namespace:kube-system,Attempt:0,}"
Aug 12 23:52:32.385267 containerd[1520]: time="2025-08-12T23:52:32.384138066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:52:32.385267 containerd[1520]: time="2025-08-12T23:52:32.384196737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:52:32.385267 containerd[1520]: time="2025-08-12T23:52:32.384207787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:52:32.385267 containerd[1520]: time="2025-08-12T23:52:32.384285143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:52:32.395457 containerd[1520]: time="2025-08-12T23:52:32.395191064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:52:32.395457 containerd[1520]: time="2025-08-12T23:52:32.395254012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:52:32.395457 containerd[1520]: time="2025-08-12T23:52:32.395269392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:52:32.396331 containerd[1520]: time="2025-08-12T23:52:32.396192997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:52:32.538559 systemd[1]: Started cri-containerd-2231f18546bf004900bb1eb7988d3ece9dcbc7f07b8f9cda18fd892ad297f5c0.scope - libcontainer container 2231f18546bf004900bb1eb7988d3ece9dcbc7f07b8f9cda18fd892ad297f5c0.
Aug 12 23:52:32.543966 systemd[1]: Started cri-containerd-240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881.scope - libcontainer container 240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881.
Aug 12 23:52:32.569209 containerd[1520]: time="2025-08-12T23:52:32.569161772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrvlw,Uid:a8f9b42d-af8c-4e18-a75e-9c1b2067dc08,Namespace:kube-system,Attempt:0,} returns sandbox id \"2231f18546bf004900bb1eb7988d3ece9dcbc7f07b8f9cda18fd892ad297f5c0\""
Aug 12 23:52:32.569915 containerd[1520]: time="2025-08-12T23:52:32.569883687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fkd7j,Uid:3df84f06-a8ea-430a-85a8-c86ae28ab4fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\""
Aug 12 23:52:32.572115 containerd[1520]: time="2025-08-12T23:52:32.572083997Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 12 23:52:32.572199 containerd[1520]: time="2025-08-12T23:52:32.572111068Z" level=info msg="CreateContainer within sandbox \"2231f18546bf004900bb1eb7988d3ece9dcbc7f07b8f9cda18fd892ad297f5c0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 12 23:52:32.590282 containerd[1520]: time="2025-08-12T23:52:32.590185150Z" level=info msg="CreateContainer within sandbox \"2231f18546bf004900bb1eb7988d3ece9dcbc7f07b8f9cda18fd892ad297f5c0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1f143ce443458b5b4e5ac53e693b18d6b03b732382f67182f06d23f02455bbf7\""
Aug 12 23:52:32.591013 containerd[1520]: time="2025-08-12T23:52:32.590993238Z" level=info msg="StartContainer for \"1f143ce443458b5b4e5ac53e693b18d6b03b732382f67182f06d23f02455bbf7\""
Aug 12 23:52:32.622210 systemd[1]: Started cri-containerd-1f143ce443458b5b4e5ac53e693b18d6b03b732382f67182f06d23f02455bbf7.scope - libcontainer container 1f143ce443458b5b4e5ac53e693b18d6b03b732382f67182f06d23f02455bbf7.
Aug 12 23:52:32.630938 containerd[1520]: time="2025-08-12T23:52:32.630895880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rzsql,Uid:e02750d6-1b41-46be-929d-8c800796b280,Namespace:kube-system,Attempt:0,}"
Aug 12 23:52:32.658420 containerd[1520]: time="2025-08-12T23:52:32.658362991Z" level=info msg="StartContainer for \"1f143ce443458b5b4e5ac53e693b18d6b03b732382f67182f06d23f02455bbf7\" returns successfully"
Aug 12 23:52:32.663246 containerd[1520]: time="2025-08-12T23:52:32.662951026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:52:32.663246 containerd[1520]: time="2025-08-12T23:52:32.663016359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:52:32.663246 containerd[1520]: time="2025-08-12T23:52:32.663029063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:52:32.663246 containerd[1520]: time="2025-08-12T23:52:32.663146886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:52:32.696356 systemd[1]: Started cri-containerd-6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4.scope - libcontainer container 6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4.
Aug 12 23:52:32.745017 containerd[1520]: time="2025-08-12T23:52:32.744959976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rzsql,Uid:e02750d6-1b41-46be-929d-8c800796b280,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\""
Aug 12 23:52:39.858597 kubelet[2657]: I0812 23:52:39.858514 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hrvlw" podStartSLOduration=8.857902858 podStartE2EDuration="8.857902858s" podCreationTimestamp="2025-08-12 23:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:52:32.914552697 +0000 UTC m=+7.125598462" watchObservedRunningTime="2025-08-12 23:52:39.857902858 +0000 UTC m=+14.068948623"
Aug 12 23:52:40.110872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount694757097.mount: Deactivated successfully.
Aug 12 23:52:52.571185 containerd[1520]: time="2025-08-12T23:52:52.568752080Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:52.571862 containerd[1520]: time="2025-08-12T23:52:52.571715810Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Aug 12 23:52:52.571908 containerd[1520]: time="2025-08-12T23:52:52.571861814Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:52:52.578078 containerd[1520]: time="2025-08-12T23:52:52.575205319Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 20.003077369s"
Aug 12 23:52:52.578078 containerd[1520]: time="2025-08-12T23:52:52.577184606Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 12 23:52:52.583025 containerd[1520]: time="2025-08-12T23:52:52.582265722Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 12 23:52:52.586202 containerd[1520]: time="2025-08-12T23:52:52.585477048Z" level=info msg="CreateContainer within sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 12 23:52:52.673616 containerd[1520]: time="2025-08-12T23:52:52.673395351Z" level=info msg="CreateContainer within sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489\""
Aug 12 23:52:52.676901 containerd[1520]: time="2025-08-12T23:52:52.676828245Z" level=info msg="StartContainer for \"5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489\""
Aug 12 23:52:52.771568 systemd[1]: Started cri-containerd-5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489.scope - libcontainer container 5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489.
Aug 12 23:52:52.874385 containerd[1520]: time="2025-08-12T23:52:52.871333359Z" level=info msg="StartContainer for \"5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489\" returns successfully"
Aug 12 23:52:52.897613 systemd[1]: cri-containerd-5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489.scope: Deactivated successfully.
Aug 12 23:52:53.629766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489-rootfs.mount: Deactivated successfully.
Aug 12 23:52:53.752200 containerd[1520]: time="2025-08-12T23:52:53.750294475Z" level=info msg="shim disconnected" id=5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489 namespace=k8s.io
Aug 12 23:52:53.752200 containerd[1520]: time="2025-08-12T23:52:53.750374385Z" level=warning msg="cleaning up after shim disconnected" id=5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489 namespace=k8s.io
Aug 12 23:52:53.752200 containerd[1520]: time="2025-08-12T23:52:53.750387500Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:52:53.794845 containerd[1520]: time="2025-08-12T23:52:53.794691042Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:52:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 12 23:52:54.025494 containerd[1520]: time="2025-08-12T23:52:54.022209013Z" level=info msg="CreateContainer within sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 12 23:52:54.133991 containerd[1520]: time="2025-08-12T23:52:54.133931522Z" level=info msg="CreateContainer within sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030\""
Aug 12 23:52:54.135658 containerd[1520]: time="2025-08-12T23:52:54.135138303Z" level=info msg="StartContainer for \"84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030\""
Aug 12 23:52:54.200386 systemd[1]: Started cri-containerd-84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030.scope - libcontainer container 84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030.
Aug 12 23:52:54.318240 containerd[1520]: time="2025-08-12T23:52:54.316188951Z" level=info msg="StartContainer for \"84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030\" returns successfully" Aug 12 23:52:54.368908 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:52:54.369185 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:52:54.377753 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:52:54.398119 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:52:54.398772 systemd[1]: cri-containerd-84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030.scope: Deactivated successfully. Aug 12 23:52:54.451658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:52:54.478145 containerd[1520]: time="2025-08-12T23:52:54.475835719Z" level=info msg="shim disconnected" id=84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030 namespace=k8s.io Aug 12 23:52:54.478145 containerd[1520]: time="2025-08-12T23:52:54.475898968Z" level=warning msg="cleaning up after shim disconnected" id=84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030 namespace=k8s.io Aug 12 23:52:54.478145 containerd[1520]: time="2025-08-12T23:52:54.475909417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:52:54.629904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030-rootfs.mount: Deactivated successfully. Aug 12 23:52:54.712789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343550862.mount: Deactivated successfully. 
Aug 12 23:52:55.042420 containerd[1520]: time="2025-08-12T23:52:55.042363104Z" level=info msg="CreateContainer within sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 12 23:52:55.281378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795761187.mount: Deactivated successfully. Aug 12 23:52:55.316397 containerd[1520]: time="2025-08-12T23:52:55.314195177Z" level=info msg="CreateContainer within sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502\"" Aug 12 23:52:55.319842 containerd[1520]: time="2025-08-12T23:52:55.317848173Z" level=info msg="StartContainer for \"bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502\"" Aug 12 23:52:55.415337 systemd[1]: Started cri-containerd-bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502.scope - libcontainer container bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502. Aug 12 23:52:55.486928 containerd[1520]: time="2025-08-12T23:52:55.486420584Z" level=info msg="StartContainer for \"bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502\" returns successfully" Aug 12 23:52:55.494934 systemd[1]: cri-containerd-bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502.scope: Deactivated successfully. 
Aug 12 23:52:55.650227 containerd[1520]: time="2025-08-12T23:52:55.646440889Z" level=info msg="shim disconnected" id=bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502 namespace=k8s.io Aug 12 23:52:55.650227 containerd[1520]: time="2025-08-12T23:52:55.646639794Z" level=warning msg="cleaning up after shim disconnected" id=bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502 namespace=k8s.io Aug 12 23:52:55.650227 containerd[1520]: time="2025-08-12T23:52:55.646658318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:52:56.069157 containerd[1520]: time="2025-08-12T23:52:56.068882414Z" level=info msg="CreateContainer within sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 12 23:52:56.151641 containerd[1520]: time="2025-08-12T23:52:56.151591303Z" level=info msg="CreateContainer within sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc\"" Aug 12 23:52:56.153021 containerd[1520]: time="2025-08-12T23:52:56.152945381Z" level=info msg="StartContainer for \"cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc\"" Aug 12 23:52:56.238313 systemd[1]: Started cri-containerd-cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc.scope - libcontainer container cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc. Aug 12 23:52:56.339220 systemd[1]: cri-containerd-cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc.scope: Deactivated successfully. 
Aug 12 23:52:56.360602 containerd[1520]: time="2025-08-12T23:52:56.356343318Z" level=info msg="StartContainer for \"cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc\" returns successfully" Aug 12 23:52:56.579535 containerd[1520]: time="2025-08-12T23:52:56.578936560Z" level=info msg="shim disconnected" id=cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc namespace=k8s.io Aug 12 23:52:56.579535 containerd[1520]: time="2025-08-12T23:52:56.578992605Z" level=warning msg="cleaning up after shim disconnected" id=cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc namespace=k8s.io Aug 12 23:52:56.579535 containerd[1520]: time="2025-08-12T23:52:56.579003396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:52:56.631608 containerd[1520]: time="2025-08-12T23:52:56.629588749Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:52:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 12 23:52:56.634437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc-rootfs.mount: Deactivated successfully. 
Aug 12 23:52:56.813212 containerd[1520]: time="2025-08-12T23:52:56.813127530Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:56.814521 containerd[1520]: time="2025-08-12T23:52:56.814418099Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 12 23:52:56.815872 containerd[1520]: time="2025-08-12T23:52:56.815821571Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:52:56.819260 containerd[1520]: time="2025-08-12T23:52:56.819151607Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.23684588s" Aug 12 23:52:56.819260 containerd[1520]: time="2025-08-12T23:52:56.819240225Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 12 23:52:56.825567 containerd[1520]: time="2025-08-12T23:52:56.825383927Z" level=info msg="CreateContainer within sandbox \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 12 23:52:56.884663 containerd[1520]: time="2025-08-12T23:52:56.881214150Z" level=info msg="CreateContainer within sandbox 
\"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\"" Aug 12 23:52:56.884663 containerd[1520]: time="2025-08-12T23:52:56.883362653Z" level=info msg="StartContainer for \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\"" Aug 12 23:52:56.963408 systemd[1]: Started cri-containerd-c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02.scope - libcontainer container c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02. Aug 12 23:52:57.045534 containerd[1520]: time="2025-08-12T23:52:57.045332946Z" level=info msg="StartContainer for \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\" returns successfully" Aug 12 23:52:57.079608 containerd[1520]: time="2025-08-12T23:52:57.077018805Z" level=info msg="CreateContainer within sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 12 23:52:57.089558 kubelet[2657]: I0812 23:52:57.089365 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rzsql" podStartSLOduration=2.013948566 podStartE2EDuration="26.089341004s" podCreationTimestamp="2025-08-12 23:52:31 +0000 UTC" firstStartedPulling="2025-08-12 23:52:32.746397023 +0000 UTC m=+6.957442789" lastFinishedPulling="2025-08-12 23:52:56.821789462 +0000 UTC m=+31.032835227" observedRunningTime="2025-08-12 23:52:57.088932295 +0000 UTC m=+31.299978090" watchObservedRunningTime="2025-08-12 23:52:57.089341004 +0000 UTC m=+31.300386779" Aug 12 23:52:57.136761 containerd[1520]: time="2025-08-12T23:52:57.136331604Z" level=info msg="CreateContainer within sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\"" Aug 12 23:52:57.139387 containerd[1520]: time="2025-08-12T23:52:57.139223105Z" level=info msg="StartContainer for \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\"" Aug 12 23:52:57.241797 systemd[1]: Started cri-containerd-d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21.scope - libcontainer container d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21. Aug 12 23:52:57.594096 containerd[1520]: time="2025-08-12T23:52:57.591494228Z" level=info msg="StartContainer for \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\" returns successfully" Aug 12 23:52:57.933742 kubelet[2657]: I0812 23:52:57.933707 2657 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 12 23:52:58.126309 systemd[1]: Created slice kubepods-burstable-podc374d78e_cdd6_4178_8cb7_d1e61b540f01.slice - libcontainer container kubepods-burstable-podc374d78e_cdd6_4178_8cb7_d1e61b540f01.slice. Aug 12 23:52:58.151770 systemd[1]: Created slice kubepods-burstable-pod58aa30d4_197f_4d54_8b69_5b388876d19e.slice - libcontainer container kubepods-burstable-pod58aa30d4_197f_4d54_8b69_5b388876d19e.slice. 
Aug 12 23:52:58.183095 kubelet[2657]: I0812 23:52:58.182345 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c374d78e-cdd6-4178-8cb7-d1e61b540f01-config-volume\") pod \"coredns-668d6bf9bc-k5mjn\" (UID: \"c374d78e-cdd6-4178-8cb7-d1e61b540f01\") " pod="kube-system/coredns-668d6bf9bc-k5mjn" Aug 12 23:52:58.183095 kubelet[2657]: I0812 23:52:58.182415 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58aa30d4-197f-4d54-8b69-5b388876d19e-config-volume\") pod \"coredns-668d6bf9bc-frblj\" (UID: \"58aa30d4-197f-4d54-8b69-5b388876d19e\") " pod="kube-system/coredns-668d6bf9bc-frblj" Aug 12 23:52:58.183095 kubelet[2657]: I0812 23:52:58.182450 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p4fg\" (UniqueName: \"kubernetes.io/projected/58aa30d4-197f-4d54-8b69-5b388876d19e-kube-api-access-6p4fg\") pod \"coredns-668d6bf9bc-frblj\" (UID: \"58aa30d4-197f-4d54-8b69-5b388876d19e\") " pod="kube-system/coredns-668d6bf9bc-frblj" Aug 12 23:52:58.183095 kubelet[2657]: I0812 23:52:58.182483 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cldvl\" (UniqueName: \"kubernetes.io/projected/c374d78e-cdd6-4178-8cb7-d1e61b540f01-kube-api-access-cldvl\") pod \"coredns-668d6bf9bc-k5mjn\" (UID: \"c374d78e-cdd6-4178-8cb7-d1e61b540f01\") " pod="kube-system/coredns-668d6bf9bc-k5mjn" Aug 12 23:52:58.457826 containerd[1520]: time="2025-08-12T23:52:58.457353916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k5mjn,Uid:c374d78e-cdd6-4178-8cb7-d1e61b540f01,Namespace:kube-system,Attempt:0,}" Aug 12 23:52:58.468684 containerd[1520]: time="2025-08-12T23:52:58.463735264Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-frblj,Uid:58aa30d4-197f-4d54-8b69-5b388876d19e,Namespace:kube-system,Attempt:0,}" Aug 12 23:53:01.035387 systemd-networkd[1428]: cilium_host: Link UP Aug 12 23:53:01.035735 systemd-networkd[1428]: cilium_net: Link UP Aug 12 23:53:01.036146 systemd-networkd[1428]: cilium_net: Gained carrier Aug 12 23:53:01.036551 systemd-networkd[1428]: cilium_host: Gained carrier Aug 12 23:53:01.036778 systemd-networkd[1428]: cilium_net: Gained IPv6LL Aug 12 23:53:01.038104 systemd-networkd[1428]: cilium_host: Gained IPv6LL Aug 12 23:53:01.248141 systemd-networkd[1428]: cilium_vxlan: Link UP Aug 12 23:53:01.248362 systemd-networkd[1428]: cilium_vxlan: Gained carrier Aug 12 23:53:01.757216 kernel: NET: Registered PF_ALG protocol family Aug 12 23:53:02.778106 systemd-networkd[1428]: lxc_health: Link UP Aug 12 23:53:02.783173 systemd-networkd[1428]: lxc_health: Gained carrier Aug 12 23:53:03.102380 systemd-networkd[1428]: cilium_vxlan: Gained IPv6LL Aug 12 23:53:03.282402 kernel: eth0: renamed from tmp0fefa Aug 12 23:53:03.281507 systemd-networkd[1428]: lxc135799ff4286: Link UP Aug 12 23:53:03.289238 systemd-networkd[1428]: lxc135799ff4286: Gained carrier Aug 12 23:53:03.313107 systemd-networkd[1428]: lxc5760e6d79c6c: Link UP Aug 12 23:53:03.325174 kernel: eth0: renamed from tmp82f8c Aug 12 23:53:03.333297 systemd-networkd[1428]: lxc5760e6d79c6c: Gained carrier Aug 12 23:53:04.067445 systemd-networkd[1428]: lxc_health: Gained IPv6LL Aug 12 23:53:04.490949 kubelet[2657]: I0812 23:53:04.484210 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fkd7j" podStartSLOduration=13.474178151 podStartE2EDuration="33.484181497s" podCreationTimestamp="2025-08-12 23:52:31 +0000 UTC" firstStartedPulling="2025-08-12 23:52:32.571221867 +0000 UTC m=+6.782267632" lastFinishedPulling="2025-08-12 23:52:52.581225213 +0000 UTC m=+26.792270978" observedRunningTime="2025-08-12 23:52:58.353768677 +0000 UTC m=+32.564814452" 
watchObservedRunningTime="2025-08-12 23:53:04.484181497 +0000 UTC m=+38.695227272" Aug 12 23:53:05.086446 systemd-networkd[1428]: lxc5760e6d79c6c: Gained IPv6LL Aug 12 23:53:05.150426 systemd-networkd[1428]: lxc135799ff4286: Gained IPv6LL Aug 12 23:53:06.673418 systemd[1]: Started sshd@7-10.0.0.30:22-10.0.0.1:58042.service - OpenSSH per-connection server daemon (10.0.0.1:58042). Aug 12 23:53:06.739619 sshd[3866]: Accepted publickey for core from 10.0.0.1 port 58042 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:06.741740 sshd-session[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:06.749138 systemd-logind[1505]: New session 8 of user core. Aug 12 23:53:06.763371 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 12 23:53:07.012263 sshd[3870]: Connection closed by 10.0.0.1 port 58042 Aug 12 23:53:07.014105 sshd-session[3866]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:07.018809 systemd[1]: sshd@7-10.0.0.30:22-10.0.0.1:58042.service: Deactivated successfully. Aug 12 23:53:07.021359 systemd[1]: session-8.scope: Deactivated successfully. Aug 12 23:53:07.022140 systemd-logind[1505]: Session 8 logged out. Waiting for processes to exit. Aug 12 23:53:07.023523 systemd-logind[1505]: Removed session 8. Aug 12 23:53:08.075467 containerd[1520]: time="2025-08-12T23:53:08.075282353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:53:08.075467 containerd[1520]: time="2025-08-12T23:53:08.075399263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:53:08.075467 containerd[1520]: time="2025-08-12T23:53:08.075417728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:53:08.075976 containerd[1520]: time="2025-08-12T23:53:08.075549405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:53:08.081037 containerd[1520]: time="2025-08-12T23:53:08.080877436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:53:08.081037 containerd[1520]: time="2025-08-12T23:53:08.080943471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:53:08.081037 containerd[1520]: time="2025-08-12T23:53:08.080974529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:53:08.084089 containerd[1520]: time="2025-08-12T23:53:08.082394840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:53:08.112210 systemd[1]: Started cri-containerd-0fefa2ee2386ca2866490a638291c5c849691e9884ef5d8bc0f0279015c73855.scope - libcontainer container 0fefa2ee2386ca2866490a638291c5c849691e9884ef5d8bc0f0279015c73855. Aug 12 23:53:08.114315 systemd[1]: Started cri-containerd-82f8cd7703d1b1da48e736cddc499022188d32d400c29ad97a0a075285775b6e.scope - libcontainer container 82f8cd7703d1b1da48e736cddc499022188d32d400c29ad97a0a075285775b6e. 
Aug 12 23:53:08.131332 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:53:08.131357 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:53:08.163963 containerd[1520]: time="2025-08-12T23:53:08.163899310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-frblj,Uid:58aa30d4-197f-4d54-8b69-5b388876d19e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fefa2ee2386ca2866490a638291c5c849691e9884ef5d8bc0f0279015c73855\"" Aug 12 23:53:08.166984 containerd[1520]: time="2025-08-12T23:53:08.166930951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k5mjn,Uid:c374d78e-cdd6-4178-8cb7-d1e61b540f01,Namespace:kube-system,Attempt:0,} returns sandbox id \"82f8cd7703d1b1da48e736cddc499022188d32d400c29ad97a0a075285775b6e\"" Aug 12 23:53:08.168530 containerd[1520]: time="2025-08-12T23:53:08.168187865Z" level=info msg="CreateContainer within sandbox \"0fefa2ee2386ca2866490a638291c5c849691e9884ef5d8bc0f0279015c73855\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:53:08.172193 containerd[1520]: time="2025-08-12T23:53:08.172132073Z" level=info msg="CreateContainer within sandbox \"82f8cd7703d1b1da48e736cddc499022188d32d400c29ad97a0a075285775b6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:53:08.192070 containerd[1520]: time="2025-08-12T23:53:08.192007044Z" level=info msg="CreateContainer within sandbox \"0fefa2ee2386ca2866490a638291c5c849691e9884ef5d8bc0f0279015c73855\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01889278723ca29a84375f01160603d4c6503b5529376d812a45bab8e70008c6\"" Aug 12 23:53:08.192706 containerd[1520]: time="2025-08-12T23:53:08.192644141Z" level=info msg="StartContainer for \"01889278723ca29a84375f01160603d4c6503b5529376d812a45bab8e70008c6\"" Aug 12 23:53:08.198542 
containerd[1520]: time="2025-08-12T23:53:08.198483464Z" level=info msg="CreateContainer within sandbox \"82f8cd7703d1b1da48e736cddc499022188d32d400c29ad97a0a075285775b6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78ba33b4502a7f3a5c2eb693af1b013275a5622498060a88cf7b04a69e6a2b0f\"" Aug 12 23:53:08.198987 containerd[1520]: time="2025-08-12T23:53:08.198954190Z" level=info msg="StartContainer for \"78ba33b4502a7f3a5c2eb693af1b013275a5622498060a88cf7b04a69e6a2b0f\"" Aug 12 23:53:08.226216 systemd[1]: Started cri-containerd-01889278723ca29a84375f01160603d4c6503b5529376d812a45bab8e70008c6.scope - libcontainer container 01889278723ca29a84375f01160603d4c6503b5529376d812a45bab8e70008c6. Aug 12 23:53:08.235285 systemd[1]: Started cri-containerd-78ba33b4502a7f3a5c2eb693af1b013275a5622498060a88cf7b04a69e6a2b0f.scope - libcontainer container 78ba33b4502a7f3a5c2eb693af1b013275a5622498060a88cf7b04a69e6a2b0f. Aug 12 23:53:08.275027 containerd[1520]: time="2025-08-12T23:53:08.274971920Z" level=info msg="StartContainer for \"78ba33b4502a7f3a5c2eb693af1b013275a5622498060a88cf7b04a69e6a2b0f\" returns successfully" Aug 12 23:53:08.275164 containerd[1520]: time="2025-08-12T23:53:08.274999963Z" level=info msg="StartContainer for \"01889278723ca29a84375f01160603d4c6503b5529376d812a45bab8e70008c6\" returns successfully" Aug 12 23:53:09.081858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986360192.mount: Deactivated successfully. 
Aug 12 23:53:09.302386 kubelet[2657]: I0812 23:53:09.302317 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-frblj" podStartSLOduration=38.302295952 podStartE2EDuration="38.302295952s" podCreationTimestamp="2025-08-12 23:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:53:09.249342313 +0000 UTC m=+43.460388078" watchObservedRunningTime="2025-08-12 23:53:09.302295952 +0000 UTC m=+43.513341717" Aug 12 23:53:09.303477 kubelet[2657]: I0812 23:53:09.302695 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k5mjn" podStartSLOduration=37.302691005 podStartE2EDuration="37.302691005s" podCreationTimestamp="2025-08-12 23:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:53:09.30211406 +0000 UTC m=+43.513159825" watchObservedRunningTime="2025-08-12 23:53:09.302691005 +0000 UTC m=+43.513736780" Aug 12 23:53:12.029685 systemd[1]: Started sshd@8-10.0.0.30:22-10.0.0.1:57490.service - OpenSSH per-connection server daemon (10.0.0.1:57490). Aug 12 23:53:12.075017 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 57490 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:12.076619 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:12.081120 systemd-logind[1505]: New session 9 of user core. Aug 12 23:53:12.096208 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 12 23:53:12.340665 sshd[4057]: Connection closed by 10.0.0.1 port 57490 Aug 12 23:53:12.341032 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:12.344650 systemd[1]: sshd@8-10.0.0.30:22-10.0.0.1:57490.service: Deactivated successfully. 
Aug 12 23:53:12.347065 systemd[1]: session-9.scope: Deactivated successfully. Aug 12 23:53:12.348852 systemd-logind[1505]: Session 9 logged out. Waiting for processes to exit. Aug 12 23:53:12.349781 systemd-logind[1505]: Removed session 9. Aug 12 23:53:17.355718 systemd[1]: Started sshd@9-10.0.0.30:22-10.0.0.1:57494.service - OpenSSH per-connection server daemon (10.0.0.1:57494). Aug 12 23:53:17.394433 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 57494 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:17.396141 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:17.401138 systemd-logind[1505]: New session 10 of user core. Aug 12 23:53:17.418357 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 12 23:53:17.540028 sshd[4073]: Connection closed by 10.0.0.1 port 57494 Aug 12 23:53:17.540410 sshd-session[4071]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:17.544408 systemd[1]: sshd@9-10.0.0.30:22-10.0.0.1:57494.service: Deactivated successfully. Aug 12 23:53:17.546566 systemd[1]: session-10.scope: Deactivated successfully. Aug 12 23:53:17.547314 systemd-logind[1505]: Session 10 logged out. Waiting for processes to exit. Aug 12 23:53:17.548381 systemd-logind[1505]: Removed session 10. Aug 12 23:53:22.553696 systemd[1]: Started sshd@10-10.0.0.30:22-10.0.0.1:52434.service - OpenSSH per-connection server daemon (10.0.0.1:52434). Aug 12 23:53:22.593223 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 52434 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:22.594866 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:22.599552 systemd-logind[1505]: New session 11 of user core. Aug 12 23:53:22.609209 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 12 23:53:22.741658 sshd[4092]: Connection closed by 10.0.0.1 port 52434 Aug 12 23:53:22.742104 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:22.747310 systemd[1]: sshd@10-10.0.0.30:22-10.0.0.1:52434.service: Deactivated successfully. Aug 12 23:53:22.750399 systemd[1]: session-11.scope: Deactivated successfully. Aug 12 23:53:22.751266 systemd-logind[1505]: Session 11 logged out. Waiting for processes to exit. Aug 12 23:53:22.752226 systemd-logind[1505]: Removed session 11. Aug 12 23:53:27.761392 systemd[1]: Started sshd@11-10.0.0.30:22-10.0.0.1:52446.service - OpenSSH per-connection server daemon (10.0.0.1:52446). Aug 12 23:53:27.806691 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 52446 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:27.808496 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:27.813552 systemd-logind[1505]: New session 12 of user core. Aug 12 23:53:27.824188 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 12 23:53:27.968465 sshd[4111]: Connection closed by 10.0.0.1 port 52446 Aug 12 23:53:27.968851 sshd-session[4109]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:27.981316 systemd[1]: sshd@11-10.0.0.30:22-10.0.0.1:52446.service: Deactivated successfully. Aug 12 23:53:27.983478 systemd[1]: session-12.scope: Deactivated successfully. Aug 12 23:53:27.984351 systemd-logind[1505]: Session 12 logged out. Waiting for processes to exit. Aug 12 23:53:27.991346 systemd[1]: Started sshd@12-10.0.0.30:22-10.0.0.1:52454.service - OpenSSH per-connection server daemon (10.0.0.1:52454). Aug 12 23:53:27.992228 systemd-logind[1505]: Removed session 12. 
Aug 12 23:53:28.029273 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 52454 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:28.031322 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:28.036246 systemd-logind[1505]: New session 13 of user core. Aug 12 23:53:28.046211 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 12 23:53:28.452725 sshd[4127]: Connection closed by 10.0.0.1 port 52454 Aug 12 23:53:28.453266 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:28.462077 systemd[1]: sshd@12-10.0.0.30:22-10.0.0.1:52454.service: Deactivated successfully. Aug 12 23:53:28.464008 systemd[1]: session-13.scope: Deactivated successfully. Aug 12 23:53:28.465847 systemd-logind[1505]: Session 13 logged out. Waiting for processes to exit. Aug 12 23:53:28.472387 systemd[1]: Started sshd@13-10.0.0.30:22-10.0.0.1:52470.service - OpenSSH per-connection server daemon (10.0.0.1:52470). Aug 12 23:53:28.473420 systemd-logind[1505]: Removed session 13. Aug 12 23:53:28.511611 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 52470 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:28.513341 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:28.518154 systemd-logind[1505]: New session 14 of user core. Aug 12 23:53:28.531189 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 12 23:53:28.682796 sshd[4140]: Connection closed by 10.0.0.1 port 52470 Aug 12 23:53:28.683225 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:28.687393 systemd[1]: sshd@13-10.0.0.30:22-10.0.0.1:52470.service: Deactivated successfully. Aug 12 23:53:28.689857 systemd[1]: session-14.scope: Deactivated successfully. Aug 12 23:53:28.690566 systemd-logind[1505]: Session 14 logged out. Waiting for processes to exit. 
Aug 12 23:53:28.691410 systemd-logind[1505]: Removed session 14. Aug 12 23:53:33.702217 systemd[1]: Started sshd@14-10.0.0.30:22-10.0.0.1:33464.service - OpenSSH per-connection server daemon (10.0.0.1:33464). Aug 12 23:53:33.740905 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 33464 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:33.743098 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:33.748908 systemd-logind[1505]: New session 15 of user core. Aug 12 23:53:33.758292 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 12 23:53:33.876327 sshd[4157]: Connection closed by 10.0.0.1 port 33464 Aug 12 23:53:33.876798 sshd-session[4155]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:33.881355 systemd[1]: sshd@14-10.0.0.30:22-10.0.0.1:33464.service: Deactivated successfully. Aug 12 23:53:33.884076 systemd[1]: session-15.scope: Deactivated successfully. Aug 12 23:53:33.885158 systemd-logind[1505]: Session 15 logged out. Waiting for processes to exit. Aug 12 23:53:33.886230 systemd-logind[1505]: Removed session 15. Aug 12 23:53:38.932725 systemd[1]: Started sshd@15-10.0.0.30:22-10.0.0.1:33472.service - OpenSSH per-connection server daemon (10.0.0.1:33472). Aug 12 23:53:39.008511 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 33472 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:39.014285 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:39.047437 systemd-logind[1505]: New session 16 of user core. Aug 12 23:53:39.066513 systemd[1]: Started session-16.scope - Session 16 of User core. 
Aug 12 23:53:39.357976 sshd[4172]: Connection closed by 10.0.0.1 port 33472 Aug 12 23:53:39.359323 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:39.391592 systemd[1]: sshd@15-10.0.0.30:22-10.0.0.1:33472.service: Deactivated successfully. Aug 12 23:53:39.402157 systemd[1]: session-16.scope: Deactivated successfully. Aug 12 23:53:39.405930 systemd-logind[1505]: Session 16 logged out. Waiting for processes to exit. Aug 12 23:53:39.407464 systemd-logind[1505]: Removed session 16. Aug 12 23:53:44.424651 systemd[1]: Started sshd@16-10.0.0.30:22-10.0.0.1:46208.service - OpenSSH per-connection server daemon (10.0.0.1:46208). Aug 12 23:53:44.510473 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 46208 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:44.512028 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:44.536280 systemd-logind[1505]: New session 17 of user core. Aug 12 23:53:44.546324 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 12 23:53:44.862875 sshd[4187]: Connection closed by 10.0.0.1 port 46208 Aug 12 23:53:44.863839 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:44.885615 systemd[1]: sshd@16-10.0.0.30:22-10.0.0.1:46208.service: Deactivated successfully. Aug 12 23:53:44.893670 systemd[1]: session-17.scope: Deactivated successfully. Aug 12 23:53:44.895270 systemd-logind[1505]: Session 17 logged out. Waiting for processes to exit. Aug 12 23:53:44.910438 systemd[1]: Started sshd@17-10.0.0.30:22-10.0.0.1:46210.service - OpenSSH per-connection server daemon (10.0.0.1:46210). Aug 12 23:53:44.911468 systemd-logind[1505]: Removed session 17. 
Aug 12 23:53:44.974295 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 46210 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:44.976243 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:44.985237 systemd-logind[1505]: New session 18 of user core. Aug 12 23:53:44.999294 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 12 23:53:45.864720 sshd[4202]: Connection closed by 10.0.0.1 port 46210 Aug 12 23:53:45.862984 sshd-session[4199]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:45.887940 systemd[1]: sshd@17-10.0.0.30:22-10.0.0.1:46210.service: Deactivated successfully. Aug 12 23:53:45.890863 systemd[1]: session-18.scope: Deactivated successfully. Aug 12 23:53:45.895949 systemd-logind[1505]: Session 18 logged out. Waiting for processes to exit. Aug 12 23:53:45.908750 systemd[1]: Started sshd@18-10.0.0.30:22-10.0.0.1:46224.service - OpenSSH per-connection server daemon (10.0.0.1:46224). Aug 12 23:53:45.915376 systemd-logind[1505]: Removed session 18. Aug 12 23:53:46.006152 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 46224 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:46.007082 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:46.017019 systemd-logind[1505]: New session 19 of user core. Aug 12 23:53:46.027811 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 12 23:53:47.148935 sshd[4216]: Connection closed by 10.0.0.1 port 46224 Aug 12 23:53:47.152550 sshd-session[4213]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:47.209374 systemd[1]: Started sshd@19-10.0.0.30:22-10.0.0.1:46230.service - OpenSSH per-connection server daemon (10.0.0.1:46230). Aug 12 23:53:47.212687 systemd[1]: sshd@18-10.0.0.30:22-10.0.0.1:46224.service: Deactivated successfully. 
Aug 12 23:53:47.222966 systemd[1]: session-19.scope: Deactivated successfully. Aug 12 23:53:47.228848 systemd-logind[1505]: Session 19 logged out. Waiting for processes to exit. Aug 12 23:53:47.235579 systemd-logind[1505]: Removed session 19. Aug 12 23:53:47.328676 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 46230 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:47.329999 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:47.355313 systemd-logind[1505]: New session 20 of user core. Aug 12 23:53:47.368342 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 12 23:53:48.097797 sshd[4237]: Connection closed by 10.0.0.1 port 46230 Aug 12 23:53:48.099314 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:48.119902 systemd[1]: sshd@19-10.0.0.30:22-10.0.0.1:46230.service: Deactivated successfully. Aug 12 23:53:48.124313 systemd[1]: session-20.scope: Deactivated successfully. Aug 12 23:53:48.131318 systemd-logind[1505]: Session 20 logged out. Waiting for processes to exit. Aug 12 23:53:48.145517 systemd[1]: Started sshd@20-10.0.0.30:22-10.0.0.1:46240.service - OpenSSH per-connection server daemon (10.0.0.1:46240). Aug 12 23:53:48.150840 systemd-logind[1505]: Removed session 20. Aug 12 23:53:48.212745 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 46240 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:48.221558 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:48.242419 systemd-logind[1505]: New session 21 of user core. Aug 12 23:53:48.258452 systemd[1]: Started session-21.scope - Session 21 of User core. 
Aug 12 23:53:48.482506 sshd[4250]: Connection closed by 10.0.0.1 port 46240 Aug 12 23:53:48.479970 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:48.490220 systemd[1]: sshd@20-10.0.0.30:22-10.0.0.1:46240.service: Deactivated successfully. Aug 12 23:53:48.501470 systemd[1]: session-21.scope: Deactivated successfully. Aug 12 23:53:48.504567 systemd-logind[1505]: Session 21 logged out. Waiting for processes to exit. Aug 12 23:53:48.512802 systemd-logind[1505]: Removed session 21. Aug 12 23:53:53.527504 systemd[1]: Started sshd@21-10.0.0.30:22-10.0.0.1:50068.service - OpenSSH per-connection server daemon (10.0.0.1:50068). Aug 12 23:53:53.602870 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 50068 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:53.605732 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:53.619664 systemd-logind[1505]: New session 22 of user core. Aug 12 23:53:53.634218 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 12 23:53:53.862842 sshd[4265]: Connection closed by 10.0.0.1 port 50068 Aug 12 23:53:53.863688 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:53.872578 systemd[1]: sshd@21-10.0.0.30:22-10.0.0.1:50068.service: Deactivated successfully. Aug 12 23:53:53.876644 systemd[1]: session-22.scope: Deactivated successfully. Aug 12 23:53:53.886655 systemd-logind[1505]: Session 22 logged out. Waiting for processes to exit. Aug 12 23:53:53.892497 systemd-logind[1505]: Removed session 22. Aug 12 23:53:58.877858 systemd[1]: Started sshd@22-10.0.0.30:22-10.0.0.1:50076.service - OpenSSH per-connection server daemon (10.0.0.1:50076). 
Aug 12 23:53:58.928863 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 50076 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:53:58.930806 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:53:58.935890 systemd-logind[1505]: New session 23 of user core. Aug 12 23:53:58.947211 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 12 23:53:59.059045 sshd[4283]: Connection closed by 10.0.0.1 port 50076 Aug 12 23:53:59.059593 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Aug 12 23:53:59.064986 systemd[1]: sshd@22-10.0.0.30:22-10.0.0.1:50076.service: Deactivated successfully. Aug 12 23:53:59.068118 systemd[1]: session-23.scope: Deactivated successfully. Aug 12 23:53:59.068988 systemd-logind[1505]: Session 23 logged out. Waiting for processes to exit. Aug 12 23:53:59.070278 systemd-logind[1505]: Removed session 23. Aug 12 23:54:04.073715 systemd[1]: Started sshd@23-10.0.0.30:22-10.0.0.1:37038.service - OpenSSH per-connection server daemon (10.0.0.1:37038). Aug 12 23:54:04.111680 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 37038 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:54:04.113101 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:04.117262 systemd-logind[1505]: New session 24 of user core. Aug 12 23:54:04.124189 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 12 23:54:04.235138 sshd[4301]: Connection closed by 10.0.0.1 port 37038 Aug 12 23:54:04.233706 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:04.237785 systemd[1]: sshd@23-10.0.0.30:22-10.0.0.1:37038.service: Deactivated successfully. Aug 12 23:54:04.240399 systemd[1]: session-24.scope: Deactivated successfully. Aug 12 23:54:04.241095 systemd-logind[1505]: Session 24 logged out. Waiting for processes to exit. 
Aug 12 23:54:04.241939 systemd-logind[1505]: Removed session 24. Aug 12 23:54:09.258462 systemd[1]: Started sshd@24-10.0.0.30:22-10.0.0.1:37050.service - OpenSSH per-connection server daemon (10.0.0.1:37050). Aug 12 23:54:09.297871 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 37050 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:54:09.299799 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:09.305162 systemd-logind[1505]: New session 25 of user core. Aug 12 23:54:09.310248 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 12 23:54:09.445842 sshd[4316]: Connection closed by 10.0.0.1 port 37050 Aug 12 23:54:09.446367 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:09.455128 systemd[1]: sshd@24-10.0.0.30:22-10.0.0.1:37050.service: Deactivated successfully. Aug 12 23:54:09.457274 systemd[1]: session-25.scope: Deactivated successfully. Aug 12 23:54:09.459026 systemd-logind[1505]: Session 25 logged out. Waiting for processes to exit. Aug 12 23:54:09.467374 systemd[1]: Started sshd@25-10.0.0.30:22-10.0.0.1:37066.service - OpenSSH per-connection server daemon (10.0.0.1:37066). Aug 12 23:54:09.468468 systemd-logind[1505]: Removed session 25. Aug 12 23:54:09.504335 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 37066 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:54:09.506352 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:09.512464 systemd-logind[1505]: New session 26 of user core. Aug 12 23:54:09.524359 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 12 23:54:10.927865 containerd[1520]: time="2025-08-12T23:54:10.927167744Z" level=info msg="StopContainer for \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\" with timeout 30 (s)" Aug 12 23:54:10.927865 containerd[1520]: time="2025-08-12T23:54:10.927494534Z" level=info msg="Stop container \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\" with signal terminated" Aug 12 23:54:10.955581 systemd[1]: cri-containerd-c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02.scope: Deactivated successfully. Aug 12 23:54:10.972709 containerd[1520]: time="2025-08-12T23:54:10.972640345Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:54:10.974895 containerd[1520]: time="2025-08-12T23:54:10.974864024Z" level=info msg="StopContainer for \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\" with timeout 2 (s)" Aug 12 23:54:10.980632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02-rootfs.mount: Deactivated successfully. 
Aug 12 23:54:10.983571 containerd[1520]: time="2025-08-12T23:54:10.983537487Z" level=info msg="Stop container \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\" with signal terminated" Aug 12 23:54:10.987160 containerd[1520]: time="2025-08-12T23:54:10.987033850Z" level=info msg="shim disconnected" id=c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02 namespace=k8s.io Aug 12 23:54:10.987240 containerd[1520]: time="2025-08-12T23:54:10.987164818Z" level=warning msg="cleaning up after shim disconnected" id=c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02 namespace=k8s.io Aug 12 23:54:10.987240 containerd[1520]: time="2025-08-12T23:54:10.987190296Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:54:10.991339 systemd-networkd[1428]: lxc_health: Link DOWN Aug 12 23:54:10.991351 systemd-networkd[1428]: lxc_health: Lost carrier Aug 12 23:54:11.010703 systemd[1]: cri-containerd-d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21.scope: Deactivated successfully. Aug 12 23:54:11.011089 systemd[1]: cri-containerd-d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21.scope: Consumed 9.393s CPU time, 127.2M memory peak, 608K read from disk, 13.3M written to disk. 
Aug 12 23:54:11.026537 containerd[1520]: time="2025-08-12T23:54:11.026413514Z" level=info msg="StopContainer for \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\" returns successfully" Aug 12 23:54:11.030083 containerd[1520]: time="2025-08-12T23:54:11.030032048Z" level=info msg="StopPodSandbox for \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\"" Aug 12 23:54:11.034215 containerd[1520]: time="2025-08-12T23:54:11.030086751Z" level=info msg="Container to stop \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 12 23:54:11.037296 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4-shm.mount: Deactivated successfully. Aug 12 23:54:11.041442 systemd[1]: cri-containerd-6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4.scope: Deactivated successfully. Aug 12 23:54:11.045638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21-rootfs.mount: Deactivated successfully. Aug 12 23:54:11.060769 containerd[1520]: time="2025-08-12T23:54:11.060684575Z" level=info msg="shim disconnected" id=d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21 namespace=k8s.io Aug 12 23:54:11.060769 containerd[1520]: time="2025-08-12T23:54:11.060753155Z" level=warning msg="cleaning up after shim disconnected" id=d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21 namespace=k8s.io Aug 12 23:54:11.060769 containerd[1520]: time="2025-08-12T23:54:11.060762854Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:54:11.068190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4-rootfs.mount: Deactivated successfully. 
Aug 12 23:54:11.071069 containerd[1520]: time="2025-08-12T23:54:11.070837021Z" level=info msg="shim disconnected" id=6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4 namespace=k8s.io Aug 12 23:54:11.071069 containerd[1520]: time="2025-08-12T23:54:11.070899830Z" level=warning msg="cleaning up after shim disconnected" id=6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4 namespace=k8s.io Aug 12 23:54:11.071069 containerd[1520]: time="2025-08-12T23:54:11.070907645Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:54:11.080483 containerd[1520]: time="2025-08-12T23:54:11.080438672Z" level=info msg="StopContainer for \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\" returns successfully" Aug 12 23:54:11.081352 containerd[1520]: time="2025-08-12T23:54:11.081305657Z" level=info msg="StopPodSandbox for \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\"" Aug 12 23:54:11.081421 containerd[1520]: time="2025-08-12T23:54:11.081373976Z" level=info msg="Container to stop \"cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 12 23:54:11.081421 containerd[1520]: time="2025-08-12T23:54:11.081416056Z" level=info msg="Container to stop \"5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 12 23:54:11.081472 containerd[1520]: time="2025-08-12T23:54:11.081425474Z" level=info msg="Container to stop \"84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 12 23:54:11.081472 containerd[1520]: time="2025-08-12T23:54:11.081435112Z" level=info msg="Container to stop \"bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 12 23:54:11.081472 
containerd[1520]: time="2025-08-12T23:54:11.081447035Z" level=info msg="Container to stop \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 12 23:54:11.089888 systemd[1]: cri-containerd-240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881.scope: Deactivated successfully. Aug 12 23:54:11.092432 containerd[1520]: time="2025-08-12T23:54:11.092353511Z" level=info msg="TearDown network for sandbox \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\" successfully" Aug 12 23:54:11.092432 containerd[1520]: time="2025-08-12T23:54:11.092412392Z" level=info msg="StopPodSandbox for \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\" returns successfully" Aug 12 23:54:11.118833 containerd[1520]: time="2025-08-12T23:54:11.118757240Z" level=info msg="shim disconnected" id=240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881 namespace=k8s.io Aug 12 23:54:11.118833 containerd[1520]: time="2025-08-12T23:54:11.118822393Z" level=warning msg="cleaning up after shim disconnected" id=240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881 namespace=k8s.io Aug 12 23:54:11.118833 containerd[1520]: time="2025-08-12T23:54:11.118830820Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:54:11.136686 containerd[1520]: time="2025-08-12T23:54:11.136639576Z" level=info msg="TearDown network for sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" successfully" Aug 12 23:54:11.136686 containerd[1520]: time="2025-08-12T23:54:11.136666388Z" level=info msg="StopPodSandbox for \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" returns successfully" Aug 12 23:54:11.158086 kubelet[2657]: I0812 23:54:11.156426 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-lib-modules" (OuterVolumeSpecName: 
"lib-modules") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:11.158086 kubelet[2657]: I0812 23:54:11.156749 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-lib-modules\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.158086 kubelet[2657]: I0812 23:54:11.156796 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-xtables-lock\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.158086 kubelet[2657]: I0812 23:54:11.156815 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-host-proc-sys-kernel\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.158086 kubelet[2657]: I0812 23:54:11.156840 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sqdc\" (UniqueName: \"kubernetes.io/projected/e02750d6-1b41-46be-929d-8c800796b280-kube-api-access-9sqdc\") pod \"e02750d6-1b41-46be-929d-8c800796b280\" (UID: \"e02750d6-1b41-46be-929d-8c800796b280\") " Aug 12 23:54:11.158086 kubelet[2657]: I0812 23:54:11.156856 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cilium-run\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.158716 kubelet[2657]: 
I0812 23:54:11.156869 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-etc-cni-netd\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.158716 kubelet[2657]: I0812 23:54:11.156885 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cilium-config-path\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.158716 kubelet[2657]: I0812 23:54:11.156902 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-host-proc-sys-net\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.158716 kubelet[2657]: I0812 23:54:11.156919 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztsxn\" (UniqueName: \"kubernetes.io/projected/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-kube-api-access-ztsxn\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.158716 kubelet[2657]: I0812 23:54:11.156936 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-hostproc\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.158716 kubelet[2657]: I0812 23:54:11.156951 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-bpf-maps\") pod 
\"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.158872 kubelet[2657]: I0812 23:54:11.156993 2657 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.158872 kubelet[2657]: I0812 23:54:11.157028 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:11.158872 kubelet[2657]: I0812 23:54:11.157085 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:11.158872 kubelet[2657]: I0812 23:54:11.156874 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:11.158872 kubelet[2657]: I0812 23:54:11.157383 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-hostproc" (OuterVolumeSpecName: "hostproc") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:11.159078 kubelet[2657]: I0812 23:54:11.157407 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:11.159078 kubelet[2657]: I0812 23:54:11.157426 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:11.159078 kubelet[2657]: I0812 23:54:11.157445 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:11.160862 kubelet[2657]: I0812 23:54:11.160807 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e02750d6-1b41-46be-929d-8c800796b280-kube-api-access-9sqdc" (OuterVolumeSpecName: "kube-api-access-9sqdc") pod "e02750d6-1b41-46be-929d-8c800796b280" (UID: "e02750d6-1b41-46be-929d-8c800796b280"). InnerVolumeSpecName "kube-api-access-9sqdc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 12 23:54:11.161018 kubelet[2657]: I0812 23:54:11.160915 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-kube-api-access-ztsxn" (OuterVolumeSpecName: "kube-api-access-ztsxn") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). InnerVolumeSpecName "kube-api-access-ztsxn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 12 23:54:11.161945 kubelet[2657]: I0812 23:54:11.161894 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 12 23:54:11.257407 kubelet[2657]: I0812 23:54:11.257216 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-hubble-tls\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.257407 kubelet[2657]: I0812 23:54:11.257266 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cilium-cgroup\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.257407 kubelet[2657]: I0812 23:54:11.257286 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cni-path\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " 
Aug 12 23:54:11.257407 kubelet[2657]: I0812 23:54:11.257304 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-clustermesh-secrets\") pod \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\" (UID: \"3df84f06-a8ea-430a-85a8-c86ae28ab4fa\") " Aug 12 23:54:11.257407 kubelet[2657]: I0812 23:54:11.257325 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e02750d6-1b41-46be-929d-8c800796b280-cilium-config-path\") pod \"e02750d6-1b41-46be-929d-8c800796b280\" (UID: \"e02750d6-1b41-46be-929d-8c800796b280\") " Aug 12 23:54:11.257407 kubelet[2657]: I0812 23:54:11.257365 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:11.257691 kubelet[2657]: I0812 23:54:11.257374 2657 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ztsxn\" (UniqueName: \"kubernetes.io/projected/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-kube-api-access-ztsxn\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.257691 kubelet[2657]: I0812 23:54:11.257412 2657 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.257691 kubelet[2657]: I0812 23:54:11.257426 2657 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.257691 kubelet[2657]: I0812 23:54:11.257435 2657 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.257691 kubelet[2657]: I0812 23:54:11.257447 2657 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.257691 kubelet[2657]: I0812 23:54:11.257457 2657 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9sqdc\" (UniqueName: \"kubernetes.io/projected/e02750d6-1b41-46be-929d-8c800796b280-kube-api-access-9sqdc\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.257691 kubelet[2657]: I0812 23:54:11.257467 2657 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.257691 kubelet[2657]: 
I0812 23:54:11.257476 2657 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.257890 kubelet[2657]: I0812 23:54:11.257485 2657 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.257890 kubelet[2657]: I0812 23:54:11.257495 2657 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.257890 kubelet[2657]: I0812 23:54:11.257516 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cni-path" (OuterVolumeSpecName: "cni-path") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 12 23:54:11.261063 kubelet[2657]: I0812 23:54:11.260985 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 12 23:54:11.261205 kubelet[2657]: I0812 23:54:11.261193 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3df84f06-a8ea-430a-85a8-c86ae28ab4fa" (UID: "3df84f06-a8ea-430a-85a8-c86ae28ab4fa"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 12 23:54:11.262572 kubelet[2657]: I0812 23:54:11.262535 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e02750d6-1b41-46be-929d-8c800796b280-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e02750d6-1b41-46be-929d-8c800796b280" (UID: "e02750d6-1b41-46be-929d-8c800796b280"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 12 23:54:11.358020 kubelet[2657]: I0812 23:54:11.357950 2657 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.358020 kubelet[2657]: I0812 23:54:11.358002 2657 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e02750d6-1b41-46be-929d-8c800796b280-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.358020 kubelet[2657]: I0812 23:54:11.358012 2657 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.358020 kubelet[2657]: I0812 23:54:11.358022 2657 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.358020 kubelet[2657]: I0812 23:54:11.358032 2657 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3df84f06-a8ea-430a-85a8-c86ae28ab4fa-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:54:11.391062 kubelet[2657]: I0812 23:54:11.391014 2657 scope.go:117] "RemoveContainer" 
containerID="c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02" Aug 12 23:54:11.397862 containerd[1520]: time="2025-08-12T23:54:11.397789241Z" level=info msg="RemoveContainer for \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\"" Aug 12 23:54:11.398855 systemd[1]: Removed slice kubepods-besteffort-pode02750d6_1b41_46be_929d_8c800796b280.slice - libcontainer container kubepods-besteffort-pode02750d6_1b41_46be_929d_8c800796b280.slice. Aug 12 23:54:11.402630 systemd[1]: Removed slice kubepods-burstable-pod3df84f06_a8ea_430a_85a8_c86ae28ab4fa.slice - libcontainer container kubepods-burstable-pod3df84f06_a8ea_430a_85a8_c86ae28ab4fa.slice. Aug 12 23:54:11.402734 systemd[1]: kubepods-burstable-pod3df84f06_a8ea_430a_85a8_c86ae28ab4fa.slice: Consumed 9.572s CPU time, 127.5M memory peak, 632K read from disk, 13.3M written to disk. Aug 12 23:54:11.411267 containerd[1520]: time="2025-08-12T23:54:11.411219223Z" level=info msg="RemoveContainer for \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\" returns successfully" Aug 12 23:54:11.411545 kubelet[2657]: I0812 23:54:11.411521 2657 scope.go:117] "RemoveContainer" containerID="c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02" Aug 12 23:54:11.411892 containerd[1520]: time="2025-08-12T23:54:11.411838939Z" level=error msg="ContainerStatus for \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\": not found" Aug 12 23:54:11.418631 kubelet[2657]: E0812 23:54:11.418594 2657 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\": not found" containerID="c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02" Aug 12 23:54:11.418716 
kubelet[2657]: I0812 23:54:11.418642 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02"} err="failed to get container status \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\": rpc error: code = NotFound desc = an error occurred when try to find container \"c139e4a85f7aa86187e26acb3f53bef49beed444d4ba6e69afcc052f77f4bb02\": not found" Aug 12 23:54:11.418759 kubelet[2657]: I0812 23:54:11.418721 2657 scope.go:117] "RemoveContainer" containerID="d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21" Aug 12 23:54:11.419911 containerd[1520]: time="2025-08-12T23:54:11.419876974Z" level=info msg="RemoveContainer for \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\"" Aug 12 23:54:11.424545 containerd[1520]: time="2025-08-12T23:54:11.424513368Z" level=info msg="RemoveContainer for \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\" returns successfully" Aug 12 23:54:11.424727 kubelet[2657]: I0812 23:54:11.424699 2657 scope.go:117] "RemoveContainer" containerID="cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc" Aug 12 23:54:11.425700 containerd[1520]: time="2025-08-12T23:54:11.425662367Z" level=info msg="RemoveContainer for \"cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc\"" Aug 12 23:54:11.429282 containerd[1520]: time="2025-08-12T23:54:11.429254841Z" level=info msg="RemoveContainer for \"cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc\" returns successfully" Aug 12 23:54:11.429447 kubelet[2657]: I0812 23:54:11.429408 2657 scope.go:117] "RemoveContainer" containerID="bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502" Aug 12 23:54:11.430284 containerd[1520]: time="2025-08-12T23:54:11.430254377Z" level=info msg="RemoveContainer for \"bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502\"" Aug 12 23:54:11.442220 
containerd[1520]: time="2025-08-12T23:54:11.442102609Z" level=info msg="RemoveContainer for \"bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502\" returns successfully" Aug 12 23:54:11.442497 kubelet[2657]: I0812 23:54:11.442449 2657 scope.go:117] "RemoveContainer" containerID="84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030" Aug 12 23:54:11.443759 containerd[1520]: time="2025-08-12T23:54:11.443727290Z" level=info msg="RemoveContainer for \"84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030\"" Aug 12 23:54:11.447332 containerd[1520]: time="2025-08-12T23:54:11.447298334Z" level=info msg="RemoveContainer for \"84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030\" returns successfully" Aug 12 23:54:11.447467 kubelet[2657]: I0812 23:54:11.447444 2657 scope.go:117] "RemoveContainer" containerID="5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489" Aug 12 23:54:11.448383 containerd[1520]: time="2025-08-12T23:54:11.448338386Z" level=info msg="RemoveContainer for \"5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489\"" Aug 12 23:54:11.451829 containerd[1520]: time="2025-08-12T23:54:11.451796665Z" level=info msg="RemoveContainer for \"5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489\" returns successfully" Aug 12 23:54:11.451978 kubelet[2657]: I0812 23:54:11.451955 2657 scope.go:117] "RemoveContainer" containerID="d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21" Aug 12 23:54:11.452157 containerd[1520]: time="2025-08-12T23:54:11.452124417Z" level=error msg="ContainerStatus for \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\": not found" Aug 12 23:54:11.452322 kubelet[2657]: E0812 23:54:11.452281 2657 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\": not found" containerID="d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21" Aug 12 23:54:11.452382 kubelet[2657]: I0812 23:54:11.452322 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21"} err="failed to get container status \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4bae90ea5d01e00e33cd0cd6dff9bb7eed1bcebc422e0ea1899fc855ec2af21\": not found" Aug 12 23:54:11.452382 kubelet[2657]: I0812 23:54:11.452351 2657 scope.go:117] "RemoveContainer" containerID="cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc" Aug 12 23:54:11.452552 containerd[1520]: time="2025-08-12T23:54:11.452518364Z" level=error msg="ContainerStatus for \"cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc\": not found" Aug 12 23:54:11.452653 kubelet[2657]: E0812 23:54:11.452624 2657 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc\": not found" containerID="cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc" Aug 12 23:54:11.452695 kubelet[2657]: I0812 23:54:11.452660 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc"} err="failed to get container status \"cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"cd68146dc9028d54dc198b00a69eb5953ba0c34978a88b0a666f4ed60e3dffbc\": not found" Aug 12 23:54:11.452695 kubelet[2657]: I0812 23:54:11.452685 2657 scope.go:117] "RemoveContainer" containerID="bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502" Aug 12 23:54:11.452873 containerd[1520]: time="2025-08-12T23:54:11.452833612Z" level=error msg="ContainerStatus for \"bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502\": not found" Aug 12 23:54:11.452965 kubelet[2657]: E0812 23:54:11.452944 2657 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502\": not found" containerID="bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502" Aug 12 23:54:11.453041 kubelet[2657]: I0812 23:54:11.452963 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502"} err="failed to get container status \"bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb8b782048c2374380ab7889a2d4073695166ffbca9fbde66653c17301d0a502\": not found" Aug 12 23:54:11.453041 kubelet[2657]: I0812 23:54:11.452975 2657 scope.go:117] "RemoveContainer" containerID="84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030" Aug 12 23:54:11.453155 containerd[1520]: time="2025-08-12T23:54:11.453125235Z" level=error msg="ContainerStatus for \"84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030\": not found" Aug 12 23:54:11.453275 kubelet[2657]: E0812 23:54:11.453256 2657 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030\": not found" containerID="84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030" Aug 12 23:54:11.453328 kubelet[2657]: I0812 23:54:11.453279 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030"} err="failed to get container status \"84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030\": rpc error: code = NotFound desc = an error occurred when try to find container \"84629723792ded64cf51a0225e0225b999db625abe447965b4c1472fdd3db030\": not found" Aug 12 23:54:11.453328 kubelet[2657]: I0812 23:54:11.453293 2657 scope.go:117] "RemoveContainer" containerID="5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489" Aug 12 23:54:11.453464 containerd[1520]: time="2025-08-12T23:54:11.453437488Z" level=error msg="ContainerStatus for \"5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489\": not found" Aug 12 23:54:11.453567 kubelet[2657]: E0812 23:54:11.453542 2657 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489\": not found" containerID="5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489" Aug 12 23:54:11.453567 kubelet[2657]: I0812 23:54:11.453568 2657 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489"} err="failed to get container status \"5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f7761a5de84ac7d99bff5e0d6e7d6d6aaa1edc2d15399b4d0f5cf5424df6489\": not found" Aug 12 23:54:11.884765 kubelet[2657]: I0812 23:54:11.884701 2657 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df84f06-a8ea-430a-85a8-c86ae28ab4fa" path="/var/lib/kubelet/pods/3df84f06-a8ea-430a-85a8-c86ae28ab4fa/volumes" Aug 12 23:54:11.885839 kubelet[2657]: I0812 23:54:11.885732 2657 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e02750d6-1b41-46be-929d-8c800796b280" path="/var/lib/kubelet/pods/e02750d6-1b41-46be-929d-8c800796b280/volumes" Aug 12 23:54:11.944370 systemd[1]: var-lib-kubelet-pods-e02750d6\x2d1b41\x2d46be\x2d929d\x2d8c800796b280-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9sqdc.mount: Deactivated successfully. Aug 12 23:54:11.944542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881-rootfs.mount: Deactivated successfully. Aug 12 23:54:11.944659 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881-shm.mount: Deactivated successfully. Aug 12 23:54:11.944801 systemd[1]: var-lib-kubelet-pods-3df84f06\x2da8ea\x2d430a\x2d85a8\x2dc86ae28ab4fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dztsxn.mount: Deactivated successfully. Aug 12 23:54:11.944948 systemd[1]: var-lib-kubelet-pods-3df84f06\x2da8ea\x2d430a\x2d85a8\x2dc86ae28ab4fa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Aug 12 23:54:11.945076 systemd[1]: var-lib-kubelet-pods-3df84f06\x2da8ea\x2d430a\x2d85a8\x2dc86ae28ab4fa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 12 23:54:12.889399 sshd[4331]: Connection closed by 10.0.0.1 port 37066 Aug 12 23:54:12.889954 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:12.899957 systemd[1]: sshd@25-10.0.0.30:22-10.0.0.1:37066.service: Deactivated successfully. Aug 12 23:54:12.901935 systemd[1]: session-26.scope: Deactivated successfully. Aug 12 23:54:12.903616 systemd-logind[1505]: Session 26 logged out. Waiting for processes to exit. Aug 12 23:54:12.912314 systemd[1]: Started sshd@26-10.0.0.30:22-10.0.0.1:46840.service - OpenSSH per-connection server daemon (10.0.0.1:46840). Aug 12 23:54:12.913528 systemd-logind[1505]: Removed session 26. Aug 12 23:54:12.952670 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 46840 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:54:12.954492 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:12.959481 systemd-logind[1505]: New session 27 of user core. Aug 12 23:54:12.970198 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 12 23:54:13.400451 sshd[4491]: Connection closed by 10.0.0.1 port 46840 Aug 12 23:54:13.400835 sshd-session[4488]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:13.411821 kubelet[2657]: I0812 23:54:13.410745 2657 memory_manager.go:355] "RemoveStaleState removing state" podUID="3df84f06-a8ea-430a-85a8-c86ae28ab4fa" containerName="cilium-agent" Aug 12 23:54:13.411821 kubelet[2657]: I0812 23:54:13.410776 2657 memory_manager.go:355] "RemoveStaleState removing state" podUID="e02750d6-1b41-46be-929d-8c800796b280" containerName="cilium-operator" Aug 12 23:54:13.419395 systemd[1]: sshd@26-10.0.0.30:22-10.0.0.1:46840.service: Deactivated successfully. 
Aug 12 23:54:13.421831 systemd[1]: session-27.scope: Deactivated successfully. Aug 12 23:54:13.424704 systemd-logind[1505]: Session 27 logged out. Waiting for processes to exit. Aug 12 23:54:13.435784 systemd[1]: Started sshd@27-10.0.0.30:22-10.0.0.1:46844.service - OpenSSH per-connection server daemon (10.0.0.1:46844). Aug 12 23:54:13.439399 systemd-logind[1505]: Removed session 27. Aug 12 23:54:13.449889 systemd[1]: Created slice kubepods-burstable-pod53e3307a_f4eb_4fab_8fc0_6301633bb774.slice - libcontainer container kubepods-burstable-pod53e3307a_f4eb_4fab_8fc0_6301633bb774.slice. Aug 12 23:54:13.469305 kubelet[2657]: I0812 23:54:13.469237 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53e3307a-f4eb-4fab-8fc0-6301633bb774-bpf-maps\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469305 kubelet[2657]: I0812 23:54:13.469289 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/53e3307a-f4eb-4fab-8fc0-6301633bb774-cilium-ipsec-secrets\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469305 kubelet[2657]: I0812 23:54:13.469310 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mls8c\" (UniqueName: \"kubernetes.io/projected/53e3307a-f4eb-4fab-8fc0-6301633bb774-kube-api-access-mls8c\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469515 kubelet[2657]: I0812 23:54:13.469326 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53e3307a-f4eb-4fab-8fc0-6301633bb774-lib-modules\") pod 
\"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469515 kubelet[2657]: I0812 23:54:13.469343 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53e3307a-f4eb-4fab-8fc0-6301633bb774-clustermesh-secrets\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469515 kubelet[2657]: I0812 23:54:13.469359 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53e3307a-f4eb-4fab-8fc0-6301633bb774-hubble-tls\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469515 kubelet[2657]: I0812 23:54:13.469375 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53e3307a-f4eb-4fab-8fc0-6301633bb774-host-proc-sys-kernel\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469515 kubelet[2657]: I0812 23:54:13.469393 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53e3307a-f4eb-4fab-8fc0-6301633bb774-xtables-lock\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469515 kubelet[2657]: I0812 23:54:13.469407 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53e3307a-f4eb-4fab-8fc0-6301633bb774-host-proc-sys-net\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 
23:54:13.469662 kubelet[2657]: I0812 23:54:13.469423 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53e3307a-f4eb-4fab-8fc0-6301633bb774-cilium-run\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469662 kubelet[2657]: I0812 23:54:13.469437 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53e3307a-f4eb-4fab-8fc0-6301633bb774-hostproc\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469662 kubelet[2657]: I0812 23:54:13.469451 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53e3307a-f4eb-4fab-8fc0-6301633bb774-cilium-cgroup\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469662 kubelet[2657]: I0812 23:54:13.469463 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53e3307a-f4eb-4fab-8fc0-6301633bb774-cni-path\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469662 kubelet[2657]: I0812 23:54:13.469478 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53e3307a-f4eb-4fab-8fc0-6301633bb774-etc-cni-netd\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.469662 kubelet[2657]: I0812 23:54:13.469494 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53e3307a-f4eb-4fab-8fc0-6301633bb774-cilium-config-path\") pod \"cilium-cdn7b\" (UID: \"53e3307a-f4eb-4fab-8fc0-6301633bb774\") " pod="kube-system/cilium-cdn7b" Aug 12 23:54:13.494532 sshd[4503]: Accepted publickey for core from 10.0.0.1 port 46844 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:54:13.498904 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:13.505750 systemd-logind[1505]: New session 28 of user core. Aug 12 23:54:13.516213 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 12 23:54:13.568092 sshd[4506]: Connection closed by 10.0.0.1 port 46844 Aug 12 23:54:13.568470 sshd-session[4503]: pam_unix(sshd:session): session closed for user core Aug 12 23:54:13.590222 systemd[1]: sshd@27-10.0.0.30:22-10.0.0.1:46844.service: Deactivated successfully. Aug 12 23:54:13.592424 systemd[1]: session-28.scope: Deactivated successfully. Aug 12 23:54:13.594235 systemd-logind[1505]: Session 28 logged out. Waiting for processes to exit. Aug 12 23:54:13.605324 systemd[1]: Started sshd@28-10.0.0.30:22-10.0.0.1:46848.service - OpenSSH per-connection server daemon (10.0.0.1:46848). Aug 12 23:54:13.606122 systemd-logind[1505]: Removed session 28. Aug 12 23:54:13.640910 sshd[4517]: Accepted publickey for core from 10.0.0.1 port 46848 ssh2: RSA SHA256:wGd+03EaUmBByFl09gD4UfhoSWgR+BOzL4n2I5S9IQ0 Aug 12 23:54:13.642524 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:54:13.647437 systemd-logind[1505]: New session 29 of user core. Aug 12 23:54:13.657201 systemd[1]: Started session-29.scope - Session 29 of User core. 
Aug 12 23:54:13.756411 containerd[1520]: time="2025-08-12T23:54:13.756339352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdn7b,Uid:53e3307a-f4eb-4fab-8fc0-6301633bb774,Namespace:kube-system,Attempt:0,}" Aug 12 23:54:13.779420 containerd[1520]: time="2025-08-12T23:54:13.779020826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:54:13.779420 containerd[1520]: time="2025-08-12T23:54:13.779178615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:54:13.779420 containerd[1520]: time="2025-08-12T23:54:13.779202421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:54:13.779631 containerd[1520]: time="2025-08-12T23:54:13.779399715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:54:13.807202 systemd[1]: Started cri-containerd-afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31.scope - libcontainer container afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31. 
Aug 12 23:54:13.838336 containerd[1520]: time="2025-08-12T23:54:13.838284874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdn7b,Uid:53e3307a-f4eb-4fab-8fc0-6301633bb774,Namespace:kube-system,Attempt:0,} returns sandbox id \"afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31\"" Aug 12 23:54:13.843293 containerd[1520]: time="2025-08-12T23:54:13.843239158Z" level=info msg="CreateContainer within sandbox \"afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 12 23:54:13.856231 containerd[1520]: time="2025-08-12T23:54:13.856183171Z" level=info msg="CreateContainer within sandbox \"afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2822aa4f3d4c8fccdcd9a68365f379334c6b4fcd3b6ae7869852204d1b855467\"" Aug 12 23:54:13.856637 containerd[1520]: time="2025-08-12T23:54:13.856602235Z" level=info msg="StartContainer for \"2822aa4f3d4c8fccdcd9a68365f379334c6b4fcd3b6ae7869852204d1b855467\"" Aug 12 23:54:13.887237 systemd[1]: Started cri-containerd-2822aa4f3d4c8fccdcd9a68365f379334c6b4fcd3b6ae7869852204d1b855467.scope - libcontainer container 2822aa4f3d4c8fccdcd9a68365f379334c6b4fcd3b6ae7869852204d1b855467. Aug 12 23:54:13.914304 containerd[1520]: time="2025-08-12T23:54:13.914175247Z" level=info msg="StartContainer for \"2822aa4f3d4c8fccdcd9a68365f379334c6b4fcd3b6ae7869852204d1b855467\" returns successfully" Aug 12 23:54:13.929099 systemd[1]: cri-containerd-2822aa4f3d4c8fccdcd9a68365f379334c6b4fcd3b6ae7869852204d1b855467.scope: Deactivated successfully. 
Aug 12 23:54:13.964417 containerd[1520]: time="2025-08-12T23:54:13.964341765Z" level=info msg="shim disconnected" id=2822aa4f3d4c8fccdcd9a68365f379334c6b4fcd3b6ae7869852204d1b855467 namespace=k8s.io Aug 12 23:54:13.964417 containerd[1520]: time="2025-08-12T23:54:13.964400606Z" level=warning msg="cleaning up after shim disconnected" id=2822aa4f3d4c8fccdcd9a68365f379334c6b4fcd3b6ae7869852204d1b855467 namespace=k8s.io Aug 12 23:54:13.964417 containerd[1520]: time="2025-08-12T23:54:13.964408952Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:54:14.405120 containerd[1520]: time="2025-08-12T23:54:14.405024283Z" level=info msg="CreateContainer within sandbox \"afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 12 23:54:14.419108 containerd[1520]: time="2025-08-12T23:54:14.419038500Z" level=info msg="CreateContainer within sandbox \"afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"378d81d22356ba0f3304fb1aeadc143284c521c34d2eccd8e5d1d3f171a9216c\"" Aug 12 23:54:14.420771 containerd[1520]: time="2025-08-12T23:54:14.420669943Z" level=info msg="StartContainer for \"378d81d22356ba0f3304fb1aeadc143284c521c34d2eccd8e5d1d3f171a9216c\"" Aug 12 23:54:14.471299 systemd[1]: Started cri-containerd-378d81d22356ba0f3304fb1aeadc143284c521c34d2eccd8e5d1d3f171a9216c.scope - libcontainer container 378d81d22356ba0f3304fb1aeadc143284c521c34d2eccd8e5d1d3f171a9216c. Aug 12 23:54:14.501325 containerd[1520]: time="2025-08-12T23:54:14.501276043Z" level=info msg="StartContainer for \"378d81d22356ba0f3304fb1aeadc143284c521c34d2eccd8e5d1d3f171a9216c\" returns successfully" Aug 12 23:54:14.508563 systemd[1]: cri-containerd-378d81d22356ba0f3304fb1aeadc143284c521c34d2eccd8e5d1d3f171a9216c.scope: Deactivated successfully. 
Aug 12 23:54:14.534347 containerd[1520]: time="2025-08-12T23:54:14.534262889Z" level=info msg="shim disconnected" id=378d81d22356ba0f3304fb1aeadc143284c521c34d2eccd8e5d1d3f171a9216c namespace=k8s.io Aug 12 23:54:14.534347 containerd[1520]: time="2025-08-12T23:54:14.534336478Z" level=warning msg="cleaning up after shim disconnected" id=378d81d22356ba0f3304fb1aeadc143284c521c34d2eccd8e5d1d3f171a9216c namespace=k8s.io Aug 12 23:54:14.534347 containerd[1520]: time="2025-08-12T23:54:14.534350153Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:54:15.409860 containerd[1520]: time="2025-08-12T23:54:15.409787053Z" level=info msg="CreateContainer within sandbox \"afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 12 23:54:15.427419 containerd[1520]: time="2025-08-12T23:54:15.427363419Z" level=info msg="CreateContainer within sandbox \"afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bd546afa01b35eee1ac79ac5a878d5dc990e9f2829d8a8ea657fa49fcab3c4ce\"" Aug 12 23:54:15.433263 containerd[1520]: time="2025-08-12T23:54:15.433205813Z" level=info msg="StartContainer for \"bd546afa01b35eee1ac79ac5a878d5dc990e9f2829d8a8ea657fa49fcab3c4ce\"" Aug 12 23:54:15.468254 systemd[1]: Started cri-containerd-bd546afa01b35eee1ac79ac5a878d5dc990e9f2829d8a8ea657fa49fcab3c4ce.scope - libcontainer container bd546afa01b35eee1ac79ac5a878d5dc990e9f2829d8a8ea657fa49fcab3c4ce. Aug 12 23:54:15.507823 containerd[1520]: time="2025-08-12T23:54:15.507761852Z" level=info msg="StartContainer for \"bd546afa01b35eee1ac79ac5a878d5dc990e9f2829d8a8ea657fa49fcab3c4ce\" returns successfully" Aug 12 23:54:15.513619 systemd[1]: cri-containerd-bd546afa01b35eee1ac79ac5a878d5dc990e9f2829d8a8ea657fa49fcab3c4ce.scope: Deactivated successfully. 
Aug 12 23:54:15.540423 containerd[1520]: time="2025-08-12T23:54:15.540330205Z" level=info msg="shim disconnected" id=bd546afa01b35eee1ac79ac5a878d5dc990e9f2829d8a8ea657fa49fcab3c4ce namespace=k8s.io
Aug 12 23:54:15.540423 containerd[1520]: time="2025-08-12T23:54:15.540417641Z" level=warning msg="cleaning up after shim disconnected" id=bd546afa01b35eee1ac79ac5a878d5dc990e9f2829d8a8ea657fa49fcab3c4ce namespace=k8s.io
Aug 12 23:54:15.540423 containerd[1520]: time="2025-08-12T23:54:15.540427048Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:15.582782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd546afa01b35eee1ac79ac5a878d5dc990e9f2829d8a8ea657fa49fcab3c4ce-rootfs.mount: Deactivated successfully.
Aug 12 23:54:15.955436 kubelet[2657]: E0812 23:54:15.955377 2657 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 12 23:54:16.414444 containerd[1520]: time="2025-08-12T23:54:16.414388394Z" level=info msg="CreateContainer within sandbox \"afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 12 23:54:16.430535 containerd[1520]: time="2025-08-12T23:54:16.430456316Z" level=info msg="CreateContainer within sandbox \"afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de8d9c095db697bad44ef10261ed93cf981566b38b56799b946bc3b9485fba83\""
Aug 12 23:54:16.431499 containerd[1520]: time="2025-08-12T23:54:16.431464347Z" level=info msg="StartContainer for \"de8d9c095db697bad44ef10261ed93cf981566b38b56799b946bc3b9485fba83\""
Aug 12 23:54:16.467268 systemd[1]: Started cri-containerd-de8d9c095db697bad44ef10261ed93cf981566b38b56799b946bc3b9485fba83.scope - libcontainer container de8d9c095db697bad44ef10261ed93cf981566b38b56799b946bc3b9485fba83.
Aug 12 23:54:16.495065 systemd[1]: cri-containerd-de8d9c095db697bad44ef10261ed93cf981566b38b56799b946bc3b9485fba83.scope: Deactivated successfully.
Aug 12 23:54:16.496734 containerd[1520]: time="2025-08-12T23:54:16.496685604Z" level=info msg="StartContainer for \"de8d9c095db697bad44ef10261ed93cf981566b38b56799b946bc3b9485fba83\" returns successfully"
Aug 12 23:54:16.522130 containerd[1520]: time="2025-08-12T23:54:16.522032552Z" level=info msg="shim disconnected" id=de8d9c095db697bad44ef10261ed93cf981566b38b56799b946bc3b9485fba83 namespace=k8s.io
Aug 12 23:54:16.522130 containerd[1520]: time="2025-08-12T23:54:16.522116620Z" level=warning msg="cleaning up after shim disconnected" id=de8d9c095db697bad44ef10261ed93cf981566b38b56799b946bc3b9485fba83 namespace=k8s.io
Aug 12 23:54:16.522130 containerd[1520]: time="2025-08-12T23:54:16.522128033Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 12 23:54:16.583564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de8d9c095db697bad44ef10261ed93cf981566b38b56799b946bc3b9485fba83-rootfs.mount: Deactivated successfully.
Aug 12 23:54:17.417870 containerd[1520]: time="2025-08-12T23:54:17.417815605Z" level=info msg="CreateContainer within sandbox \"afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 12 23:54:17.481946 containerd[1520]: time="2025-08-12T23:54:17.481884281Z" level=info msg="CreateContainer within sandbox \"afb270e0c01cd5e87d622f0cd6f5ec312722526d61e05ed8bb19188239101b31\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dbafcf22798950927c7dda380cf9c93d44a1b844010fa48237c5f5265b3f655f\""
Aug 12 23:54:17.482547 containerd[1520]: time="2025-08-12T23:54:17.482521789Z" level=info msg="StartContainer for \"dbafcf22798950927c7dda380cf9c93d44a1b844010fa48237c5f5265b3f655f\""
Aug 12 23:54:17.520348 systemd[1]: Started cri-containerd-dbafcf22798950927c7dda380cf9c93d44a1b844010fa48237c5f5265b3f655f.scope - libcontainer container dbafcf22798950927c7dda380cf9c93d44a1b844010fa48237c5f5265b3f655f.
Aug 12 23:54:17.554953 containerd[1520]: time="2025-08-12T23:54:17.554894250Z" level=info msg="StartContainer for \"dbafcf22798950927c7dda380cf9c93d44a1b844010fa48237c5f5265b3f655f\" returns successfully"
Aug 12 23:54:18.024139 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 12 23:54:18.436894 kubelet[2657]: I0812 23:54:18.436816 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cdn7b" podStartSLOduration=5.436795578 podStartE2EDuration="5.436795578s" podCreationTimestamp="2025-08-12 23:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:54:18.435754014 +0000 UTC m=+112.646799779" watchObservedRunningTime="2025-08-12 23:54:18.436795578 +0000 UTC m=+112.647841343"
Aug 12 23:54:18.804979 kubelet[2657]: I0812 23:54:18.804792 2657 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-12T23:54:18Z","lastTransitionTime":"2025-08-12T23:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 12 23:54:20.881641 kubelet[2657]: E0812 23:54:20.881554 2657 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-frblj" podUID="58aa30d4-197f-4d54-8b69-5b388876d19e"
Aug 12 23:54:21.264820 systemd-networkd[1428]: lxc_health: Link UP
Aug 12 23:54:21.265203 systemd-networkd[1428]: lxc_health: Gained carrier
Aug 12 23:54:23.039334 systemd-networkd[1428]: lxc_health: Gained IPv6LL
Aug 12 23:54:25.869762 containerd[1520]: time="2025-08-12T23:54:25.869707743Z" level=info msg="StopPodSandbox for \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\""
Aug 12 23:54:25.870226 containerd[1520]: time="2025-08-12T23:54:25.869818663Z" level=info msg="TearDown network for sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" successfully"
Aug 12 23:54:25.870226 containerd[1520]: time="2025-08-12T23:54:25.869831307Z" level=info msg="StopPodSandbox for \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" returns successfully"
Aug 12 23:54:25.870226 containerd[1520]: time="2025-08-12T23:54:25.870197049Z" level=info msg="RemovePodSandbox for \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\""
Aug 12 23:54:25.870226 containerd[1520]: time="2025-08-12T23:54:25.870218720Z" level=info msg="Forcibly stopping sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\""
Aug 12 23:54:25.870381 containerd[1520]: time="2025-08-12T23:54:25.870267613Z" level=info msg="TearDown network for sandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" successfully"
Aug 12 23:54:25.874208 containerd[1520]: time="2025-08-12T23:54:25.874177464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 12 23:54:25.874276 containerd[1520]: time="2025-08-12T23:54:25.874225465Z" level=info msg="RemovePodSandbox \"240078df70e3eee635b2031bd4935ea41b5fa5c153afc3eacb8e04a3b6c4c881\" returns successfully"
Aug 12 23:54:25.874718 containerd[1520]: time="2025-08-12T23:54:25.874578253Z" level=info msg="StopPodSandbox for \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\""
Aug 12 23:54:25.874718 containerd[1520]: time="2025-08-12T23:54:25.874649879Z" level=info msg="TearDown network for sandbox \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\" successfully"
Aug 12 23:54:25.874718 containerd[1520]: time="2025-08-12T23:54:25.874660068Z" level=info msg="StopPodSandbox for \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\" returns successfully"
Aug 12 23:54:25.874952 containerd[1520]: time="2025-08-12T23:54:25.874920501Z" level=info msg="RemovePodSandbox for \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\""
Aug 12 23:54:25.874952 containerd[1520]: time="2025-08-12T23:54:25.874947693Z" level=info msg="Forcibly stopping sandbox \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\""
Aug 12 23:54:25.875027 containerd[1520]: time="2025-08-12T23:54:25.874995894Z" level=info msg="TearDown network for sandbox \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\" successfully"
Aug 12 23:54:25.878350 containerd[1520]: time="2025-08-12T23:54:25.878301061Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 12 23:54:25.878350 containerd[1520]: time="2025-08-12T23:54:25.878336588Z" level=info msg="RemovePodSandbox \"6ca11e355c1190297151a7d199bb3aa028dcc6c0f770cbb83051b697aae015e4\" returns successfully"
Aug 12 23:54:28.439429 sshd[4521]: Connection closed by 10.0.0.1 port 46848
Aug 12 23:54:28.440161 sshd-session[4517]: pam_unix(sshd:session): session closed for user core
Aug 12 23:54:28.444614 systemd[1]: sshd@28-10.0.0.30:22-10.0.0.1:46848.service: Deactivated successfully.
Aug 12 23:54:28.446758 systemd[1]: session-29.scope: Deactivated successfully.
Aug 12 23:54:28.447543 systemd-logind[1505]: Session 29 logged out. Waiting for processes to exit.
Aug 12 23:54:28.448612 systemd-logind[1505]: Removed session 29.