Apr 30 12:39:48.953743 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:26:36 -00 2025
Apr 30 12:39:48.953784 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe
Apr 30 12:39:48.953798 kernel: BIOS-provided physical RAM map:
Apr 30 12:39:48.953807 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 12:39:48.953814 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 30 12:39:48.953822 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 30 12:39:48.953832 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 30 12:39:48.953840 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 30 12:39:48.953848 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 30 12:39:48.953856 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 30 12:39:48.953865 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 30 12:39:48.953876 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 30 12:39:48.953900 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 30 12:39:48.953916 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 30 12:39:48.953930 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 30 12:39:48.953939 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 30 12:39:48.953952 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Apr 30 12:39:48.954554 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Apr 30 12:39:48.954569 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Apr 30 12:39:48.954578 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Apr 30 12:39:48.954587 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 30 12:39:48.954598 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 30 12:39:48.954607 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 30 12:39:48.954623 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 12:39:48.954632 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 30 12:39:48.954640 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 30 12:39:48.954649 kernel: NX (Execute Disable) protection: active
Apr 30 12:39:48.954662 kernel: APIC: Static calls initialized
Apr 30 12:39:48.954674 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Apr 30 12:39:48.954685 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Apr 30 12:39:48.954693 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Apr 30 12:39:48.954704 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Apr 30 12:39:48.954712 kernel: extended physical RAM map:
Apr 30 12:39:48.954721 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 12:39:48.954729 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 30 12:39:48.954738 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 30 12:39:48.954747 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 30 12:39:48.954755 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 30 12:39:48.954764 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 30 12:39:48.954776 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 30 12:39:48.954790 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Apr 30 12:39:48.954799 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Apr 30 12:39:48.954808 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Apr 30 12:39:48.954816 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Apr 30 12:39:48.954825 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Apr 30 12:39:48.954841 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 30 12:39:48.954850 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 30 12:39:48.954865 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 30 12:39:48.954880 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 30 12:39:48.955023 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 30 12:39:48.955032 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Apr 30 12:39:48.955041 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Apr 30 12:39:48.955050 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Apr 30 12:39:48.955067 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Apr 30 12:39:48.955081 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 30 12:39:48.955095 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 30 12:39:48.955105 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 30 12:39:48.955117 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 12:39:48.955129 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 30 12:39:48.955138 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 30 12:39:48.955147 kernel: efi: EFI v2.7 by EDK II
Apr 30 12:39:48.955156 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Apr 30 12:39:48.955165 kernel: random: crng init done
Apr 30 12:39:48.955174 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 30 12:39:48.955183 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 30 12:39:48.955194 kernel: secureboot: Secure boot disabled
Apr 30 12:39:48.955207 kernel: SMBIOS 2.8 present.
Apr 30 12:39:48.955216 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 30 12:39:48.955225 kernel: Hypervisor detected: KVM
Apr 30 12:39:48.955234 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 12:39:48.955243 kernel: kvm-clock: using sched offset of 4090574258 cycles
Apr 30 12:39:48.955253 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 12:39:48.955262 kernel: tsc: Detected 2794.748 MHz processor
Apr 30 12:39:48.955272 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 12:39:48.955281 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 12:39:48.955291 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Apr 30 12:39:48.955303 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 12:39:48.955312 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 12:39:48.955321 kernel: Using GB pages for direct mapping
Apr 30 12:39:48.955331 kernel: ACPI: Early table checksum verification disabled
Apr 30 12:39:48.955340 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 30 12:39:48.955350 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 30 12:39:48.955359 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:39:48.955369 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:39:48.955380 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 30 12:39:48.955392 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:39:48.955413 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:39:48.955424 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:39:48.955434 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:39:48.955443 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 30 12:39:48.955452 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 30 12:39:48.955461 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Apr 30 12:39:48.955471 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 30 12:39:48.955484 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 30 12:39:48.955493 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 30 12:39:48.955502 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 30 12:39:48.955511 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 30 12:39:48.955520 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 30 12:39:48.955529 kernel: No NUMA configuration found
Apr 30 12:39:48.955539 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 30 12:39:48.955548 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Apr 30 12:39:48.955557 kernel: Zone ranges:
Apr 30 12:39:48.955566 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 12:39:48.955579 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 30 12:39:48.955594 kernel: Normal empty
Apr 30 12:39:48.955616 kernel: Movable zone start for each node
Apr 30 12:39:48.955637 kernel: Early memory node ranges
Apr 30 12:39:48.955648 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 12:39:48.955657 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 30 12:39:48.955666 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 30 12:39:48.955676 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 30 12:39:48.955685 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 30 12:39:48.955698 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 30 12:39:48.955708 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Apr 30 12:39:48.955717 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Apr 30 12:39:48.955726 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 30 12:39:48.955735 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 12:39:48.955744 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 12:39:48.955764 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 30 12:39:48.955786 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 12:39:48.955800 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 30 12:39:48.955815 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 30 12:39:48.955824 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 30 12:39:48.955837 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 30 12:39:48.955859 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 30 12:39:48.955869 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 12:39:48.955878 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 12:39:48.955907 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 12:39:48.955917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 12:39:48.955933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 12:39:48.955943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 12:39:48.955953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 12:39:48.955963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 12:39:48.955972 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 12:39:48.955982 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 12:39:48.955992 kernel: TSC deadline timer available
Apr 30 12:39:48.956001 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 30 12:39:48.956017 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 12:39:48.956031 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 30 12:39:48.956040 kernel: kvm-guest: setup PV sched yield
Apr 30 12:39:48.956062 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 30 12:39:48.956073 kernel: Booting paravirtualized kernel on KVM
Apr 30 12:39:48.956083 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 12:39:48.956093 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 30 12:39:48.956103 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Apr 30 12:39:48.956113 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Apr 30 12:39:48.956122 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 30 12:39:48.956135 kernel: kvm-guest: PV spinlocks enabled
Apr 30 12:39:48.956145 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 12:39:48.956157 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe
Apr 30 12:39:48.956167 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 12:39:48.956176 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 12:39:48.956189 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 12:39:48.956199 kernel: Fallback order for Node 0: 0
Apr 30 12:39:48.956209 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Apr 30 12:39:48.956224 kernel: Policy zone: DMA32
Apr 30 12:39:48.956234 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 12:39:48.956244 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 177824K reserved, 0K cma-reserved)
Apr 30 12:39:48.956254 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 30 12:39:48.956264 kernel: ftrace: allocating 37918 entries in 149 pages
Apr 30 12:39:48.956273 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 12:39:48.956283 kernel: Dynamic Preempt: voluntary
Apr 30 12:39:48.956293 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 12:39:48.956303 kernel: rcu: RCU event tracing is enabled.
Apr 30 12:39:48.956316 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 30 12:39:48.956326 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 12:39:48.956336 kernel: Rude variant of Tasks RCU enabled.
Apr 30 12:39:48.956346 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 12:39:48.956358 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 12:39:48.956368 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 30 12:39:48.956378 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 30 12:39:48.956387 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 12:39:48.956397 kernel: Console: colour dummy device 80x25
Apr 30 12:39:48.956421 kernel: printk: console [ttyS0] enabled
Apr 30 12:39:48.956432 kernel: ACPI: Core revision 20230628
Apr 30 12:39:48.956442 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 12:39:48.956451 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 12:39:48.956471 kernel: x2apic enabled
Apr 30 12:39:48.956491 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 12:39:48.956514 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 30 12:39:48.956534 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 30 12:39:48.956568 kernel: kvm-guest: setup PV IPIs
Apr 30 12:39:48.956589 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 12:39:48.956620 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 30 12:39:48.956649 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Apr 30 12:39:48.956660 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 30 12:39:48.956675 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 30 12:39:48.956685 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 30 12:39:48.956695 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 12:39:48.956705 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 12:39:48.956715 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 12:39:48.956731 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 12:39:48.956741 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 30 12:39:48.956751 kernel: RETBleed: Mitigation: untrained return thunk
Apr 30 12:39:48.956761 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 12:39:48.956771 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 12:39:48.956780 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 30 12:39:48.956791 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 30 12:39:48.956804 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 30 12:39:48.956814 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 12:39:48.956827 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 12:39:48.956843 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 12:39:48.956852 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 12:39:48.956862 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Apr 30 12:39:48.956872 kernel: Freeing SMP alternatives memory: 32K
Apr 30 12:39:48.956882 kernel: pid_max: default: 32768 minimum: 301
Apr 30 12:39:48.956904 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 12:39:48.956914 kernel: landlock: Up and running.
Apr 30 12:39:48.956924 kernel: SELinux: Initializing.
Apr 30 12:39:48.956938 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 12:39:48.956948 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 12:39:48.956958 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 30 12:39:48.956968 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 12:39:48.956978 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 12:39:48.956988 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 12:39:48.956997 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 30 12:39:48.957007 kernel: ... version: 0
Apr 30 12:39:48.957022 kernel: ... bit width: 48
Apr 30 12:39:48.957032 kernel: ... generic registers: 6
Apr 30 12:39:48.957042 kernel: ... value mask: 0000ffffffffffff
Apr 30 12:39:48.957051 kernel: ... max period: 00007fffffffffff
Apr 30 12:39:48.957068 kernel: ... fixed-purpose events: 0
Apr 30 12:39:48.957078 kernel: ... event mask: 000000000000003f
Apr 30 12:39:48.957087 kernel: signal: max sigframe size: 1776
Apr 30 12:39:48.957097 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 12:39:48.957107 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 12:39:48.957117 kernel: smp: Bringing up secondary CPUs ...
Apr 30 12:39:48.957133 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 12:39:48.957142 kernel: .... node #0, CPUs: #1 #2 #3
Apr 30 12:39:48.957152 kernel: smp: Brought up 1 node, 4 CPUs
Apr 30 12:39:48.957162 kernel: smpboot: Max logical packages: 1
Apr 30 12:39:48.957172 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Apr 30 12:39:48.957181 kernel: devtmpfs: initialized
Apr 30 12:39:48.957191 kernel: x86/mm: Memory block size: 128MB
Apr 30 12:39:48.957201 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 30 12:39:48.957211 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 30 12:39:48.957224 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 30 12:39:48.957234 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 30 12:39:48.957244 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Apr 30 12:39:48.957254 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 30 12:39:48.957264 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 12:39:48.957274 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 30 12:39:48.957284 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 12:39:48.957293 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 12:39:48.957306 kernel: audit: initializing netlink subsys (disabled)
Apr 30 12:39:48.957316 kernel: audit: type=2000 audit(1746016788.618:1): state=initialized audit_enabled=0 res=1
Apr 30 12:39:48.957326 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 12:39:48.957335 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 12:39:48.957345 kernel: cpuidle: using governor menu
Apr 30 12:39:48.957355 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 12:39:48.957364 kernel: dca service started, version 1.12.1
Apr 30 12:39:48.957391 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 30 12:39:48.957402 kernel: PCI: Using configuration type 1 for base access
Apr 30 12:39:48.957416 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 12:39:48.957426 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 12:39:48.957436 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 12:39:48.957446 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 12:39:48.957456 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 12:39:48.957465 kernel: ACPI: Added _OSI(Module Device)
Apr 30 12:39:48.957475 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 12:39:48.957485 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 12:39:48.957494 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 12:39:48.957507 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 12:39:48.957517 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 12:39:48.957527 kernel: ACPI: Interpreter enabled
Apr 30 12:39:48.957536 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 30 12:39:48.957552 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 12:39:48.957562 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 12:39:48.957572 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 12:39:48.957581 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 30 12:39:48.957591 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 12:39:48.958409 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 12:39:48.958605 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 30 12:39:48.958757 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 30 12:39:48.958769 kernel: PCI host bridge to bus 0000:00
Apr 30 12:39:48.958959 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 12:39:48.959110 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 12:39:48.959265 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 12:39:48.959412 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 30 12:39:48.959545 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 30 12:39:48.959678 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 30 12:39:48.959874 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 12:39:48.960106 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 30 12:39:48.960282 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 30 12:39:48.960436 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 30 12:39:48.960610 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 30 12:39:48.960775 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 30 12:39:48.960946 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 30 12:39:48.961108 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 12:39:48.961321 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 30 12:39:48.961473 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 30 12:39:48.961632 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 30 12:39:48.961779 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 30 12:39:48.961985 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 30 12:39:48.962150 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 30 12:39:48.962304 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 30 12:39:48.962452 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 30 12:39:48.962658 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 30 12:39:48.962818 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 30 12:39:48.962980 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 30 12:39:48.963141 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 30 12:39:48.963302 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 30 12:39:48.963478 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 30 12:39:48.963628 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 30 12:39:48.963822 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 30 12:39:48.963995 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 30 12:39:48.964153 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 30 12:39:48.964335 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 30 12:39:48.964490 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 30 12:39:48.964503 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 12:39:48.964513 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 12:39:48.964523 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 12:39:48.964538 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 12:39:48.964548 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 30 12:39:48.964558 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 30 12:39:48.964567 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 30 12:39:48.964577 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 30 12:39:48.964587 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 30 12:39:48.964596 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 30 12:39:48.964606 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 30 12:39:48.964615 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 30 12:39:48.964629 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 30 12:39:48.964638 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 30 12:39:48.964648 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 30 12:39:48.964658 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 30 12:39:48.964667 kernel: iommu: Default domain type: Translated
Apr 30 12:39:48.964677 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 12:39:48.964687 kernel: efivars: Registered efivars operations
Apr 30 12:39:48.964697 kernel: PCI: Using ACPI for IRQ routing
Apr 30 12:39:48.964707 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 12:39:48.964720 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 30 12:39:48.964729 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 30 12:39:48.964738 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Apr 30 12:39:48.964748 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Apr 30 12:39:48.964758 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 30 12:39:48.964767 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 30 12:39:48.964777 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Apr 30 12:39:48.964786 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 30 12:39:48.964953 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 30 12:39:48.965112 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 30 12:39:48.965264 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 12:39:48.965276 kernel: vgaarb: loaded
Apr 30 12:39:48.965286 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 12:39:48.965296 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 12:39:48.965306 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 12:39:48.965316 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 12:39:48.965326 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 12:39:48.965341 kernel: pnp: PnP ACPI init
Apr 30 12:39:48.965527 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 30 12:39:48.965542 kernel: pnp: PnP ACPI: found 6 devices
Apr 30 12:39:48.965552 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 12:39:48.965562 kernel: NET: Registered PF_INET protocol family
Apr 30 12:39:48.965596 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 12:39:48.965610 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 12:39:48.965620 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 12:39:48.965634 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 12:39:48.965644 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 12:39:48.965654 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 12:39:48.965665 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 12:39:48.965675 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 12:39:48.965685 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 12:39:48.965695 kernel: NET: Registered PF_XDP protocol family
Apr 30 12:39:48.965849 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 30 12:39:48.966079 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 30 12:39:48.966219 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 12:39:48.966356 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 12:39:48.966487 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 12:39:48.966618 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 30 12:39:48.966749 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 30 12:39:48.966879 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 30 12:39:48.966905 kernel: PCI: CLS 0 bytes, default 64
Apr 30 12:39:48.966921 kernel: Initialise system trusted keyrings
Apr 30 12:39:48.966931 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 12:39:48.966941 kernel: Key type asymmetric registered
Apr 30 12:39:48.966951 kernel: Asymmetric key parser 'x509' registered
Apr 30 12:39:48.966962 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 12:39:48.966972 kernel: io scheduler mq-deadline registered
Apr 30 12:39:48.966982 kernel: io scheduler kyber registered
Apr 30 12:39:48.966992 kernel: io scheduler bfq registered
Apr 30 12:39:48.967002 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 12:39:48.967016 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 30 12:39:48.967027 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 30 12:39:48.967040 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 30 12:39:48.967051 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 12:39:48.967070 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 12:39:48.967080 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 12:39:48.967094 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 12:39:48.967104 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 12:39:48.967303 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 30 12:39:48.967447 kernel: rtc_cmos 00:04: registered as rtc0
Apr 30 12:39:48.967583 kernel: rtc_cmos 00:04: setting system clock to 2025-04-30T12:39:48 UTC (1746016788)
Apr 30 12:39:48.967719 kernel: rtc_cmos 00:04: alarms up
to one day, y3k, 242 bytes nvram Apr 30 12:39:48.967732 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Apr 30 12:39:48.967742 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 30 12:39:48.967758 kernel: efifb: probing for efifb Apr 30 12:39:48.967768 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Apr 30 12:39:48.967778 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 30 12:39:48.967788 kernel: efifb: scrolling: redraw Apr 30 12:39:48.967798 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 30 12:39:48.967808 kernel: Console: switching to colour frame buffer device 160x50 Apr 30 12:39:48.967819 kernel: fb0: EFI VGA frame buffer device Apr 30 12:39:48.967829 kernel: pstore: Using crash dump compression: deflate Apr 30 12:39:48.967839 kernel: pstore: Registered efi_pstore as persistent store backend Apr 30 12:39:48.967853 kernel: NET: Registered PF_INET6 protocol family Apr 30 12:39:48.967863 kernel: Segment Routing with IPv6 Apr 30 12:39:48.967873 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 12:39:48.967883 kernel: NET: Registered PF_PACKET protocol family Apr 30 12:39:48.967907 kernel: Key type dns_resolver registered Apr 30 12:39:48.967938 kernel: IPI shorthand broadcast: enabled Apr 30 12:39:48.967949 kernel: sched_clock: Marking stable (1238005959, 163397191)->(1428684644, -27281494) Apr 30 12:39:48.967959 kernel: registered taskstats version 1 Apr 30 12:39:48.967969 kernel: Loading compiled-in X.509 certificates Apr 30 12:39:48.967984 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 10d2d341d26c1df942e743344427c053ef3a2a5f' Apr 30 12:39:48.967994 kernel: Key type .fscrypt registered Apr 30 12:39:48.968004 kernel: Key type fscrypt-provisioning registered Apr 30 12:39:48.968015 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 30 12:39:48.968025 kernel: ima: Allocated hash algorithm: sha1 Apr 30 12:39:48.968035 kernel: ima: No architecture policies found Apr 30 12:39:48.968045 kernel: clk: Disabling unused clocks Apr 30 12:39:48.968064 kernel: Freeing unused kernel image (initmem) memory: 43484K Apr 30 12:39:48.968074 kernel: Write protecting the kernel read-only data: 38912k Apr 30 12:39:48.968088 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K Apr 30 12:39:48.968098 kernel: Run /init as init process Apr 30 12:39:48.968108 kernel: with arguments: Apr 30 12:39:48.968118 kernel: /init Apr 30 12:39:48.968128 kernel: with environment: Apr 30 12:39:48.968138 kernel: HOME=/ Apr 30 12:39:48.968148 kernel: TERM=linux Apr 30 12:39:48.968157 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 12:39:48.968176 systemd[1]: Successfully made /usr/ read-only. Apr 30 12:39:48.968194 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 12:39:48.968206 systemd[1]: Detected virtualization kvm. Apr 30 12:39:48.968217 systemd[1]: Detected architecture x86-64. Apr 30 12:39:48.968227 systemd[1]: Running in initrd. Apr 30 12:39:48.968238 systemd[1]: No hostname configured, using default hostname. Apr 30 12:39:48.968249 systemd[1]: Hostname set to . Apr 30 12:39:48.968260 systemd[1]: Initializing machine ID from VM UUID. Apr 30 12:39:48.968274 systemd[1]: Queued start job for default target initrd.target. Apr 30 12:39:48.968285 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:39:48.968296 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 30 12:39:48.968308 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 12:39:48.968319 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 12:39:48.968330 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 12:39:48.968342 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 12:39:48.968357 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 12:39:48.968368 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 12:39:48.968379 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:39:48.968389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:39:48.968400 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:39:48.968411 systemd[1]: Reached target slices.target - Slice Units. Apr 30 12:39:48.968422 systemd[1]: Reached target swap.target - Swaps. Apr 30 12:39:48.968432 systemd[1]: Reached target timers.target - Timer Units. Apr 30 12:39:48.968446 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 12:39:48.968457 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 12:39:48.968467 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 12:39:48.968478 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 30 12:39:48.968489 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:39:48.968500 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 12:39:48.968511 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 30 12:39:48.968522 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:39:48.968532 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 12:39:48.968547 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 12:39:48.968557 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 12:39:48.968568 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 12:39:48.968579 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 12:39:48.968589 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 12:39:48.968600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:39:48.968610 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 12:39:48.968621 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:39:48.968635 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 12:39:48.968646 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 12:39:48.968692 systemd-journald[194]: Collecting audit messages is disabled. Apr 30 12:39:48.968722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:39:48.968733 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:39:48.968745 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:39:48.968755 systemd-journald[194]: Journal started Apr 30 12:39:48.968785 systemd-journald[194]: Runtime Journal (/run/log/journal/53654af77a5b4e52a662690bfeebf89e) is 6M, max 48.2M, 42.2M free. Apr 30 12:39:48.958512 systemd-modules-load[195]: Inserted module 'overlay' Apr 30 12:39:48.973911 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 30 12:39:48.982937 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 12:39:48.983725 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:39:48.997141 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 12:39:48.999571 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 30 12:39:49.000613 kernel: Bridge firewalling registered Apr 30 12:39:49.000759 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 12:39:49.001131 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:39:49.003413 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:39:49.003933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:39:49.011635 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:39:49.014384 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 12:39:49.016444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:39:49.020707 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 12:39:49.037636 dracut-cmdline[227]: dracut-dracut-053 Apr 30 12:39:49.041694 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 12:39:49.060109 systemd-resolved[230]: Positive Trust Anchors: Apr 30 12:39:49.060133 systemd-resolved[230]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:39:49.060165 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:39:49.063134 systemd-resolved[230]: Defaulting to hostname 'linux'. Apr 30 12:39:49.064574 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:39:49.070231 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:39:49.154925 kernel: SCSI subsystem initialized Apr 30 12:39:49.164912 kernel: Loading iSCSI transport class v2.0-870. Apr 30 12:39:49.175914 kernel: iscsi: registered transport (tcp) Apr 30 12:39:49.196058 kernel: iscsi: registered transport (qla4xxx) Apr 30 12:39:49.196090 kernel: QLogic iSCSI HBA Driver Apr 30 12:39:49.245552 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 12:39:49.254104 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 12:39:49.278344 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 30 12:39:49.278377 kernel: device-mapper: uevent: version 1.0.3 Apr 30 12:39:49.279395 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 12:39:49.322920 kernel: raid6: avx2x4 gen() 29083 MB/s Apr 30 12:39:49.339915 kernel: raid6: avx2x2 gen() 31554 MB/s Apr 30 12:39:49.356986 kernel: raid6: avx2x1 gen() 26025 MB/s Apr 30 12:39:49.357014 kernel: raid6: using algorithm avx2x2 gen() 31554 MB/s Apr 30 12:39:49.375002 kernel: raid6: .... xor() 19998 MB/s, rmw enabled Apr 30 12:39:49.375024 kernel: raid6: using avx2x2 recovery algorithm Apr 30 12:39:49.395918 kernel: xor: automatically using best checksumming function avx Apr 30 12:39:49.547924 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 12:39:49.561729 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 12:39:49.570100 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:39:49.585415 systemd-udevd[415]: Using default interface naming scheme 'v255'. Apr 30 12:39:49.591229 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:39:49.605080 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 12:39:49.620055 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Apr 30 12:39:49.654101 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 12:39:49.665047 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 12:39:49.743131 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:39:49.754072 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 12:39:49.765934 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 12:39:49.769098 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Apr 30 12:39:49.771924 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:39:49.774435 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 12:39:49.781332 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 30 12:39:49.799966 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 30 12:39:49.800515 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 12:39:49.800533 kernel: GPT:9289727 != 19775487 Apr 30 12:39:49.800547 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 12:39:49.800561 kernel: GPT:9289727 != 19775487 Apr 30 12:39:49.800574 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 12:39:49.800588 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 12:39:49.784657 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 12:39:49.800405 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 12:39:49.814955 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 12:39:49.822921 kernel: libata version 3.00 loaded. Apr 30 12:39:49.829005 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 12:39:49.829049 kernel: AES CTR mode by8 optimization enabled Apr 30 12:39:49.829668 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 12:39:49.830164 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:39:49.836135 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 30 12:39:49.849986 kernel: ahci 0000:00:1f.2: version 3.0 Apr 30 12:39:49.870626 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 30 12:39:49.870644 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 30 12:39:49.870854 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 30 12:39:49.871057 kernel: BTRFS: device fsid 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (467) Apr 30 12:39:49.871071 kernel: scsi host0: ahci Apr 30 12:39:49.871249 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (466) Apr 30 12:39:49.871261 kernel: scsi host1: ahci Apr 30 12:39:49.871423 kernel: scsi host2: ahci Apr 30 12:39:49.871578 kernel: scsi host3: ahci Apr 30 12:39:49.871736 kernel: scsi host4: ahci Apr 30 12:39:49.871941 kernel: scsi host5: ahci Apr 30 12:39:49.872116 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 30 12:39:49.872131 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 30 12:39:49.872145 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 30 12:39:49.872162 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 30 12:39:49.872173 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 30 12:39:49.872184 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 30 12:39:49.839480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:39:49.839679 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:39:49.845435 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:39:49.860229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:39:49.879071 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Apr 30 12:39:49.904194 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 30 12:39:49.918996 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 12:39:49.920256 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 12:39:49.931233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 12:39:49.944022 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 12:39:49.945203 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:39:49.945261 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:39:49.949368 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:39:49.952008 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:39:49.953855 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 30 12:39:49.958233 disk-uuid[559]: Primary Header is updated. Apr 30 12:39:49.958233 disk-uuid[559]: Secondary Entries is updated. Apr 30 12:39:49.958233 disk-uuid[559]: Secondary Header is updated. Apr 30 12:39:49.961856 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 12:39:49.965915 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 12:39:49.974728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:39:49.988168 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:39:50.012360 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 12:39:50.178932 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 12:39:50.179035 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 30 12:39:50.179927 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 12:39:50.180907 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 12:39:50.180934 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 30 12:39:50.181910 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 30 12:39:50.182922 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 30 12:39:50.183926 kernel: ata3.00: applying bridge limits Apr 30 12:39:50.184914 kernel: ata3.00: configured for UDMA/100 Apr 30 12:39:50.184940 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 30 12:39:50.231930 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 30 12:39:50.245541 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 12:39:50.245557 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 30 12:39:50.967928 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 12:39:50.968556 disk-uuid[561]: The operation has completed successfully. Apr 30 12:39:51.000682 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 12:39:51.000833 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 12:39:51.056060 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 12:39:51.059706 sh[600]: Success Apr 30 12:39:51.072934 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 30 12:39:51.109538 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 12:39:51.124533 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 12:39:51.127415 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 12:39:51.138101 kernel: BTRFS info (device dm-0): first mount of filesystem 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 Apr 30 12:39:51.138132 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:39:51.138144 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 12:39:51.139132 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 12:39:51.140514 kernel: BTRFS info (device dm-0): using free space tree Apr 30 12:39:51.144823 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 12:39:51.147168 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 12:39:51.159053 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 12:39:51.161763 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 12:39:51.180539 kernel: BTRFS info (device vda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:39:51.180596 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:39:51.180611 kernel: BTRFS info (device vda6): using free space tree Apr 30 12:39:51.182918 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 12:39:51.187912 kernel: BTRFS info (device vda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:39:51.194039 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 12:39:51.203079 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 12:39:51.375067 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 30 12:39:51.382260 ignition[688]: Ignition 2.20.0 Apr 30 12:39:51.382275 ignition[688]: Stage: fetch-offline Apr 30 12:39:51.386407 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:39:51.382328 ignition[688]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:39:51.382339 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 12:39:51.382477 ignition[688]: parsed url from cmdline: "" Apr 30 12:39:51.382483 ignition[688]: no config URL provided Apr 30 12:39:51.382489 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:39:51.382504 ignition[688]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:39:51.382544 ignition[688]: op(1): [started] loading QEMU firmware config module Apr 30 12:39:51.382551 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 30 12:39:51.393055 ignition[688]: op(1): [finished] loading QEMU firmware config module Apr 30 12:39:51.430156 systemd-networkd[785]: lo: Link UP Apr 30 12:39:51.430168 systemd-networkd[785]: lo: Gained carrier Apr 30 12:39:51.433719 systemd-networkd[785]: Enumeration completed Apr 30 12:39:51.433947 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:39:51.437053 systemd[1]: Reached target network.target - Network. Apr 30 12:39:51.438974 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:39:51.438982 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:39:51.443410 systemd-networkd[785]: eth0: Link UP Apr 30 12:39:51.443423 systemd-networkd[785]: eth0: Gained carrier Apr 30 12:39:51.443432 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 30 12:39:51.448631 ignition[688]: parsing config with SHA512: 1cbeced998f12e503a1e458cd317a295b05f7d0f00aa48e2f90f60a2e3ea5d647b2b201d7b63ebc73477e34c92c9ba9f6e6a8457fe4061c653470461ac254a49 Apr 30 12:39:51.456521 unknown[688]: fetched base config from "system" Apr 30 12:39:51.456721 unknown[688]: fetched user config from "qemu" Apr 30 12:39:51.460969 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 12:39:51.462055 ignition[688]: fetch-offline: fetch-offline passed Apr 30 12:39:51.462281 ignition[688]: Ignition finished successfully Apr 30 12:39:51.464804 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:39:51.466780 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 12:39:51.478046 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 12:39:51.502874 ignition[793]: Ignition 2.20.0 Apr 30 12:39:51.502910 ignition[793]: Stage: kargs Apr 30 12:39:51.503088 ignition[793]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:39:51.503101 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 12:39:51.506884 ignition[793]: kargs: kargs passed Apr 30 12:39:51.507558 ignition[793]: Ignition finished successfully Apr 30 12:39:51.511706 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 12:39:51.523159 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 30 12:39:51.588861 ignition[801]: Ignition 2.20.0 Apr 30 12:39:51.588874 ignition[801]: Stage: disks Apr 30 12:39:51.589116 ignition[801]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:39:51.589133 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 12:39:51.593006 ignition[801]: disks: disks passed Apr 30 12:39:51.593066 ignition[801]: Ignition finished successfully Apr 30 12:39:51.596556 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 12:39:51.596879 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 12:39:51.599653 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 12:39:51.599877 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:39:51.603904 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:39:51.604142 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:39:51.618104 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 12:39:51.634971 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 12:39:51.642814 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 12:39:51.652103 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 12:39:51.749926 kernel: EXT4-fs (vda9): mounted filesystem 59d16236-967d-47d1-a9bd-4b055a17ab77 r/w with ordered data mode. Quota mode: none. Apr 30 12:39:51.750434 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 12:39:51.752128 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 12:39:51.763006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:39:51.765042 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 12:39:51.766353 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 30 12:39:51.766398 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 12:39:51.775011 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (819) Apr 30 12:39:51.775042 kernel: BTRFS info (device vda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:39:51.766422 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:39:51.780728 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:39:51.780748 kernel: BTRFS info (device vda6): using free space tree Apr 30 12:39:51.780759 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 12:39:51.773055 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 12:39:51.787043 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 12:39:51.789388 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 12:39:51.820098 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 12:39:51.825566 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Apr 30 12:39:51.829647 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 12:39:51.833838 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 12:39:51.934947 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 12:39:51.949065 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 12:39:51.951129 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 12:39:51.957920 kernel: BTRFS info (device vda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:39:51.976683 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 30 12:39:51.992233 ignition[932]: INFO : Ignition 2.20.0 Apr 30 12:39:51.992233 ignition[932]: INFO : Stage: mount Apr 30 12:39:51.994006 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:39:51.994006 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 12:39:51.996605 ignition[932]: INFO : mount: mount passed Apr 30 12:39:51.997400 ignition[932]: INFO : Ignition finished successfully Apr 30 12:39:52.000312 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 12:39:52.006100 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 12:39:52.137762 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 12:39:52.155115 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:39:52.163917 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (946) Apr 30 12:39:52.163952 kernel: BTRFS info (device vda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:39:52.163965 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:39:52.166000 kernel: BTRFS info (device vda6): using free space tree Apr 30 12:39:52.168910 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 12:39:52.170227 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 12:39:52.207731 ignition[963]: INFO : Ignition 2.20.0
Apr 30 12:39:52.207731 ignition[963]: INFO : Stage: files
Apr 30 12:39:52.209663 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:39:52.209663 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 12:39:52.209663 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 12:39:52.213844 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 12:39:52.213844 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 12:39:52.217276 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 12:39:52.217276 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 12:39:52.217276 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 12:39:52.216660 unknown[963]: wrote ssh authorized keys file for user: core
Apr 30 12:39:52.223176 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 12:39:52.223176 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 12:39:52.259934 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 12:39:52.464370 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 12:39:52.464370 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 12:39:52.468814 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 12:39:53.063292 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 12:39:53.313188 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 12:39:53.315422 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 12:39:53.315422 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 12:39:53.315422 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 12:39:53.315422 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 12:39:53.315422 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 12:39:53.324014 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 12:39:53.324014 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 12:39:53.324014 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 12:39:53.324014 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 12:39:53.324014 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 12:39:53.324014 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 12:39:53.324014 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 12:39:53.324014 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 12:39:53.324014 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Apr 30 12:39:53.324131 systemd-networkd[785]: eth0: Gained IPv6LL
Apr 30 12:39:53.819461 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 12:39:55.154554 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 12:39:55.154554 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 12:39:55.159031 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 12:39:55.161085 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 12:39:55.161085 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 12:39:55.164183 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 30 12:39:55.164183 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 12:39:55.167332 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 12:39:55.167332 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 30 12:39:55.170457 ignition[963]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 30 12:39:55.331388 ignition[963]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 12:39:55.337388 ignition[963]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 12:39:55.339377 ignition[963]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 30 12:39:55.339377 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 12:39:55.339377 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 12:39:55.339377 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 12:39:55.339377 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 12:39:55.339377 ignition[963]: INFO : files: files passed
Apr 30 12:39:55.339377 ignition[963]: INFO : Ignition finished successfully
Apr 30 12:39:55.351692 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 12:39:55.361120 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 12:39:55.364393 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 12:39:55.367674 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 12:39:55.368918 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 12:39:55.375738 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 30 12:39:55.380217 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:39:55.380217 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:39:55.383912 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:39:55.385032 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 12:39:55.387786 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 12:39:55.398082 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 12:39:55.423862 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 12:39:55.424069 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 12:39:55.427058 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 12:39:55.428578 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 12:39:55.430688 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 12:39:55.438084 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 12:39:55.453117 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 12:39:55.466235 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 12:39:55.478368 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:39:55.478633 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:39:55.482034 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 12:39:55.483266 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 12:39:55.483447 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 12:39:55.485461 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 12:39:55.485778 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 12:39:55.486458 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 12:39:55.486771 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 12:39:55.487278 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 12:39:55.487605 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 12:39:55.487957 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 12:39:55.488444 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 12:39:55.488765 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 12:39:55.489281 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 12:39:55.489567 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 12:39:55.489748 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 12:39:55.509104 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:39:55.509307 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:39:55.511311 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 12:39:55.511475 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:39:55.513580 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 12:39:55.513751 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 12:39:55.516154 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 12:39:55.516313 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 12:39:55.519797 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 12:39:55.521738 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 12:39:55.525004 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:39:55.526777 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 12:39:55.529110 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 12:39:55.530947 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 12:39:55.531059 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 12:39:55.532819 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 12:39:55.532926 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 12:39:55.534719 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 12:39:55.534851 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 12:39:55.536859 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 12:39:55.536990 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 12:39:55.548039 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 12:39:55.549742 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 12:39:55.550985 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 12:39:55.551110 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:39:55.553552 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 12:39:55.553661 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 12:39:55.561751 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 12:39:55.561926 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 12:39:55.566093 ignition[1018]: INFO : Ignition 2.20.0
Apr 30 12:39:55.566093 ignition[1018]: INFO : Stage: umount
Apr 30 12:39:55.566093 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:39:55.566093 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 12:39:55.566093 ignition[1018]: INFO : umount: umount passed
Apr 30 12:39:55.566093 ignition[1018]: INFO : Ignition finished successfully
Apr 30 12:39:55.566674 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 12:39:55.566808 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 12:39:55.569042 systemd[1]: Stopped target network.target - Network.
Apr 30 12:39:55.570214 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 12:39:55.570287 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 12:39:55.572639 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 12:39:55.572691 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 12:39:55.575253 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 12:39:55.575307 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 12:39:55.577392 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 12:39:55.577445 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 12:39:55.579843 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 12:39:55.582187 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 12:39:55.585782 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 12:39:55.587452 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 12:39:55.587578 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 12:39:55.590318 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 12:39:55.590459 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 12:39:55.594574 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 30 12:39:55.594821 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 12:39:55.594972 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 12:39:55.598331 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 30 12:39:55.599330 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 12:39:55.599407 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:39:55.600964 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 12:39:55.601044 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 12:39:55.618979 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 12:39:55.620071 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 12:39:55.620133 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 12:39:55.622479 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 12:39:55.622530 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:39:55.624698 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 12:39:55.624754 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:39:55.627132 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 12:39:55.627181 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:39:55.628482 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:39:55.632021 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 30 12:39:55.632093 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:39:55.640810 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 12:39:55.641000 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 12:39:55.646799 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 12:39:55.647044 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:39:55.649350 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 12:39:55.649413 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:39:55.651364 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 12:39:55.651413 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:39:55.653385 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 12:39:55.653449 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 12:39:55.655513 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 12:39:55.655574 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 12:39:55.657556 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 12:39:55.657624 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:39:55.668031 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 12:39:55.669207 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 12:39:55.669282 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:39:55.671953 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 12:39:55.672025 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 12:39:55.674535 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 12:39:55.674603 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:39:55.677344 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:39:55.677414 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:39:55.681041 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 30 12:39:55.681128 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:39:55.681591 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 12:39:55.681736 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 12:39:55.683688 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 12:39:55.692038 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 12:39:55.700017 systemd[1]: Switching root.
Apr 30 12:39:55.730622 systemd-journald[194]: Journal stopped
Apr 30 12:39:57.428046 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 30 12:39:57.428137 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 12:39:57.428154 kernel: SELinux: policy capability open_perms=1
Apr 30 12:39:57.428168 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 12:39:57.428182 kernel: SELinux: policy capability always_check_network=0
Apr 30 12:39:57.428205 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 12:39:57.428219 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 12:39:57.428233 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 12:39:57.428246 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 12:39:57.428261 kernel: audit: type=1403 audit(1746016796.548:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 12:39:57.428276 systemd[1]: Successfully loaded SELinux policy in 42.012ms.
Apr 30 12:39:57.428311 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.079ms.
Apr 30 12:39:57.428327 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 30 12:39:57.428343 systemd[1]: Detected virtualization kvm.
Apr 30 12:39:57.428362 systemd[1]: Detected architecture x86-64.
Apr 30 12:39:57.428377 systemd[1]: Detected first boot.
Apr 30 12:39:57.428391 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 12:39:57.428407 zram_generator::config[1066]: No configuration found.
Apr 30 12:39:57.428422 kernel: Guest personality initialized and is inactive
Apr 30 12:39:57.428437 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Apr 30 12:39:57.428451 kernel: Initialized host personality
Apr 30 12:39:57.428466 kernel: NET: Registered PF_VSOCK protocol family
Apr 30 12:39:57.428484 systemd[1]: Populated /etc with preset unit settings.
Apr 30 12:39:57.428506 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 30 12:39:57.428521 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 12:39:57.428537 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 12:39:57.428557 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 12:39:57.428572 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 12:39:57.428588 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 12:39:57.428603 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 12:39:57.428618 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 12:39:57.428637 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 12:39:57.428652 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 12:39:57.428668 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 12:39:57.428683 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 12:39:57.428699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:39:57.428714 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:39:57.428729 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 12:39:57.428745 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 12:39:57.428763 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 12:39:57.428779 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 12:39:57.428794 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 12:39:57.428810 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:39:57.428825 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 12:39:57.428848 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 12:39:57.428871 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 12:39:57.428899 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 12:39:57.428923 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:39:57.428938 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 12:39:57.428954 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 12:39:57.428969 systemd[1]: Reached target swap.target - Swaps.
Apr 30 12:39:57.428984 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 12:39:57.428999 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 12:39:57.429014 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 30 12:39:57.429030 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:39:57.429045 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:39:57.429063 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:39:57.429078 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 12:39:57.429093 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 12:39:57.429108 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 12:39:57.429123 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 12:39:57.429138 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 12:39:57.429153 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 12:39:57.429168 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 12:39:57.429182 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 12:39:57.429201 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 12:39:57.429217 systemd[1]: Reached target machines.target - Containers.
Apr 30 12:39:57.429232 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 12:39:57.429253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:39:57.429269 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 12:39:57.429284 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 12:39:57.429299 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:39:57.429314 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 12:39:57.429332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:39:57.429354 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 12:39:57.429369 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:39:57.429384 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 12:39:57.429399 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 12:39:57.429415 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 12:39:57.429429 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 12:39:57.429444 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 12:39:57.429460 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:39:57.429478 kernel: fuse: init (API version 7.39)
Apr 30 12:39:57.429493 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 12:39:57.429507 kernel: loop: module loaded
Apr 30 12:39:57.429522 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 12:39:57.429537 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 12:39:57.429552 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 12:39:57.429567 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 30 12:39:57.429606 systemd-journald[1137]: Collecting audit messages is disabled.
Apr 30 12:39:57.429637 systemd-journald[1137]: Journal started
Apr 30 12:39:57.429664 systemd-journald[1137]: Runtime Journal (/run/log/journal/53654af77a5b4e52a662690bfeebf89e) is 6M, max 48.2M, 42.2M free.
Apr 30 12:39:57.193302 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 12:39:57.211612 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 30 12:39:57.212222 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 12:39:57.434154 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 12:39:57.435985 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 12:39:57.436067 systemd[1]: Stopped verity-setup.service.
Apr 30 12:39:57.438928 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 12:39:57.439003 kernel: ACPI: bus type drm_connector registered
Apr 30 12:39:57.445616 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 12:39:57.446566 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 12:39:57.447924 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 12:39:57.449279 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 12:39:57.450510 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 12:39:57.451855 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 12:39:57.453234 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 12:39:57.454752 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 12:39:57.456435 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:39:57.458201 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 12:39:57.458479 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 12:39:57.460180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:39:57.460443 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:39:57.462102 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 12:39:57.462366 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 12:39:57.463926 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:39:57.464191 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:39:57.465937 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 12:39:57.466199 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 12:39:57.467759 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:39:57.468044 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:39:57.469661 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 12:39:57.471315 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 12:39:57.473111 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 12:39:57.474912 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 30 12:39:57.496445 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 12:39:57.511068 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 12:39:57.513942 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 12:39:57.515327 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 12:39:57.515432 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:39:57.517952 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 30 12:39:57.520749 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 12:39:57.523357 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Apr 30 12:39:57.524766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:39:57.527949 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 12:39:57.532126 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 12:39:57.532275 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:39:57.534761 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 12:39:57.536096 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:39:57.538083 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:39:57.545609 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 12:39:57.548150 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 12:39:57.553207 systemd-journald[1137]: Time spent on flushing to /var/log/journal/53654af77a5b4e52a662690bfeebf89e is 15.504ms for 1063 entries. Apr 30 12:39:57.553207 systemd-journald[1137]: System Journal (/var/log/journal/53654af77a5b4e52a662690bfeebf89e) is 8M, max 195.6M, 187.6M free. Apr 30 12:39:57.581205 systemd-journald[1137]: Received client request to flush runtime journal. Apr 30 12:39:57.581244 kernel: loop0: detected capacity change from 0 to 147912 Apr 30 12:39:57.553761 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:39:57.556468 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 12:39:57.559186 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Apr 30 12:39:57.564246 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 12:39:57.567539 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 12:39:57.574948 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 12:39:57.587004 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 30 12:39:57.590369 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 12:39:57.594322 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 12:39:57.598067 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 12:39:57.601123 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:39:57.615110 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Apr 30 12:39:57.615136 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Apr 30 12:39:57.622025 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 12:39:57.625305 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:39:57.627527 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 30 12:39:57.640745 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 12:39:57.644949 kernel: loop1: detected capacity change from 0 to 138176 Apr 30 12:39:57.676248 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 12:39:57.683134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 12:39:57.686923 kernel: loop2: detected capacity change from 0 to 205544 Apr 30 12:39:57.705920 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. 
Apr 30 12:39:57.705944 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Apr 30 12:39:57.713630 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:39:57.731930 kernel: loop3: detected capacity change from 0 to 147912 Apr 30 12:39:57.749932 kernel: loop4: detected capacity change from 0 to 138176 Apr 30 12:39:57.766951 kernel: loop5: detected capacity change from 0 to 205544 Apr 30 12:39:57.779412 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 30 12:39:57.780220 (sd-merge)[1215]: Merged extensions into '/usr'. Apr 30 12:39:57.788088 systemd[1]: Reload requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 12:39:57.788112 systemd[1]: Reloading... Apr 30 12:39:57.877969 zram_generator::config[1249]: No configuration found. Apr 30 12:39:57.974409 ldconfig[1181]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 12:39:58.016406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:39:58.092597 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 12:39:58.093591 systemd[1]: Reloading finished in 304 ms. Apr 30 12:39:58.112588 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 12:39:58.114450 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 12:39:58.135934 systemd[1]: Starting ensure-sysext.service... Apr 30 12:39:58.138484 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:39:58.150390 systemd[1]: Reload requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)... Apr 30 12:39:58.150410 systemd[1]: Reloading... 
Apr 30 12:39:58.164362 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 12:39:58.164674 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 12:39:58.165668 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 12:39:58.165999 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Apr 30 12:39:58.166081 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Apr 30 12:39:58.203973 zram_generator::config[1310]: No configuration found. Apr 30 12:39:58.225410 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 12:39:58.225427 systemd-tmpfiles[1281]: Skipping /boot Apr 30 12:39:58.240908 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 12:39:58.240925 systemd-tmpfiles[1281]: Skipping /boot Apr 30 12:39:58.337060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:39:58.407232 systemd[1]: Reloading finished in 256 ms. Apr 30 12:39:58.424783 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 12:39:58.445698 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:39:58.466358 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:39:58.470050 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 12:39:58.473029 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 12:39:58.477720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 30 12:39:58.482141 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:39:58.490730 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 12:39:58.495621 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:39:58.495853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:39:58.497947 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:39:58.502581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:39:58.511962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:39:58.514725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:39:58.514883 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:39:58.517225 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 12:39:58.519054 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:39:58.521565 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 12:39:58.524697 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:39:58.525123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:39:58.528328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:39:58.528670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 30 12:39:58.531029 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:39:58.531286 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:39:58.533682 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Apr 30 12:39:58.541128 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:39:58.541522 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:39:58.548638 augenrules[1383]: No rules Apr 30 12:39:58.550213 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 12:39:58.552282 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:39:58.552616 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:39:58.558565 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 12:39:58.565582 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 12:39:58.573695 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:39:58.576084 systemd[1]: Finished ensure-sysext.service. Apr 30 12:39:58.577925 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 12:39:58.585315 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:39:58.595189 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:39:58.596508 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:39:58.599012 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:39:58.613122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 30 12:39:58.616201 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:39:58.630099 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:39:58.632265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:39:58.632327 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:39:58.637128 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:39:58.640964 augenrules[1405]: /sbin/augenrules: No change Apr 30 12:39:58.642260 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 12:39:58.643737 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 12:39:58.643777 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:39:58.644415 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 12:39:58.649009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:39:58.649316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:39:58.656770 augenrules[1440]: No rules Apr 30 12:39:58.657996 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:39:58.658310 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:39:58.661403 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 12:39:58.661634 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Apr 30 12:39:58.663366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:39:58.664323 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:39:58.666437 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:39:58.666760 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:39:58.683647 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 12:39:58.687948 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:39:58.688000 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:39:58.695951 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1407) Apr 30 12:39:58.736930 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 12:39:58.757978 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 12:39:58.766697 kernel: ACPI: button: Power Button [PWRF] Apr 30 12:39:58.766183 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 12:39:58.787953 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 30 12:39:58.788534 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 30 12:39:58.788775 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 30 12:39:58.789442 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 30 12:39:58.789593 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 30 12:39:58.800476 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Apr 30 12:39:58.805291 systemd-resolved[1352]: Positive Trust Anchors: Apr 30 12:39:58.805313 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:39:58.805349 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:39:58.810356 systemd-resolved[1352]: Defaulting to hostname 'linux'. Apr 30 12:39:58.826034 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:39:58.827405 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:39:58.832343 systemd-networkd[1430]: lo: Link UP Apr 30 12:39:58.832356 systemd-networkd[1430]: lo: Gained carrier Apr 30 12:39:58.834214 systemd-networkd[1430]: Enumeration completed Apr 30 12:39:58.834355 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:39:58.836056 systemd[1]: Reached target network.target - Network. Apr 30 12:39:58.841328 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:39:58.841334 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 12:39:58.842052 systemd-networkd[1430]: eth0: Link UP Apr 30 12:39:58.842056 systemd-networkd[1430]: eth0: Gained carrier Apr 30 12:39:58.842070 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:39:58.846498 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 30 12:39:58.851091 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 12:39:58.853117 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 12:39:58.854963 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 12:39:58.884277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:39:58.886418 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 30 12:39:58.895018 systemd-networkd[1430]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 12:39:58.896580 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection. Apr 30 12:40:00.141907 systemd-resolved[1352]: Clock change detected. Flushing caches. Apr 30 12:40:00.142036 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 30 12:40:00.142114 systemd-timesyncd[1433]: Initial clock synchronization to Wed 2025-04-30 12:40:00.141852 UTC. Apr 30 12:40:00.142736 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:40:00.143027 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:40:00.158644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 30 12:40:00.166453 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 12:40:00.176310 kernel: kvm_amd: TSC scaling supported Apr 30 12:40:00.176388 kernel: kvm_amd: Nested Virtualization enabled Apr 30 12:40:00.176459 kernel: kvm_amd: Nested Paging enabled Apr 30 12:40:00.176482 kernel: kvm_amd: LBR virtualization supported Apr 30 12:40:00.176504 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Apr 30 12:40:00.177793 kernel: kvm_amd: Virtual GIF supported Apr 30 12:40:00.202482 kernel: EDAC MC: Ver: 3.0.0 Apr 30 12:40:00.239301 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:40:00.246751 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 12:40:00.262698 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 12:40:00.275244 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:40:00.315128 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 12:40:00.316988 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:40:00.318293 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:40:00.319756 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 12:40:00.321225 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 12:40:00.322945 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 12:40:00.324349 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 12:40:00.325987 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Apr 30 12:40:00.327416 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 12:40:00.327475 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:40:00.328553 systemd[1]: Reached target timers.target - Timer Units. Apr 30 12:40:00.330828 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 12:40:00.333994 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 12:40:00.338671 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 30 12:40:00.340321 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 30 12:40:00.341797 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 30 12:40:00.347930 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 12:40:00.349657 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 30 12:40:00.352576 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 12:40:00.354550 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 12:40:00.355894 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:40:00.357005 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:40:00.358148 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:40:00.358186 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:40:00.359459 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 12:40:00.361651 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:40:00.362081 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Apr 30 12:40:00.365604 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 12:40:00.371382 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 12:40:00.372685 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 12:40:00.375728 jq[1488]: false Apr 30 12:40:00.376622 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 12:40:00.379614 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 12:40:00.386616 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 12:40:00.390028 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 12:40:00.395566 dbus-daemon[1487]: [system] SELinux support is enabled Apr 30 12:40:00.395651 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 12:40:00.398189 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 12:40:00.398832 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Apr 30 12:40:00.399124 extend-filesystems[1489]: Found loop3 Apr 30 12:40:00.400182 extend-filesystems[1489]: Found loop4 Apr 30 12:40:00.400182 extend-filesystems[1489]: Found loop5 Apr 30 12:40:00.400182 extend-filesystems[1489]: Found sr0 Apr 30 12:40:00.400182 extend-filesystems[1489]: Found vda Apr 30 12:40:00.400182 extend-filesystems[1489]: Found vda1 Apr 30 12:40:00.406899 extend-filesystems[1489]: Found vda2 Apr 30 12:40:00.406899 extend-filesystems[1489]: Found vda3 Apr 30 12:40:00.406899 extend-filesystems[1489]: Found usr Apr 30 12:40:00.406899 extend-filesystems[1489]: Found vda4 Apr 30 12:40:00.406899 extend-filesystems[1489]: Found vda6 Apr 30 12:40:00.406899 extend-filesystems[1489]: Found vda7 Apr 30 12:40:00.406899 extend-filesystems[1489]: Found vda9 Apr 30 12:40:00.406899 extend-filesystems[1489]: Checking size of /dev/vda9 Apr 30 12:40:00.401027 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 12:40:00.416543 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 12:40:00.418725 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 12:40:00.422926 extend-filesystems[1489]: Resized partition /dev/vda9 Apr 30 12:40:00.423952 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 12:40:00.427458 jq[1504]: true Apr 30 12:40:00.429422 extend-filesystems[1511]: resize2fs 1.47.1 (20-May-2024) Apr 30 12:40:00.436700 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 30 12:40:00.436925 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 12:40:00.437231 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 12:40:00.437615 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 12:40:00.437871 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 30 12:40:00.440765 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 12:40:00.442831 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 12:40:00.444258 update_engine[1497]: I20250430 12:40:00.444176 1497 main.cc:92] Flatcar Update Engine starting Apr 30 12:40:00.452923 update_engine[1497]: I20250430 12:40:00.449689 1497 update_check_scheduler.cc:74] Next update check in 6m44s Apr 30 12:40:00.458797 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 12:40:00.465620 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1394) Apr 30 12:40:00.477565 sshd_keygen[1507]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 12:40:00.488110 tar[1512]: linux-amd64/helm Apr 30 12:40:00.486662 systemd[1]: Started update-engine.service - Update Engine. Apr 30 12:40:00.488249 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 12:40:00.488278 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 12:40:00.490452 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 30 12:40:00.490524 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 12:40:00.490557 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 12:40:00.492336 jq[1513]: true Apr 30 12:40:00.500675 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 30 12:40:00.532409 systemd-logind[1496]: Watching system buttons on /dev/input/event1 (Power Button) Apr 30 12:40:00.532461 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 12:40:00.533776 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 12:40:00.537420 extend-filesystems[1511]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 12:40:00.537420 extend-filesystems[1511]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 12:40:00.537420 extend-filesystems[1511]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 30 12:40:00.547422 extend-filesystems[1489]: Resized filesystem in /dev/vda9 Apr 30 12:40:00.540720 systemd-logind[1496]: New seat seat0. Apr 30 12:40:00.544170 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 12:40:00.545244 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 12:40:00.549829 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 12:40:00.550048 locksmithd[1529]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 12:40:00.793262 bash[1549]: Updated "/home/core/.ssh/authorized_keys" Apr 30 12:40:00.838347 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 12:40:00.847385 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 12:40:00.849718 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 12:40:00.850174 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 12:40:00.856506 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 12:40:00.858745 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 12:40:00.902652 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Apr 30 12:40:00.906783 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 12:40:00.909764 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 12:40:00.911597 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 12:40:01.477112 containerd[1514]: time="2025-04-30T12:40:01.476915641Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Apr 30 12:40:01.542957 containerd[1514]: time="2025-04-30T12:40:01.542867938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:40:01.546123 containerd[1514]: time="2025-04-30T12:40:01.545841767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:40:01.546123 containerd[1514]: time="2025-04-30T12:40:01.545876542Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 12:40:01.546123 containerd[1514]: time="2025-04-30T12:40:01.545898072Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 12:40:01.546272 containerd[1514]: time="2025-04-30T12:40:01.546185391Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 12:40:01.546272 containerd[1514]: time="2025-04-30T12:40:01.546206911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 12:40:01.546361 containerd[1514]: time="2025-04-30T12:40:01.546310446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:40:01.546361 containerd[1514]: time="2025-04-30T12:40:01.546327207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:40:01.546708 containerd[1514]: time="2025-04-30T12:40:01.546668247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:40:01.546708 containerd[1514]: time="2025-04-30T12:40:01.546694817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 12:40:01.546789 containerd[1514]: time="2025-04-30T12:40:01.546711778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:40:01.546789 containerd[1514]: time="2025-04-30T12:40:01.546724392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 12:40:01.546888 containerd[1514]: time="2025-04-30T12:40:01.546855999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:40:01.547242 containerd[1514]: time="2025-04-30T12:40:01.547207027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:40:01.547477 containerd[1514]: time="2025-04-30T12:40:01.547449632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:40:01.547477 containerd[1514]: time="2025-04-30T12:40:01.547472976Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 12:40:01.547659 containerd[1514]: time="2025-04-30T12:40:01.547626805Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 12:40:01.547736 containerd[1514]: time="2025-04-30T12:40:01.547708568Z" level=info msg="metadata content store policy set" policy=shared Apr 30 12:40:01.554055 containerd[1514]: time="2025-04-30T12:40:01.553953802Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 12:40:01.554055 containerd[1514]: time="2025-04-30T12:40:01.554007653Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 12:40:01.554055 containerd[1514]: time="2025-04-30T12:40:01.554025186Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 12:40:01.554055 containerd[1514]: time="2025-04-30T12:40:01.554043601Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 12:40:01.554055 containerd[1514]: time="2025-04-30T12:40:01.554059521Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 12:40:01.554329 containerd[1514]: time="2025-04-30T12:40:01.554224109Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 12:40:01.554585 containerd[1514]: time="2025-04-30T12:40:01.554549269Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Apr 30 12:40:01.554733 containerd[1514]: time="2025-04-30T12:40:01.554704671Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 12:40:01.554733 containerd[1514]: time="2025-04-30T12:40:01.554727824Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 12:40:01.554814 containerd[1514]: time="2025-04-30T12:40:01.554745367Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 12:40:01.554814 containerd[1514]: time="2025-04-30T12:40:01.554761387Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 12:40:01.554814 containerd[1514]: time="2025-04-30T12:40:01.554779591Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 12:40:01.554814 containerd[1514]: time="2025-04-30T12:40:01.554797495Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 12:40:01.554921 containerd[1514]: time="2025-04-30T12:40:01.554817162Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 12:40:01.554921 containerd[1514]: time="2025-04-30T12:40:01.554837971Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 12:40:01.554921 containerd[1514]: time="2025-04-30T12:40:01.554854181Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 12:40:01.554921 containerd[1514]: time="2025-04-30T12:40:01.554872085Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Apr 30 12:40:01.554921 containerd[1514]: time="2025-04-30T12:40:01.554888956Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 12:40:01.554921 containerd[1514]: time="2025-04-30T12:40:01.554912661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555065 containerd[1514]: time="2025-04-30T12:40:01.554929753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555065 containerd[1514]: time="2025-04-30T12:40:01.554952937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555065 containerd[1514]: time="2025-04-30T12:40:01.554969277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555065 containerd[1514]: time="2025-04-30T12:40:01.554988102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555065 containerd[1514]: time="2025-04-30T12:40:01.555005074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555065 containerd[1514]: time="2025-04-30T12:40:01.555020303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555065 containerd[1514]: time="2025-04-30T12:40:01.555038046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555065 containerd[1514]: time="2025-04-30T12:40:01.555058585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555277 containerd[1514]: time="2025-04-30T12:40:01.555078021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Apr 30 12:40:01.555277 containerd[1514]: time="2025-04-30T12:40:01.555092248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555277 containerd[1514]: time="2025-04-30T12:40:01.555106464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555277 containerd[1514]: time="2025-04-30T12:40:01.555120561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555277 containerd[1514]: time="2025-04-30T12:40:01.555159284Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 12:40:01.555277 containerd[1514]: time="2025-04-30T12:40:01.555184671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555277 containerd[1514]: time="2025-04-30T12:40:01.555199319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555277 containerd[1514]: time="2025-04-30T12:40:01.555210981Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 12:40:01.555277 containerd[1514]: time="2025-04-30T12:40:01.555272296Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 12:40:01.555550 containerd[1514]: time="2025-04-30T12:40:01.555292193Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 12:40:01.555550 containerd[1514]: time="2025-04-30T12:40:01.555315637Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Apr 30 12:40:01.555550 containerd[1514]: time="2025-04-30T12:40:01.555335173Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 12:40:01.555550 containerd[1514]: time="2025-04-30T12:40:01.555348248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555550 containerd[1514]: time="2025-04-30T12:40:01.555365330Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 12:40:01.555550 containerd[1514]: time="2025-04-30T12:40:01.555382663Z" level=info msg="NRI interface is disabled by configuration." Apr 30 12:40:01.555550 containerd[1514]: time="2025-04-30T12:40:01.555394785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 12:40:01.555823 containerd[1514]: time="2025-04-30T12:40:01.555765831Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 12:40:01.555823 containerd[1514]: time="2025-04-30T12:40:01.555813330Z" level=info msg="Connect containerd service" Apr 30 12:40:01.556098 containerd[1514]: time="2025-04-30T12:40:01.555838167Z" level=info msg="using legacy CRI server" Apr 30 12:40:01.556098 containerd[1514]: time="2025-04-30T12:40:01.555845100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 12:40:01.556098 containerd[1514]: 
time="2025-04-30T12:40:01.555985824Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 12:40:01.556831 containerd[1514]: time="2025-04-30T12:40:01.556796054Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 12:40:01.557054 containerd[1514]: time="2025-04-30T12:40:01.556987693Z" level=info msg="Start subscribing containerd event" Apr 30 12:40:01.558131 containerd[1514]: time="2025-04-30T12:40:01.557591816Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 12:40:01.558131 containerd[1514]: time="2025-04-30T12:40:01.557663491Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 12:40:01.558356 containerd[1514]: time="2025-04-30T12:40:01.558255752Z" level=info msg="Start recovering state" Apr 30 12:40:01.558523 containerd[1514]: time="2025-04-30T12:40:01.558495752Z" level=info msg="Start event monitor" Apr 30 12:40:01.558563 containerd[1514]: time="2025-04-30T12:40:01.558533232Z" level=info msg="Start snapshots syncer" Apr 30 12:40:01.558585 containerd[1514]: time="2025-04-30T12:40:01.558560373Z" level=info msg="Start cni network conf syncer for default" Apr 30 12:40:01.558585 containerd[1514]: time="2025-04-30T12:40:01.558574580Z" level=info msg="Start streaming server" Apr 30 12:40:01.558788 containerd[1514]: time="2025-04-30T12:40:01.558763905Z" level=info msg="containerd successfully booted in 0.083693s" Apr 30 12:40:01.559111 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 12:40:01.671943 tar[1512]: linux-amd64/LICENSE Apr 30 12:40:01.672788 tar[1512]: linux-amd64/README.md Apr 30 12:40:01.692641 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 30 12:40:02.055666 systemd-networkd[1430]: eth0: Gained IPv6LL Apr 30 12:40:02.060270 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 12:40:02.062363 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 12:40:02.073791 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 30 12:40:02.077164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:40:02.079961 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 12:40:02.103632 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 30 12:40:02.103972 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 30 12:40:02.107798 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 12:40:02.111191 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 12:40:03.429379 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 12:40:03.455997 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:53068.service - OpenSSH per-connection server daemon (10.0.0.1:53068). Apr 30 12:40:03.559597 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 53068 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE Apr 30 12:40:03.562012 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:40:03.576049 systemd-logind[1496]: New session 1 of user core. Apr 30 12:40:03.577731 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 12:40:03.644912 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 12:40:03.668114 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 12:40:03.671081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 12:40:03.678989 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:40:03.699988 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 12:40:03.703574 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 12:40:03.714627 (systemd)[1606]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 12:40:03.718170 systemd-logind[1496]: New session c1 of user core. Apr 30 12:40:03.876774 systemd[1606]: Queued start job for default target default.target. Apr 30 12:40:03.887827 systemd[1606]: Created slice app.slice - User Application Slice. Apr 30 12:40:03.887868 systemd[1606]: Reached target paths.target - Paths. Apr 30 12:40:03.887939 systemd[1606]: Reached target timers.target - Timers. Apr 30 12:40:03.890206 systemd[1606]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 12:40:03.906195 systemd[1606]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 12:40:03.906371 systemd[1606]: Reached target sockets.target - Sockets. Apr 30 12:40:03.906420 systemd[1606]: Reached target basic.target - Basic System. Apr 30 12:40:03.906492 systemd[1606]: Reached target default.target - Main User Target. Apr 30 12:40:03.906533 systemd[1606]: Startup finished in 178ms. Apr 30 12:40:04.009590 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 12:40:04.019685 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 12:40:04.021171 systemd[1]: Startup finished in 1.382s (kernel) + 7.819s (initrd) + 6.269s (userspace) = 15.471s. Apr 30 12:40:04.089280 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:53080.service - OpenSSH per-connection server daemon (10.0.0.1:53080). 
Apr 30 12:40:04.150669 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 53080 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE Apr 30 12:40:04.152688 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:40:04.157914 systemd-logind[1496]: New session 2 of user core. Apr 30 12:40:04.194683 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 12:40:04.255081 sshd[1628]: Connection closed by 10.0.0.1 port 53080 Apr 30 12:40:04.255632 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Apr 30 12:40:04.269810 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:53080.service: Deactivated successfully. Apr 30 12:40:04.271998 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 12:40:04.274223 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit. Apr 30 12:40:04.279722 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:53088.service - OpenSSH per-connection server daemon (10.0.0.1:53088). Apr 30 12:40:04.280880 systemd-logind[1496]: Removed session 2. Apr 30 12:40:04.324576 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 53088 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE Apr 30 12:40:04.327844 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:40:04.334901 systemd-logind[1496]: New session 3 of user core. Apr 30 12:40:04.349777 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 12:40:04.412913 sshd[1637]: Connection closed by 10.0.0.1 port 53088 Apr 30 12:40:04.413476 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Apr 30 12:40:04.424955 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:53088.service: Deactivated successfully. Apr 30 12:40:04.427353 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 12:40:04.429652 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit. 
Apr 30 12:40:04.439128 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:53092.service - OpenSSH per-connection server daemon (10.0.0.1:53092). Apr 30 12:40:04.441032 systemd-logind[1496]: Removed session 3. Apr 30 12:40:04.481400 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 53092 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE Apr 30 12:40:04.484651 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:40:04.491886 systemd-logind[1496]: New session 4 of user core. Apr 30 12:40:04.511793 kubelet[1603]: E0430 12:40:04.511525 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:40:04.511832 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 12:40:04.516982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:40:04.517258 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:40:04.517777 systemd[1]: kubelet.service: Consumed 2.269s CPU time, 238.3M memory peak. Apr 30 12:40:04.571607 sshd[1646]: Connection closed by 10.0.0.1 port 53092 Apr 30 12:40:04.572128 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Apr 30 12:40:04.586739 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:53092.service: Deactivated successfully. Apr 30 12:40:04.589309 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 12:40:04.591174 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit. Apr 30 12:40:04.598827 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:53102.service - OpenSSH per-connection server daemon (10.0.0.1:53102). Apr 30 12:40:04.599981 systemd-logind[1496]: Removed session 4. 
Apr 30 12:40:04.641478 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 53102 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE Apr 30 12:40:04.643348 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:40:04.648652 systemd-logind[1496]: New session 5 of user core. Apr 30 12:40:04.657614 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 12:40:04.718663 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 12:40:04.719010 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:40:04.735545 sudo[1655]: pam_unix(sudo:session): session closed for user root Apr 30 12:40:04.737379 sshd[1654]: Connection closed by 10.0.0.1 port 53102 Apr 30 12:40:04.737936 sshd-session[1651]: pam_unix(sshd:session): session closed for user core Apr 30 12:40:04.752180 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:53102.service: Deactivated successfully. Apr 30 12:40:04.754267 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 12:40:04.755284 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit. Apr 30 12:40:04.766938 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:53114.service - OpenSSH per-connection server daemon (10.0.0.1:53114). Apr 30 12:40:04.767843 systemd-logind[1496]: Removed session 5. Apr 30 12:40:04.806020 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 53114 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE Apr 30 12:40:04.808270 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:40:04.813468 systemd-logind[1496]: New session 6 of user core. Apr 30 12:40:04.823609 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 30 12:40:04.882309 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 12:40:04.882725 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:40:04.887475 sudo[1665]: pam_unix(sudo:session): session closed for user root Apr 30 12:40:04.894889 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 12:40:04.895263 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:40:04.914762 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:40:04.950751 augenrules[1687]: No rules Apr 30 12:40:04.952921 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:40:04.953269 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:40:04.954663 sudo[1664]: pam_unix(sudo:session): session closed for user root Apr 30 12:40:04.956276 sshd[1663]: Connection closed by 10.0.0.1 port 53114 Apr 30 12:40:04.956759 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Apr 30 12:40:04.973117 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:53114.service: Deactivated successfully. Apr 30 12:40:04.975728 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 12:40:04.977421 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit. Apr 30 12:40:04.984736 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:53126.service - OpenSSH per-connection server daemon (10.0.0.1:53126). Apr 30 12:40:04.985828 systemd-logind[1496]: Removed session 6. Apr 30 12:40:05.020894 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 53126 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE Apr 30 12:40:05.022648 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:40:05.027291 systemd-logind[1496]: New session 7 of user core. 
Apr 30 12:40:05.042623 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 12:40:05.098238 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 12:40:05.098614 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:40:05.714700 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 12:40:05.714857 (dockerd)[1718]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 12:40:06.004176 dockerd[1718]: time="2025-04-30T12:40:06.004025014Z" level=info msg="Starting up" Apr 30 12:40:07.171365 dockerd[1718]: time="2025-04-30T12:40:07.171292244Z" level=info msg="Loading containers: start." Apr 30 12:40:07.393459 kernel: Initializing XFRM netlink socket Apr 30 12:40:07.515666 systemd-networkd[1430]: docker0: Link UP Apr 30 12:40:07.627005 dockerd[1718]: time="2025-04-30T12:40:07.626959846Z" level=info msg="Loading containers: done." Apr 30 12:40:07.645044 dockerd[1718]: time="2025-04-30T12:40:07.644972683Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 12:40:07.645238 dockerd[1718]: time="2025-04-30T12:40:07.645116192Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Apr 30 12:40:07.645336 dockerd[1718]: time="2025-04-30T12:40:07.645306279Z" level=info msg="Daemon has completed initialization" Apr 30 12:40:07.690178 dockerd[1718]: time="2025-04-30T12:40:07.690104718Z" level=info msg="API listen on /run/docker.sock" Apr 30 12:40:07.690312 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 30 12:40:09.138124 containerd[1514]: time="2025-04-30T12:40:09.138056211Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Apr 30 12:40:09.953338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4213138030.mount: Deactivated successfully. Apr 30 12:40:11.752735 containerd[1514]: time="2025-04-30T12:40:11.752653488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:11.753388 containerd[1514]: time="2025-04-30T12:40:11.753310721Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" Apr 30 12:40:11.754798 containerd[1514]: time="2025-04-30T12:40:11.754742526Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:11.758589 containerd[1514]: time="2025-04-30T12:40:11.758522316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:11.760359 containerd[1514]: time="2025-04-30T12:40:11.760300071Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.622180521s" Apr 30 12:40:11.760359 containerd[1514]: time="2025-04-30T12:40:11.760344234Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Apr 30 12:40:11.765100 containerd[1514]: 
time="2025-04-30T12:40:11.762863629Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Apr 30 12:40:14.008838 containerd[1514]: time="2025-04-30T12:40:14.008198936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:14.009922 containerd[1514]: time="2025-04-30T12:40:14.009758422Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" Apr 30 12:40:14.012233 containerd[1514]: time="2025-04-30T12:40:14.012126113Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:14.016692 containerd[1514]: time="2025-04-30T12:40:14.016571651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:14.018264 containerd[1514]: time="2025-04-30T12:40:14.018044915Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.255131081s" Apr 30 12:40:14.018264 containerd[1514]: time="2025-04-30T12:40:14.018089889Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Apr 30 12:40:14.018919 containerd[1514]: time="2025-04-30T12:40:14.018854563Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Apr 30 
12:40:14.767672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 12:40:14.777636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:40:15.076822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:40:15.082005 (kubelet)[1982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:40:15.453975 kubelet[1982]: E0430 12:40:15.453681 1982 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:40:15.460920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:40:15.461144 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:40:15.461607 systemd[1]: kubelet.service: Consumed 368ms CPU time, 100M memory peak. 
Apr 30 12:40:17.058588 containerd[1514]: time="2025-04-30T12:40:17.058532270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:17.059809 containerd[1514]: time="2025-04-30T12:40:17.059765383Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" Apr 30 12:40:17.061905 containerd[1514]: time="2025-04-30T12:40:17.061837289Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:17.065368 containerd[1514]: time="2025-04-30T12:40:17.065326133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:17.066454 containerd[1514]: time="2025-04-30T12:40:17.066414384Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 3.047500621s" Apr 30 12:40:17.066490 containerd[1514]: time="2025-04-30T12:40:17.066459369Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Apr 30 12:40:17.066930 containerd[1514]: time="2025-04-30T12:40:17.066863838Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Apr 30 12:40:18.351623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3448520574.mount: Deactivated successfully. 
Apr 30 12:40:19.252152 containerd[1514]: time="2025-04-30T12:40:19.252060131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:19.252967 containerd[1514]: time="2025-04-30T12:40:19.252897382Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" Apr 30 12:40:19.254241 containerd[1514]: time="2025-04-30T12:40:19.254206848Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:19.257000 containerd[1514]: time="2025-04-30T12:40:19.256853833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:19.257745 containerd[1514]: time="2025-04-30T12:40:19.257696533Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.190800726s" Apr 30 12:40:19.257803 containerd[1514]: time="2025-04-30T12:40:19.257746507Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Apr 30 12:40:19.258470 containerd[1514]: time="2025-04-30T12:40:19.258415552Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 12:40:20.304878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067364348.mount: Deactivated successfully. 
Apr 30 12:40:23.118979 containerd[1514]: time="2025-04-30T12:40:23.118878888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:23.127918 containerd[1514]: time="2025-04-30T12:40:23.127843252Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Apr 30 12:40:23.135346 containerd[1514]: time="2025-04-30T12:40:23.135306010Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:23.157549 containerd[1514]: time="2025-04-30T12:40:23.157495211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:23.158552 containerd[1514]: time="2025-04-30T12:40:23.158522067Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.900052654s" Apr 30 12:40:23.158552 containerd[1514]: time="2025-04-30T12:40:23.158550911Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 12:40:23.159324 containerd[1514]: time="2025-04-30T12:40:23.159272745Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 12:40:24.001597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916185046.mount: Deactivated successfully. 
Apr 30 12:40:24.008363 containerd[1514]: time="2025-04-30T12:40:24.008303795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:24.009159 containerd[1514]: time="2025-04-30T12:40:24.009098936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Apr 30 12:40:24.010339 containerd[1514]: time="2025-04-30T12:40:24.010298166Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:24.014234 containerd[1514]: time="2025-04-30T12:40:24.014192821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:24.015352 containerd[1514]: time="2025-04-30T12:40:24.015291101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 855.969404ms" Apr 30 12:40:24.015352 containerd[1514]: time="2025-04-30T12:40:24.015345202Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 30 12:40:24.016111 containerd[1514]: time="2025-04-30T12:40:24.016060634Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Apr 30 12:40:24.549383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1845860277.mount: Deactivated successfully. Apr 30 12:40:25.520094 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 30 12:40:25.528876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:40:25.755195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:40:25.768835 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:40:25.826414 kubelet[2111]: E0430 12:40:25.826125 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:40:25.833175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:40:25.833646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:40:25.834515 systemd[1]: kubelet.service: Consumed 234ms CPU time, 96M memory peak. 
Apr 30 12:40:27.436753 containerd[1514]: time="2025-04-30T12:40:27.436666333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:27.453085 containerd[1514]: time="2025-04-30T12:40:27.452977027Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Apr 30 12:40:27.471448 containerd[1514]: time="2025-04-30T12:40:27.471367553Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:27.475380 containerd[1514]: time="2025-04-30T12:40:27.475297143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:40:27.477000 containerd[1514]: time="2025-04-30T12:40:27.476931038Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.460818637s" Apr 30 12:40:27.477059 containerd[1514]: time="2025-04-30T12:40:27.477001821Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Apr 30 12:40:30.330081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:40:30.330327 systemd[1]: kubelet.service: Consumed 234ms CPU time, 96M memory peak. Apr 30 12:40:30.341737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:40:30.373528 systemd[1]: Reload requested from client PID 2152 ('systemctl') (unit session-7.scope)... 
Apr 30 12:40:30.373549 systemd[1]: Reloading... Apr 30 12:40:30.492749 zram_generator::config[2196]: No configuration found. Apr 30 12:40:30.778640 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:40:30.901094 systemd[1]: Reloading finished in 526 ms. Apr 30 12:40:30.950195 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 12:40:30.950298 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 12:40:30.950623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:40:30.950669 systemd[1]: kubelet.service: Consumed 152ms CPU time, 83.5M memory peak. Apr 30 12:40:30.954353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:40:31.108814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:40:31.113609 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:40:31.198559 kubelet[2245]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:40:31.198559 kubelet[2245]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 12:40:31.198559 kubelet[2245]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 12:40:31.199015 kubelet[2245]: I0430 12:40:31.198634 2245 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:40:31.637469 kubelet[2245]: I0430 12:40:31.637377 2245 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 12:40:31.637469 kubelet[2245]: I0430 12:40:31.637448 2245 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:40:31.637860 kubelet[2245]: I0430 12:40:31.637832 2245 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 12:40:31.727974 kubelet[2245]: I0430 12:40:31.727898 2245 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:40:31.731181 kubelet[2245]: E0430 12:40:31.731098 2245 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:31.749832 kubelet[2245]: E0430 12:40:31.749774 2245 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 12:40:31.749832 kubelet[2245]: I0430 12:40:31.749812 2245 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 12:40:31.761390 kubelet[2245]: I0430 12:40:31.761104 2245 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 12:40:31.763196 kubelet[2245]: I0430 12:40:31.762673 2245 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 12:40:31.763196 kubelet[2245]: I0430 12:40:31.762895 2245 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:40:31.763549 kubelet[2245]: I0430 12:40:31.762932 2245 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Apr 30 12:40:31.763549 kubelet[2245]: I0430 12:40:31.763541 2245 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 12:40:31.763808 kubelet[2245]: I0430 12:40:31.763557 2245 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 12:40:31.763808 kubelet[2245]: I0430 12:40:31.763758 2245 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:40:31.767441 kubelet[2245]: I0430 12:40:31.767395 2245 kubelet.go:408] "Attempting to sync node with API server" Apr 30 12:40:31.767441 kubelet[2245]: I0430 12:40:31.767422 2245 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:40:31.767563 kubelet[2245]: I0430 12:40:31.767479 2245 kubelet.go:314] "Adding apiserver pod source" Apr 30 12:40:31.767563 kubelet[2245]: I0430 12:40:31.767498 2245 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:40:31.768216 kubelet[2245]: W0430 12:40:31.768133 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Apr 30 12:40:31.768216 kubelet[2245]: E0430 12:40:31.768191 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:31.770178 kubelet[2245]: W0430 12:40:31.770145 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Apr 30 12:40:31.770256 
kubelet[2245]: E0430 12:40:31.770186 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:31.773962 kubelet[2245]: I0430 12:40:31.773939 2245 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:40:31.777756 kubelet[2245]: I0430 12:40:31.777736 2245 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:40:31.778583 kubelet[2245]: W0430 12:40:31.778545 2245 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 12:40:31.779587 kubelet[2245]: I0430 12:40:31.779314 2245 server.go:1269] "Started kubelet" Apr 30 12:40:31.779587 kubelet[2245]: I0430 12:40:31.779394 2245 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:40:31.782777 kubelet[2245]: I0430 12:40:31.780555 2245 server.go:460] "Adding debug handlers to kubelet server" Apr 30 12:40:31.782777 kubelet[2245]: I0430 12:40:31.780702 2245 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:40:31.789653 kubelet[2245]: I0430 12:40:31.789548 2245 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:40:31.789653 kubelet[2245]: I0430 12:40:31.789608 2245 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 12:40:31.789924 kubelet[2245]: I0430 12:40:31.789903 2245 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:40:31.791406 kubelet[2245]: E0430 
12:40:31.791142 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:31.791406 kubelet[2245]: I0430 12:40:31.791220 2245 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 12:40:31.791594 kubelet[2245]: I0430 12:40:31.791445 2245 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 12:40:31.791594 kubelet[2245]: I0430 12:40:31.791536 2245 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:40:31.792178 kubelet[2245]: W0430 12:40:31.791941 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Apr 30 12:40:31.792178 kubelet[2245]: E0430 12:40:31.792001 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:31.792463 kubelet[2245]: I0430 12:40:31.792395 2245 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:40:31.793007 kubelet[2245]: E0430 12:40:31.792790 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Apr 30 12:40:31.793530 kubelet[2245]: I0430 12:40:31.793479 2245 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:40:31.793530 kubelet[2245]: I0430 12:40:31.793501 2245 factory.go:221] 
Registration of the systemd container factory successfully Apr 30 12:40:31.793692 kubelet[2245]: E0430 12:40:31.793559 2245 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:40:31.807719 kubelet[2245]: I0430 12:40:31.805625 2245 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:40:31.808088 kubelet[2245]: E0430 12:40:31.805913 2245 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183b190ec60ea115 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 12:40:31.779283221 +0000 UTC m=+0.661173103,LastTimestamp:2025-04-30 12:40:31.779283221 +0000 UTC m=+0.661173103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 30 12:40:31.852469 kubelet[2245]: I0430 12:40:31.852388 2245 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 12:40:31.852469 kubelet[2245]: I0430 12:40:31.852478 2245 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 12:40:31.852682 kubelet[2245]: I0430 12:40:31.852512 2245 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 12:40:31.852682 kubelet[2245]: E0430 12:40:31.852609 2245 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:40:31.858480 kubelet[2245]: W0430 12:40:31.858378 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Apr 30 12:40:31.858480 kubelet[2245]: E0430 12:40:31.858476 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:31.859733 kubelet[2245]: I0430 12:40:31.859710 2245 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 12:40:31.859825 kubelet[2245]: I0430 12:40:31.859806 2245 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 12:40:31.859882 kubelet[2245]: I0430 12:40:31.859831 2245 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:40:31.892108 kubelet[2245]: E0430 12:40:31.891972 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:31.953392 kubelet[2245]: E0430 12:40:31.953333 2245 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 12:40:31.992882 kubelet[2245]: E0430 12:40:31.992797 2245 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:31.993498 kubelet[2245]: E0430 12:40:31.993407 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Apr 30 12:40:32.093727 kubelet[2245]: E0430 12:40:32.093657 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:32.154172 kubelet[2245]: E0430 12:40:32.153968 2245 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 12:40:32.194480 kubelet[2245]: E0430 12:40:32.194397 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:32.295356 kubelet[2245]: E0430 12:40:32.295311 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:32.394377 kubelet[2245]: E0430 12:40:32.394313 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Apr 30 12:40:32.396453 kubelet[2245]: E0430 12:40:32.396406 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:32.497125 kubelet[2245]: E0430 12:40:32.496958 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:32.554226 kubelet[2245]: E0430 12:40:32.554147 2245 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 12:40:32.597756 kubelet[2245]: E0430 12:40:32.597678 
2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:32.698222 kubelet[2245]: E0430 12:40:32.698151 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:32.798815 kubelet[2245]: E0430 12:40:32.798773 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:32.899331 kubelet[2245]: E0430 12:40:32.899276 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:32.999813 kubelet[2245]: E0430 12:40:32.999740 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:33.045457 kubelet[2245]: I0430 12:40:33.045373 2245 policy_none.go:49] "None policy: Start" Apr 30 12:40:33.046251 kubelet[2245]: I0430 12:40:33.046222 2245 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 12:40:33.046251 kubelet[2245]: I0430 12:40:33.046249 2245 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:40:33.073967 kubelet[2245]: W0430 12:40:33.073864 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Apr 30 12:40:33.073967 kubelet[2245]: E0430 12:40:33.073925 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:33.100534 kubelet[2245]: E0430 12:40:33.100496 2245 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"localhost\" not found" Apr 30 12:40:33.146108 kubelet[2245]: W0430 12:40:33.146057 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Apr 30 12:40:33.146156 kubelet[2245]: E0430 12:40:33.146111 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:33.194964 kubelet[2245]: E0430 12:40:33.194910 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s" Apr 30 12:40:33.201072 kubelet[2245]: E0430 12:40:33.201021 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:33.239966 kubelet[2245]: W0430 12:40:33.239894 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Apr 30 12:40:33.239966 kubelet[2245]: E0430 12:40:33.239952 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:33.301686 kubelet[2245]: E0430 
12:40:33.301638 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:33.309697 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 12:40:33.326086 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 12:40:33.328189 kubelet[2245]: W0430 12:40:33.328101 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Apr 30 12:40:33.328189 kubelet[2245]: E0430 12:40:33.328190 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:33.329912 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 12:40:33.339761 kubelet[2245]: I0430 12:40:33.339701 2245 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:40:33.340045 kubelet[2245]: I0430 12:40:33.340014 2245 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 12:40:33.340080 kubelet[2245]: I0430 12:40:33.340032 2245 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:40:33.340778 kubelet[2245]: I0430 12:40:33.340683 2245 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:40:33.341849 kubelet[2245]: E0430 12:40:33.341821 2245 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 30 12:40:33.364269 systemd[1]: Created slice kubepods-burstable-pod503026338e0c3ed6f3efea1fa2e23430.slice - libcontainer container kubepods-burstable-pod503026338e0c3ed6f3efea1fa2e23430.slice. Apr 30 12:40:33.387216 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. 
Apr 30 12:40:33.402359 kubelet[2245]: I0430 12:40:33.402287 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:33.402359 kubelet[2245]: I0430 12:40:33.402333 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:33.402359 kubelet[2245]: I0430 12:40:33.402354 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/503026338e0c3ed6f3efea1fa2e23430-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"503026338e0c3ed6f3efea1fa2e23430\") " pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:33.402359 kubelet[2245]: I0430 12:40:33.402372 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/503026338e0c3ed6f3efea1fa2e23430-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"503026338e0c3ed6f3efea1fa2e23430\") " pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:33.402675 kubelet[2245]: I0430 12:40:33.402392 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 
12:40:33.402675 kubelet[2245]: I0430 12:40:33.402410 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:33.402675 kubelet[2245]: I0430 12:40:33.402448 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:33.402675 kubelet[2245]: I0430 12:40:33.402466 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:33.402675 kubelet[2245]: I0430 12:40:33.402481 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/503026338e0c3ed6f3efea1fa2e23430-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"503026338e0c3ed6f3efea1fa2e23430\") " pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:33.403922 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
Apr 30 12:40:33.442693 kubelet[2245]: I0430 12:40:33.442657 2245 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 12:40:33.443261 kubelet[2245]: E0430 12:40:33.443201 2245 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 30 12:40:33.645698 kubelet[2245]: I0430 12:40:33.645570 2245 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 12:40:33.646091 kubelet[2245]: E0430 12:40:33.646033 2245 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 30 12:40:33.684543 kubelet[2245]: E0430 12:40:33.684467 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:33.685264 containerd[1514]: time="2025-04-30T12:40:33.685213117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:503026338e0c3ed6f3efea1fa2e23430,Namespace:kube-system,Attempt:0,}" Apr 30 12:40:33.700777 kubelet[2245]: E0430 12:40:33.700734 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:33.701344 containerd[1514]: time="2025-04-30T12:40:33.701304558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" Apr 30 12:40:33.706790 kubelet[2245]: E0430 12:40:33.706766 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 
12:40:33.707196 containerd[1514]: time="2025-04-30T12:40:33.707159220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" Apr 30 12:40:33.759714 kubelet[2245]: E0430 12:40:33.759652 2245 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:34.048052 kubelet[2245]: I0430 12:40:34.048012 2245 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 12:40:34.051947 kubelet[2245]: E0430 12:40:34.051911 2245 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 30 12:40:34.282498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463257644.mount: Deactivated successfully. 
Apr 30 12:40:34.291544 containerd[1514]: time="2025-04-30T12:40:34.291480841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:40:34.294497 containerd[1514]: time="2025-04-30T12:40:34.294446897Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 12:40:34.295521 containerd[1514]: time="2025-04-30T12:40:34.295471284Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:40:34.297706 containerd[1514]: time="2025-04-30T12:40:34.297659657Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:40:34.298715 containerd[1514]: time="2025-04-30T12:40:34.298629499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:40:34.299682 containerd[1514]: time="2025-04-30T12:40:34.299637786Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:40:34.301095 containerd[1514]: time="2025-04-30T12:40:34.301058223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:40:34.301760 containerd[1514]: time="2025-04-30T12:40:34.301705757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:40:34.302112 
containerd[1514]: time="2025-04-30T12:40:34.302071680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.740796ms" Apr 30 12:40:34.304492 containerd[1514]: time="2025-04-30T12:40:34.304456620Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 597.22435ms" Apr 30 12:40:34.311740 containerd[1514]: time="2025-04-30T12:40:34.311693127Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 610.283027ms" Apr 30 12:40:34.598491 containerd[1514]: time="2025-04-30T12:40:34.598041120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:40:34.598491 containerd[1514]: time="2025-04-30T12:40:34.598157505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:40:34.598491 containerd[1514]: time="2025-04-30T12:40:34.598185238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:34.598491 containerd[1514]: time="2025-04-30T12:40:34.598094383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:40:34.598491 containerd[1514]: time="2025-04-30T12:40:34.598234241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:40:34.598491 containerd[1514]: time="2025-04-30T12:40:34.598265212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:34.599842 containerd[1514]: time="2025-04-30T12:40:34.598398888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:34.599842 containerd[1514]: time="2025-04-30T12:40:34.598300198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:34.635105 containerd[1514]: time="2025-04-30T12:40:34.634961996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:40:34.635105 containerd[1514]: time="2025-04-30T12:40:34.635032071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:40:34.635105 containerd[1514]: time="2025-04-30T12:40:34.635047901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:34.635374 containerd[1514]: time="2025-04-30T12:40:34.635147773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:34.648752 systemd[1]: Started cri-containerd-f2cd86739a795a92d8e1212bec49347e0b122e22208684d1bf8c0cce95b5d123.scope - libcontainer container f2cd86739a795a92d8e1212bec49347e0b122e22208684d1bf8c0cce95b5d123. 
Apr 30 12:40:34.653332 systemd[1]: Started cri-containerd-ee6d5d8306b2db92dc31918a8d5d0a6b52deeb6fe0959a5af914f0cd6282ad59.scope - libcontainer container ee6d5d8306b2db92dc31918a8d5d0a6b52deeb6fe0959a5af914f0cd6282ad59. Apr 30 12:40:34.668015 systemd[1]: Started cri-containerd-9ef8a79a09da930a82079899284ca20c4a7e81a0f1929323b26aa099c921866c.scope - libcontainer container 9ef8a79a09da930a82079899284ca20c4a7e81a0f1929323b26aa099c921866c. Apr 30 12:40:34.797248 kubelet[2245]: E0430 12:40:34.797136 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="3.2s" Apr 30 12:40:34.801085 containerd[1514]: time="2025-04-30T12:40:34.800729748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2cd86739a795a92d8e1212bec49347e0b122e22208684d1bf8c0cce95b5d123\"" Apr 30 12:40:34.802209 kubelet[2245]: E0430 12:40:34.801994 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:34.804416 containerd[1514]: time="2025-04-30T12:40:34.804388394Z" level=info msg="CreateContainer within sandbox \"f2cd86739a795a92d8e1212bec49347e0b122e22208684d1bf8c0cce95b5d123\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 12:40:34.818615 containerd[1514]: time="2025-04-30T12:40:34.818552035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee6d5d8306b2db92dc31918a8d5d0a6b52deeb6fe0959a5af914f0cd6282ad59\"" Apr 30 12:40:34.818860 containerd[1514]: time="2025-04-30T12:40:34.818701342Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:503026338e0c3ed6f3efea1fa2e23430,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ef8a79a09da930a82079899284ca20c4a7e81a0f1929323b26aa099c921866c\"" Apr 30 12:40:34.819456 kubelet[2245]: E0430 12:40:34.819377 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:34.820073 kubelet[2245]: E0430 12:40:34.819608 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:34.821292 kubelet[2245]: W0430 12:40:34.821242 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Apr 30 12:40:34.821292 kubelet[2245]: E0430 12:40:34.821283 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:34.821576 containerd[1514]: time="2025-04-30T12:40:34.821538631Z" level=info msg="CreateContainer within sandbox \"ee6d5d8306b2db92dc31918a8d5d0a6b52deeb6fe0959a5af914f0cd6282ad59\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 12:40:34.821852 containerd[1514]: time="2025-04-30T12:40:34.821793000Z" level=info msg="CreateContainer within sandbox \"9ef8a79a09da930a82079899284ca20c4a7e81a0f1929323b26aa099c921866c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 12:40:34.830150 containerd[1514]: 
time="2025-04-30T12:40:34.830108118Z" level=info msg="CreateContainer within sandbox \"f2cd86739a795a92d8e1212bec49347e0b122e22208684d1bf8c0cce95b5d123\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d64ce0dfec8a14ea82fb7f814c204ec1dabacf31b11cdf60b2c808635472adfb\"" Apr 30 12:40:34.831052 containerd[1514]: time="2025-04-30T12:40:34.830992757Z" level=info msg="StartContainer for \"d64ce0dfec8a14ea82fb7f814c204ec1dabacf31b11cdf60b2c808635472adfb\"" Apr 30 12:40:34.854878 kubelet[2245]: I0430 12:40:34.854629 2245 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 12:40:34.855368 containerd[1514]: time="2025-04-30T12:40:34.855298887Z" level=info msg="CreateContainer within sandbox \"ee6d5d8306b2db92dc31918a8d5d0a6b52deeb6fe0959a5af914f0cd6282ad59\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a80933006d8cd2d8e5608992f375b5a4a9763dea5068ea36c40a3740da00be3b\"" Apr 30 12:40:34.856420 containerd[1514]: time="2025-04-30T12:40:34.856383280Z" level=info msg="StartContainer for \"a80933006d8cd2d8e5608992f375b5a4a9763dea5068ea36c40a3740da00be3b\"" Apr 30 12:40:34.856717 kubelet[2245]: E0430 12:40:34.856647 2245 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 30 12:40:34.858239 containerd[1514]: time="2025-04-30T12:40:34.858197213Z" level=info msg="CreateContainer within sandbox \"9ef8a79a09da930a82079899284ca20c4a7e81a0f1929323b26aa099c921866c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1331ca8243a6f587b02df1ac2d023f80c41017032924ba73fc47555a38c17e6b\"" Apr 30 12:40:34.859728 containerd[1514]: time="2025-04-30T12:40:34.859701311Z" level=info msg="StartContainer for \"1331ca8243a6f587b02df1ac2d023f80c41017032924ba73fc47555a38c17e6b\"" Apr 30 12:40:34.878826 systemd[1]: Started 
cri-containerd-d64ce0dfec8a14ea82fb7f814c204ec1dabacf31b11cdf60b2c808635472adfb.scope - libcontainer container d64ce0dfec8a14ea82fb7f814c204ec1dabacf31b11cdf60b2c808635472adfb. Apr 30 12:40:34.918746 systemd[1]: Started cri-containerd-1331ca8243a6f587b02df1ac2d023f80c41017032924ba73fc47555a38c17e6b.scope - libcontainer container 1331ca8243a6f587b02df1ac2d023f80c41017032924ba73fc47555a38c17e6b. Apr 30 12:40:34.920649 systemd[1]: Started cri-containerd-a80933006d8cd2d8e5608992f375b5a4a9763dea5068ea36c40a3740da00be3b.scope - libcontainer container a80933006d8cd2d8e5608992f375b5a4a9763dea5068ea36c40a3740da00be3b. Apr 30 12:40:34.926747 kubelet[2245]: E0430 12:40:34.926502 2245 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183b190ec60ea115 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 12:40:31.779283221 +0000 UTC m=+0.661173103,LastTimestamp:2025-04-30 12:40:31.779283221 +0000 UTC m=+0.661173103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 30 12:40:34.948989 containerd[1514]: time="2025-04-30T12:40:34.948930934Z" level=info msg="StartContainer for \"d64ce0dfec8a14ea82fb7f814c204ec1dabacf31b11cdf60b2c808635472adfb\" returns successfully" Apr 30 12:40:34.988142 containerd[1514]: time="2025-04-30T12:40:34.988078566Z" level=info msg="StartContainer for \"a80933006d8cd2d8e5608992f375b5a4a9763dea5068ea36c40a3740da00be3b\" returns successfully" Apr 30 12:40:34.988299 containerd[1514]: time="2025-04-30T12:40:34.988194098Z" level=info 
msg="StartContainer for \"1331ca8243a6f587b02df1ac2d023f80c41017032924ba73fc47555a38c17e6b\" returns successfully" Apr 30 12:40:35.878188 kubelet[2245]: E0430 12:40:35.878139 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:35.879067 kubelet[2245]: E0430 12:40:35.879032 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:35.880508 kubelet[2245]: E0430 12:40:35.880487 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:36.458346 kubelet[2245]: I0430 12:40:36.458283 2245 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 12:40:36.884464 kubelet[2245]: E0430 12:40:36.884291 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:36.884464 kubelet[2245]: E0430 12:40:36.884332 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:36.885125 kubelet[2245]: E0430 12:40:36.884610 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:36.974488 kubelet[2245]: I0430 12:40:36.974297 2245 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Apr 30 12:40:36.974488 kubelet[2245]: E0430 12:40:36.974352 2245 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node 
\"localhost\" not found" Apr 30 12:40:37.020977 kubelet[2245]: E0430 12:40:37.020920 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:37.122114 kubelet[2245]: E0430 12:40:37.122038 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:37.223125 kubelet[2245]: E0430 12:40:37.222933 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:37.323390 kubelet[2245]: E0430 12:40:37.323304 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:37.424182 kubelet[2245]: E0430 12:40:37.424104 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:37.524871 kubelet[2245]: E0430 12:40:37.524777 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:37.625776 kubelet[2245]: E0430 12:40:37.625704 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:37.726549 kubelet[2245]: E0430 12:40:37.726493 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:37.827752 kubelet[2245]: E0430 12:40:37.827594 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:37.885141 kubelet[2245]: E0430 12:40:37.885097 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:37.928186 kubelet[2245]: E0430 12:40:37.928119 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 
12:40:38.028537 kubelet[2245]: E0430 12:40:38.028480 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:38.129012 kubelet[2245]: E0430 12:40:38.128840 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:38.229862 kubelet[2245]: E0430 12:40:38.229764 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:38.280296 kubelet[2245]: E0430 12:40:38.280229 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:38.330841 kubelet[2245]: E0430 12:40:38.330773 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:38.433055 kubelet[2245]: E0430 12:40:38.432150 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:38.533617 kubelet[2245]: E0430 12:40:38.533385 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:38.634463 kubelet[2245]: E0430 12:40:38.634348 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:38.734712 kubelet[2245]: E0430 12:40:38.734502 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:38.835185 kubelet[2245]: E0430 12:40:38.835130 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:38.935555 kubelet[2245]: E0430 12:40:38.935488 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:39.036055 kubelet[2245]: 
E0430 12:40:39.036000 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:39.094884 systemd[1]: Reload requested from client PID 2526 ('systemctl') (unit session-7.scope)... Apr 30 12:40:39.094909 systemd[1]: Reloading... Apr 30 12:40:39.136821 kubelet[2245]: E0430 12:40:39.136761 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:39.209517 zram_generator::config[2573]: No configuration found. Apr 30 12:40:39.237237 kubelet[2245]: E0430 12:40:39.237136 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:39.337852 kubelet[2245]: E0430 12:40:39.337652 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:39.347036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:40:39.438762 kubelet[2245]: E0430 12:40:39.438680 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:39.481276 systemd[1]: Reloading finished in 385 ms. Apr 30 12:40:39.507506 kubelet[2245]: I0430 12:40:39.507351 2245 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:40:39.507504 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:40:39.535623 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:40:39.536048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:40:39.536124 systemd[1]: kubelet.service: Consumed 1.381s CPU time, 120M memory peak. 
Apr 30 12:40:39.544864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:40:39.765168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:40:39.771710 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:40:39.836179 kubelet[2615]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:40:39.836784 kubelet[2615]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 12:40:39.836784 kubelet[2615]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:40:39.837083 kubelet[2615]: I0430 12:40:39.836956 2615 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:40:39.844866 kubelet[2615]: I0430 12:40:39.844804 2615 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 12:40:39.844866 kubelet[2615]: I0430 12:40:39.844844 2615 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:40:39.845221 kubelet[2615]: I0430 12:40:39.845192 2615 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 12:40:39.846902 kubelet[2615]: I0430 12:40:39.846875 2615 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 12:40:39.849057 kubelet[2615]: I0430 12:40:39.849018 2615 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:40:39.853738 kubelet[2615]: E0430 12:40:39.853691 2615 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 12:40:39.853738 kubelet[2615]: I0430 12:40:39.853729 2615 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 12:40:39.859315 kubelet[2615]: I0430 12:40:39.859265 2615 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 12:40:39.859410 kubelet[2615]: I0430 12:40:39.859400 2615 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 12:40:39.859640 kubelet[2615]: I0430 12:40:39.859576 2615 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:40:39.859839 kubelet[2615]: I0430 12:40:39.859623 2615 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 12:40:39.859839 kubelet[2615]: I0430 12:40:39.859834 2615 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 12:40:39.860008 kubelet[2615]: I0430 12:40:39.859846 2615 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 12:40:39.860008 kubelet[2615]: I0430 12:40:39.859891 2615 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:40:39.860069 kubelet[2615]: I0430 12:40:39.860033 2615 kubelet.go:408] "Attempting 
to sync node with API server" Apr 30 12:40:39.860069 kubelet[2615]: I0430 12:40:39.860056 2615 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:40:39.860141 kubelet[2615]: I0430 12:40:39.860099 2615 kubelet.go:314] "Adding apiserver pod source" Apr 30 12:40:39.860141 kubelet[2615]: I0430 12:40:39.860117 2615 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:40:39.861497 kubelet[2615]: I0430 12:40:39.860761 2615 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:40:39.861497 kubelet[2615]: I0430 12:40:39.861277 2615 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:40:39.861897 kubelet[2615]: I0430 12:40:39.861852 2615 server.go:1269] "Started kubelet" Apr 30 12:40:39.864470 kubelet[2615]: I0430 12:40:39.862141 2615 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:40:39.864470 kubelet[2615]: I0430 12:40:39.863212 2615 server.go:460] "Adding debug handlers to kubelet server" Apr 30 12:40:39.864470 kubelet[2615]: I0430 12:40:39.864266 2615 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:40:39.864810 kubelet[2615]: I0430 12:40:39.864566 2615 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:40:39.867924 kubelet[2615]: I0430 12:40:39.867871 2615 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:40:39.870714 kubelet[2615]: I0430 12:40:39.870685 2615 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 12:40:39.872796 kubelet[2615]: I0430 12:40:39.872767 2615 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 12:40:39.872892 kubelet[2615]: I0430 12:40:39.872872 2615 
desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 12:40:39.873061 kubelet[2615]: I0430 12:40:39.873039 2615 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:40:39.874876 kubelet[2615]: I0430 12:40:39.874845 2615 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:40:39.874991 kubelet[2615]: I0430 12:40:39.874961 2615 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:40:39.876117 kubelet[2615]: E0430 12:40:39.876097 2615 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:40:39.876478 kubelet[2615]: E0430 12:40:39.876345 2615 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:39.876657 kubelet[2615]: I0430 12:40:39.876523 2615 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:40:39.889130 kubelet[2615]: I0430 12:40:39.888807 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:40:39.891767 kubelet[2615]: I0430 12:40:39.891384 2615 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 12:40:39.891767 kubelet[2615]: I0430 12:40:39.891459 2615 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 12:40:39.891767 kubelet[2615]: I0430 12:40:39.891485 2615 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 12:40:39.891767 kubelet[2615]: E0430 12:40:39.891590 2615 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:40:39.921869 kubelet[2615]: I0430 12:40:39.921828 2615 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 12:40:39.921869 kubelet[2615]: I0430 12:40:39.921849 2615 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 12:40:39.921869 kubelet[2615]: I0430 12:40:39.921870 2615 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:40:39.922130 kubelet[2615]: I0430 12:40:39.922049 2615 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 12:40:39.922130 kubelet[2615]: I0430 12:40:39.922061 2615 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 12:40:39.922130 kubelet[2615]: I0430 12:40:39.922083 2615 policy_none.go:49] "None policy: Start" Apr 30 12:40:39.922748 kubelet[2615]: I0430 12:40:39.922727 2615 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 12:40:39.922821 kubelet[2615]: I0430 12:40:39.922753 2615 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:40:39.922925 kubelet[2615]: I0430 12:40:39.922906 2615 state_mem.go:75] "Updated machine memory state" Apr 30 12:40:39.928079 kubelet[2615]: I0430 12:40:39.928045 2615 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:40:39.928569 kubelet[2615]: I0430 12:40:39.928264 2615 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 12:40:39.928569 kubelet[2615]: I0430 12:40:39.928284 2615 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:40:39.928674 kubelet[2615]: I0430 12:40:39.928575 2615 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:40:40.034689 kubelet[2615]: I0430 12:40:40.034646 2615 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 12:40:40.174960 kubelet[2615]: I0430 12:40:40.174884 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:40.174960 kubelet[2615]: I0430 12:40:40.174941 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:40.174960 kubelet[2615]: I0430 12:40:40.174967 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:40.175206 kubelet[2615]: I0430 12:40:40.174989 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:40.175206 kubelet[2615]: I0430 
12:40:40.175010 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/503026338e0c3ed6f3efea1fa2e23430-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"503026338e0c3ed6f3efea1fa2e23430\") " pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:40.175206 kubelet[2615]: I0430 12:40:40.175099 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/503026338e0c3ed6f3efea1fa2e23430-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"503026338e0c3ed6f3efea1fa2e23430\") " pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:40.175206 kubelet[2615]: I0430 12:40:40.175187 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:40.175313 kubelet[2615]: I0430 12:40:40.175218 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:40.175313 kubelet[2615]: I0430 12:40:40.175242 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/503026338e0c3ed6f3efea1fa2e23430-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"503026338e0c3ed6f3efea1fa2e23430\") " pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:40.242654 kubelet[2615]: I0430 
12:40:40.242596 2615 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Apr 30 12:40:40.242841 kubelet[2615]: I0430 12:40:40.242694 2615 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Apr 30 12:40:40.304340 kubelet[2615]: E0430 12:40:40.304107 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:40.304503 kubelet[2615]: E0430 12:40:40.304382 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:40.304853 kubelet[2615]: E0430 12:40:40.304641 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:40.363903 sudo[2651]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 12:40:40.364295 sudo[2651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 12:40:40.861093 kubelet[2615]: I0430 12:40:40.861017 2615 apiserver.go:52] "Watching apiserver" Apr 30 12:40:40.873784 kubelet[2615]: I0430 12:40:40.873740 2615 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 12:40:40.905279 kubelet[2615]: E0430 12:40:40.905134 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:40.905279 kubelet[2615]: E0430 12:40:40.905222 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:40.959766 sudo[2651]: pam_unix(sudo:session): session 
closed for user root Apr 30 12:40:40.964060 kubelet[2615]: E0430 12:40:40.963760 2615 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:40.964060 kubelet[2615]: E0430 12:40:40.963978 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:41.225453 kubelet[2615]: I0430 12:40:41.224186 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.224158928 podStartE2EDuration="2.224158928s" podCreationTimestamp="2025-04-30 12:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:40:41.062519695 +0000 UTC m=+1.280307944" watchObservedRunningTime="2025-04-30 12:40:41.224158928 +0000 UTC m=+1.441947187" Apr 30 12:40:41.309787 kubelet[2615]: I0430 12:40:41.309709 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.3096869509999998 podStartE2EDuration="2.309686951s" podCreationTimestamp="2025-04-30 12:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:40:41.224334352 +0000 UTC m=+1.442122601" watchObservedRunningTime="2025-04-30 12:40:41.309686951 +0000 UTC m=+1.527475200" Apr 30 12:40:41.325956 kubelet[2615]: I0430 12:40:41.325136 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.325113156 podStartE2EDuration="2.325113156s" podCreationTimestamp="2025-04-30 12:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-04-30 12:40:41.30981231 +0000 UTC m=+1.527600559" watchObservedRunningTime="2025-04-30 12:40:41.325113156 +0000 UTC m=+1.542901405" Apr 30 12:40:41.906092 kubelet[2615]: E0430 12:40:41.906041 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:42.606793 kubelet[2615]: E0430 12:40:42.606747 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:43.042123 kubelet[2615]: E0430 12:40:43.042040 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:43.365531 sudo[1699]: pam_unix(sudo:session): session closed for user root Apr 30 12:40:43.367504 sshd[1698]: Connection closed by 10.0.0.1 port 53126 Apr 30 12:40:43.373643 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Apr 30 12:40:43.377619 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:53126.service: Deactivated successfully. Apr 30 12:40:43.380702 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 12:40:43.380979 systemd[1]: session-7.scope: Consumed 5.809s CPU time, 256.1M memory peak. Apr 30 12:40:43.384134 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit. Apr 30 12:40:43.385162 systemd-logind[1496]: Removed session 7. Apr 30 12:40:45.441719 kubelet[2615]: I0430 12:40:45.441670 2615 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 12:40:45.442239 containerd[1514]: time="2025-04-30T12:40:45.442085882Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 12:40:45.442552 kubelet[2615]: I0430 12:40:45.442271 2615 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 12:40:45.552201 update_engine[1497]: I20250430 12:40:45.552103 1497 update_attempter.cc:509] Updating boot flags... Apr 30 12:40:45.604867 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2698) Apr 30 12:40:45.672479 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2698) Apr 30 12:40:45.708790 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2698) Apr 30 12:40:46.669658 systemd[1]: Created slice kubepods-besteffort-podc972eaa0_45ef_4e80_b0e8_82d62e7f0e3d.slice - libcontainer container kubepods-besteffort-podc972eaa0_45ef_4e80_b0e8_82d62e7f0e3d.slice. Apr 30 12:40:46.683254 systemd[1]: Created slice kubepods-burstable-pod58720998_e616_4570_ac60_294fc3eef92c.slice - libcontainer container kubepods-burstable-pod58720998_e616_4570_ac60_294fc3eef92c.slice. Apr 30 12:40:46.693006 systemd[1]: Created slice kubepods-besteffort-pod13bb02f6_a7fa_4882_bbf5_108ef90e13bb.slice - libcontainer container kubepods-besteffort-pod13bb02f6_a7fa_4882_bbf5_108ef90e13bb.slice. 
Apr 30 12:40:46.715658 kubelet[2615]: I0430 12:40:46.715606 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86zrf\" (UniqueName: \"kubernetes.io/projected/58720998-e616-4570-ac60-294fc3eef92c-kube-api-access-86zrf\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.717509 kubelet[2615]: I0430 12:40:46.717481 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccjpk\" (UniqueName: \"kubernetes.io/projected/13bb02f6-a7fa-4882-bbf5-108ef90e13bb-kube-api-access-ccjpk\") pod \"cilium-operator-5d85765b45-kn8fj\" (UID: \"13bb02f6-a7fa-4882-bbf5-108ef90e13bb\") " pod="kube-system/cilium-operator-5d85765b45-kn8fj" Apr 30 12:40:46.717647 kubelet[2615]: I0430 12:40:46.717629 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-hostproc\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.717758 kubelet[2615]: I0430 12:40:46.717743 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-cni-path\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.717876 kubelet[2615]: I0430 12:40:46.717859 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-host-proc-sys-kernel\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.717984 kubelet[2615]: I0430 12:40:46.717970 2615 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58720998-e616-4570-ac60-294fc3eef92c-clustermesh-secrets\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.718091 kubelet[2615]: I0430 12:40:46.718077 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58720998-e616-4570-ac60-294fc3eef92c-hubble-tls\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.718213 kubelet[2615]: I0430 12:40:46.718198 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-host-proc-sys-net\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.718343 kubelet[2615]: I0430 12:40:46.718325 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13bb02f6-a7fa-4882-bbf5-108ef90e13bb-cilium-config-path\") pod \"cilium-operator-5d85765b45-kn8fj\" (UID: \"13bb02f6-a7fa-4882-bbf5-108ef90e13bb\") " pod="kube-system/cilium-operator-5d85765b45-kn8fj" Apr 30 12:40:46.718451 kubelet[2615]: I0430 12:40:46.718414 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c972eaa0-45ef-4e80-b0e8-82d62e7f0e3d-xtables-lock\") pod \"kube-proxy-b987k\" (UID: \"c972eaa0-45ef-4e80-b0e8-82d62e7f0e3d\") " pod="kube-system/kube-proxy-b987k" Apr 30 12:40:46.718451 kubelet[2615]: I0430 12:40:46.718465 2615 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c972eaa0-45ef-4e80-b0e8-82d62e7f0e3d-lib-modules\") pod \"kube-proxy-b987k\" (UID: \"c972eaa0-45ef-4e80-b0e8-82d62e7f0e3d\") " pod="kube-system/kube-proxy-b987k" Apr 30 12:40:46.718887 kubelet[2615]: I0430 12:40:46.718483 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-cilium-run\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.718887 kubelet[2615]: I0430 12:40:46.718503 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-cilium-cgroup\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.718887 kubelet[2615]: I0430 12:40:46.718528 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58720998-e616-4570-ac60-294fc3eef92c-cilium-config-path\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.718887 kubelet[2615]: I0430 12:40:46.718546 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d24cf\" (UniqueName: \"kubernetes.io/projected/c972eaa0-45ef-4e80-b0e8-82d62e7f0e3d-kube-api-access-d24cf\") pod \"kube-proxy-b987k\" (UID: \"c972eaa0-45ef-4e80-b0e8-82d62e7f0e3d\") " pod="kube-system/kube-proxy-b987k" Apr 30 12:40:46.718887 kubelet[2615]: I0430 12:40:46.718565 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-xtables-lock\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.718887 kubelet[2615]: I0430 12:40:46.718646 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-bpf-maps\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.719063 kubelet[2615]: I0430 12:40:46.718698 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-etc-cni-netd\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.719063 kubelet[2615]: I0430 12:40:46.718717 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c972eaa0-45ef-4e80-b0e8-82d62e7f0e3d-kube-proxy\") pod \"kube-proxy-b987k\" (UID: \"c972eaa0-45ef-4e80-b0e8-82d62e7f0e3d\") " pod="kube-system/kube-proxy-b987k" Apr 30 12:40:46.719063 kubelet[2615]: I0430 12:40:46.718748 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-lib-modules\") pod \"cilium-zjg6h\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " pod="kube-system/cilium-zjg6h" Apr 30 12:40:46.980627 kubelet[2615]: E0430 12:40:46.980423 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:46.981994 containerd[1514]: time="2025-04-30T12:40:46.981217327Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b987k,Uid:c972eaa0-45ef-4e80-b0e8-82d62e7f0e3d,Namespace:kube-system,Attempt:0,}" Apr 30 12:40:46.987669 kubelet[2615]: E0430 12:40:46.987626 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:46.988285 containerd[1514]: time="2025-04-30T12:40:46.988220838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zjg6h,Uid:58720998-e616-4570-ac60-294fc3eef92c,Namespace:kube-system,Attempt:0,}" Apr 30 12:40:47.000002 kubelet[2615]: E0430 12:40:46.999893 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:47.000552 containerd[1514]: time="2025-04-30T12:40:47.000488522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kn8fj,Uid:13bb02f6-a7fa-4882-bbf5-108ef90e13bb,Namespace:kube-system,Attempt:0,}" Apr 30 12:40:47.063859 containerd[1514]: time="2025-04-30T12:40:47.063652491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:40:47.065757 containerd[1514]: time="2025-04-30T12:40:47.065634397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:40:47.065757 containerd[1514]: time="2025-04-30T12:40:47.065702205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:40:47.065757 containerd[1514]: time="2025-04-30T12:40:47.065713196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:47.065896 containerd[1514]: time="2025-04-30T12:40:47.065806493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:47.066085 containerd[1514]: time="2025-04-30T12:40:47.066028664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:40:47.066085 containerd[1514]: time="2025-04-30T12:40:47.066064692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:47.066243 containerd[1514]: time="2025-04-30T12:40:47.066160053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:47.069597 containerd[1514]: time="2025-04-30T12:40:47.069470186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:40:47.069597 containerd[1514]: time="2025-04-30T12:40:47.069554947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:40:47.070103 containerd[1514]: time="2025-04-30T12:40:47.069571378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:47.070103 containerd[1514]: time="2025-04-30T12:40:47.069680003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:47.089856 systemd[1]: Started cri-containerd-6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41.scope - libcontainer container 6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41. 
Apr 30 12:40:47.093003 systemd[1]: Started cri-containerd-84541fd79cc279bdca8cefde9efcc12666d6eb9a4dd05945c8fb943c58e8849d.scope - libcontainer container 84541fd79cc279bdca8cefde9efcc12666d6eb9a4dd05945c8fb943c58e8849d. Apr 30 12:40:47.100924 systemd[1]: Started cri-containerd-2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906.scope - libcontainer container 2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906. Apr 30 12:40:47.138132 containerd[1514]: time="2025-04-30T12:40:47.137981519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zjg6h,Uid:58720998-e616-4570-ac60-294fc3eef92c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\"" Apr 30 12:40:47.138973 kubelet[2615]: E0430 12:40:47.138937 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:47.143977 containerd[1514]: time="2025-04-30T12:40:47.141393194Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 12:40:47.145621 containerd[1514]: time="2025-04-30T12:40:47.145412111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b987k,Uid:c972eaa0-45ef-4e80-b0e8-82d62e7f0e3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"84541fd79cc279bdca8cefde9efcc12666d6eb9a4dd05945c8fb943c58e8849d\"" Apr 30 12:40:47.146471 kubelet[2615]: E0430 12:40:47.146420 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:47.150379 containerd[1514]: time="2025-04-30T12:40:47.150221214Z" level=info msg="CreateContainer within sandbox \"84541fd79cc279bdca8cefde9efcc12666d6eb9a4dd05945c8fb943c58e8849d\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 12:40:47.160371 containerd[1514]: time="2025-04-30T12:40:47.160250910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kn8fj,Uid:13bb02f6-a7fa-4882-bbf5-108ef90e13bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\"" Apr 30 12:40:47.162519 kubelet[2615]: E0430 12:40:47.161302 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:47.177291 containerd[1514]: time="2025-04-30T12:40:47.177213696Z" level=info msg="CreateContainer within sandbox \"84541fd79cc279bdca8cefde9efcc12666d6eb9a4dd05945c8fb943c58e8849d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b22915c2d29ae9b2448dbd94accfb131bb37f054545139b86f6bcbe33c25f1ab\"" Apr 30 12:40:47.178462 containerd[1514]: time="2025-04-30T12:40:47.178373393Z" level=info msg="StartContainer for \"b22915c2d29ae9b2448dbd94accfb131bb37f054545139b86f6bcbe33c25f1ab\"" Apr 30 12:40:47.213649 systemd[1]: Started cri-containerd-b22915c2d29ae9b2448dbd94accfb131bb37f054545139b86f6bcbe33c25f1ab.scope - libcontainer container b22915c2d29ae9b2448dbd94accfb131bb37f054545139b86f6bcbe33c25f1ab. 
Apr 30 12:40:47.251083 containerd[1514]: time="2025-04-30T12:40:47.250877012Z" level=info msg="StartContainer for \"b22915c2d29ae9b2448dbd94accfb131bb37f054545139b86f6bcbe33c25f1ab\" returns successfully" Apr 30 12:40:47.918629 kubelet[2615]: E0430 12:40:47.918564 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:47.928129 kubelet[2615]: I0430 12:40:47.928051 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b987k" podStartSLOduration=1.928028522 podStartE2EDuration="1.928028522s" podCreationTimestamp="2025-04-30 12:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:40:47.92790589 +0000 UTC m=+8.145694139" watchObservedRunningTime="2025-04-30 12:40:47.928028522 +0000 UTC m=+8.145816771" Apr 30 12:40:48.325752 kubelet[2615]: E0430 12:40:48.325697 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:48.921016 kubelet[2615]: E0430 12:40:48.920979 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:52.612023 kubelet[2615]: E0430 12:40:52.611913 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:53.230012 kubelet[2615]: E0430 12:40:53.229901 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:55.296572 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2536508902.mount: Deactivated successfully. Apr 30 12:41:05.214050 containerd[1514]: time="2025-04-30T12:41:05.213939095Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:41:05.216109 containerd[1514]: time="2025-04-30T12:41:05.215983482Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 12:41:05.218174 containerd[1514]: time="2025-04-30T12:41:05.218058817Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:41:05.219789 containerd[1514]: time="2025-04-30T12:41:05.219741844Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 18.078314385s" Apr 30 12:41:05.219789 containerd[1514]: time="2025-04-30T12:41:05.219783403Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 12:41:05.224824 containerd[1514]: time="2025-04-30T12:41:05.224770648Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 12:41:05.244256 containerd[1514]: time="2025-04-30T12:41:05.244184410Z" level=info msg="CreateContainer within sandbox 
\"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:41:05.312787 containerd[1514]: time="2025-04-30T12:41:05.312471250Z" level=info msg="CreateContainer within sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb\"" Apr 30 12:41:05.315848 containerd[1514]: time="2025-04-30T12:41:05.315759968Z" level=info msg="StartContainer for \"82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb\"" Apr 30 12:41:05.354799 systemd[1]: Started cri-containerd-82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb.scope - libcontainer container 82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb. Apr 30 12:41:05.395808 containerd[1514]: time="2025-04-30T12:41:05.395612733Z" level=info msg="StartContainer for \"82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb\" returns successfully" Apr 30 12:41:05.410277 systemd[1]: cri-containerd-82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb.scope: Deactivated successfully. 
Apr 30 12:41:05.963996 kubelet[2615]: E0430 12:41:05.963937 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:06.043497 containerd[1514]: time="2025-04-30T12:41:06.043388331Z" level=info msg="shim disconnected" id=82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb namespace=k8s.io Apr 30 12:41:06.043497 containerd[1514]: time="2025-04-30T12:41:06.043490663Z" level=warning msg="cleaning up after shim disconnected" id=82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb namespace=k8s.io Apr 30 12:41:06.043497 containerd[1514]: time="2025-04-30T12:41:06.043501443Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:41:06.295237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb-rootfs.mount: Deactivated successfully. Apr 30 12:41:06.964539 kubelet[2615]: E0430 12:41:06.964485 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:06.966703 containerd[1514]: time="2025-04-30T12:41:06.966555457Z" level=info msg="CreateContainer within sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:41:06.989228 containerd[1514]: time="2025-04-30T12:41:06.989166497Z" level=info msg="CreateContainer within sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8\"" Apr 30 12:41:06.989807 containerd[1514]: time="2025-04-30T12:41:06.989760334Z" level=info msg="StartContainer for 
\"9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8\"" Apr 30 12:41:07.022590 systemd[1]: Started cri-containerd-9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8.scope - libcontainer container 9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8. Apr 30 12:41:07.050071 containerd[1514]: time="2025-04-30T12:41:07.050027654Z" level=info msg="StartContainer for \"9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8\" returns successfully" Apr 30 12:41:07.065201 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 12:41:07.065532 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:41:07.066035 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:41:07.074850 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:41:07.075128 systemd[1]: cri-containerd-9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8.scope: Deactivated successfully. Apr 30 12:41:07.100250 containerd[1514]: time="2025-04-30T12:41:07.100168681Z" level=info msg="shim disconnected" id=9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8 namespace=k8s.io Apr 30 12:41:07.100250 containerd[1514]: time="2025-04-30T12:41:07.100244443Z" level=warning msg="cleaning up after shim disconnected" id=9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8 namespace=k8s.io Apr 30 12:41:07.100250 containerd[1514]: time="2025-04-30T12:41:07.100253810Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:41:07.108563 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:41:07.295296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8-rootfs.mount: Deactivated successfully. Apr 30 12:41:07.620053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355375070.mount: Deactivated successfully. 
Apr 30 12:41:07.968021 kubelet[2615]: E0430 12:41:07.967883 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:07.970295 containerd[1514]: time="2025-04-30T12:41:07.969993871Z" level=info msg="CreateContainer within sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:41:08.563391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1991983854.mount: Deactivated successfully. Apr 30 12:41:08.568871 containerd[1514]: time="2025-04-30T12:41:08.568730927Z" level=info msg="CreateContainer within sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611\"" Apr 30 12:41:08.572028 containerd[1514]: time="2025-04-30T12:41:08.570922349Z" level=info msg="StartContainer for \"caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611\"" Apr 30 12:41:08.605761 systemd[1]: Started cri-containerd-caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611.scope - libcontainer container caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611. Apr 30 12:41:08.646791 systemd[1]: cri-containerd-caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611.scope: Deactivated successfully. Apr 30 12:41:08.739957 containerd[1514]: time="2025-04-30T12:41:08.739898933Z" level=info msg="StartContainer for \"caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611\" returns successfully" Apr 30 12:41:08.772161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611-rootfs.mount: Deactivated successfully. 
Apr 30 12:41:08.981094 containerd[1514]: time="2025-04-30T12:41:08.980895846Z" level=info msg="shim disconnected" id=caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611 namespace=k8s.io Apr 30 12:41:08.981094 containerd[1514]: time="2025-04-30T12:41:08.980964205Z" level=warning msg="cleaning up after shim disconnected" id=caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611 namespace=k8s.io Apr 30 12:41:08.981094 containerd[1514]: time="2025-04-30T12:41:08.980975857Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:41:09.011181 kubelet[2615]: E0430 12:41:09.011138 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:09.015828 containerd[1514]: time="2025-04-30T12:41:09.015555427Z" level=info msg="CreateContainer within sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:41:09.019504 containerd[1514]: time="2025-04-30T12:41:09.019448908Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:41:09.021816 containerd[1514]: time="2025-04-30T12:41:09.021351256Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 12:41:09.023848 containerd[1514]: time="2025-04-30T12:41:09.023814667Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:41:09.026343 containerd[1514]: time="2025-04-30T12:41:09.026290383Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.801466155s" Apr 30 12:41:09.026343 containerd[1514]: time="2025-04-30T12:41:09.026335167Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 12:41:09.028807 containerd[1514]: time="2025-04-30T12:41:09.028762080Z" level=info msg="CreateContainer within sandbox \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 12:41:09.032126 containerd[1514]: time="2025-04-30T12:41:09.032060222Z" level=info msg="CreateContainer within sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e\"" Apr 30 12:41:09.034199 containerd[1514]: time="2025-04-30T12:41:09.033156915Z" level=info msg="StartContainer for \"ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e\"" Apr 30 12:41:09.064607 systemd[1]: Started cri-containerd-ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e.scope - libcontainer container ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e. Apr 30 12:41:09.091573 systemd[1]: cri-containerd-ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e.scope: Deactivated successfully. 
Apr 30 12:41:09.137523 containerd[1514]: time="2025-04-30T12:41:09.137470495Z" level=info msg="StartContainer for \"ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e\" returns successfully" Apr 30 12:41:09.143058 containerd[1514]: time="2025-04-30T12:41:09.143015071Z" level=info msg="CreateContainer within sandbox \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\"" Apr 30 12:41:09.143727 containerd[1514]: time="2025-04-30T12:41:09.143696342Z" level=info msg="StartContainer for \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\"" Apr 30 12:41:09.164672 containerd[1514]: time="2025-04-30T12:41:09.164306196Z" level=info msg="shim disconnected" id=ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e namespace=k8s.io Apr 30 12:41:09.164672 containerd[1514]: time="2025-04-30T12:41:09.164377400Z" level=warning msg="cleaning up after shim disconnected" id=ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e namespace=k8s.io Apr 30 12:41:09.164672 containerd[1514]: time="2025-04-30T12:41:09.164389153Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:41:09.169587 systemd[1]: Started cri-containerd-099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb.scope - libcontainer container 099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb. 
Apr 30 12:41:09.198871 containerd[1514]: time="2025-04-30T12:41:09.198803835Z" level=info msg="StartContainer for \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\" returns successfully" Apr 30 12:41:10.013204 kubelet[2615]: E0430 12:41:10.013164 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:10.015917 kubelet[2615]: E0430 12:41:10.015887 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:10.017787 containerd[1514]: time="2025-04-30T12:41:10.017751423Z" level=info msg="CreateContainer within sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:41:10.022604 kubelet[2615]: I0430 12:41:10.022538 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-kn8fj" podStartSLOduration=2.15738944 podStartE2EDuration="24.022518395s" podCreationTimestamp="2025-04-30 12:40:46 +0000 UTC" firstStartedPulling="2025-04-30 12:40:47.1619603 +0000 UTC m=+7.379748559" lastFinishedPulling="2025-04-30 12:41:09.027089265 +0000 UTC m=+29.244877514" observedRunningTime="2025-04-30 12:41:10.022267604 +0000 UTC m=+30.240055853" watchObservedRunningTime="2025-04-30 12:41:10.022518395 +0000 UTC m=+30.240306644" Apr 30 12:41:10.046939 containerd[1514]: time="2025-04-30T12:41:10.046730035Z" level=info msg="CreateContainer within sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\"" Apr 30 12:41:10.049665 containerd[1514]: time="2025-04-30T12:41:10.048651198Z" level=info 
msg="StartContainer for \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\"" Apr 30 12:41:10.104603 systemd[1]: Started cri-containerd-dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356.scope - libcontainer container dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356. Apr 30 12:41:10.149195 containerd[1514]: time="2025-04-30T12:41:10.149131169Z" level=info msg="StartContainer for \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\" returns successfully" Apr 30 12:41:10.388208 kubelet[2615]: I0430 12:41:10.388135 2615 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Apr 30 12:41:10.708570 systemd[1]: Created slice kubepods-burstable-pod6f636c3a_8e4b_4be7_a89a_cc9aef64c6cd.slice - libcontainer container kubepods-burstable-pod6f636c3a_8e4b_4be7_a89a_cc9aef64c6cd.slice. Apr 30 12:41:10.803258 systemd[1]: Created slice kubepods-burstable-pod3aa56430_a33e_409e_8517_3a0426540c86.slice - libcontainer container kubepods-burstable-pod3aa56430_a33e_409e_8517_3a0426540c86.slice. 
Apr 30 12:41:10.886961 kubelet[2615]: I0430 12:41:10.886898 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwmhr\" (UniqueName: \"kubernetes.io/projected/6f636c3a-8e4b-4be7-a89a-cc9aef64c6cd-kube-api-access-gwmhr\") pod \"coredns-6f6b679f8f-8f56g\" (UID: \"6f636c3a-8e4b-4be7-a89a-cc9aef64c6cd\") " pod="kube-system/coredns-6f6b679f8f-8f56g" Apr 30 12:41:10.886961 kubelet[2615]: I0430 12:41:10.886954 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f636c3a-8e4b-4be7-a89a-cc9aef64c6cd-config-volume\") pod \"coredns-6f6b679f8f-8f56g\" (UID: \"6f636c3a-8e4b-4be7-a89a-cc9aef64c6cd\") " pod="kube-system/coredns-6f6b679f8f-8f56g" Apr 30 12:41:10.987734 kubelet[2615]: I0430 12:41:10.987570 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3aa56430-a33e-409e-8517-3a0426540c86-config-volume\") pod \"coredns-6f6b679f8f-l2v8z\" (UID: \"3aa56430-a33e-409e-8517-3a0426540c86\") " pod="kube-system/coredns-6f6b679f8f-l2v8z" Apr 30 12:41:10.987734 kubelet[2615]: I0430 12:41:10.987649 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26xfc\" (UniqueName: \"kubernetes.io/projected/3aa56430-a33e-409e-8517-3a0426540c86-kube-api-access-26xfc\") pod \"coredns-6f6b679f8f-l2v8z\" (UID: \"3aa56430-a33e-409e-8517-3a0426540c86\") " pod="kube-system/coredns-6f6b679f8f-l2v8z" Apr 30 12:41:11.020009 kubelet[2615]: E0430 12:41:11.019972 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:11.020009 kubelet[2615]: E0430 12:41:11.020009 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:11.311379 kubelet[2615]: E0430 12:41:11.311317 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:11.318826 containerd[1514]: time="2025-04-30T12:41:11.318771150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8f56g,Uid:6f636c3a-8e4b-4be7-a89a-cc9aef64c6cd,Namespace:kube-system,Attempt:0,}" Apr 30 12:41:11.366925 kubelet[2615]: I0430 12:41:11.366816 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zjg6h" podStartSLOduration=7.283538093 podStartE2EDuration="25.366789385s" podCreationTimestamp="2025-04-30 12:40:46 +0000 UTC" firstStartedPulling="2025-04-30 12:40:47.14086293 +0000 UTC m=+7.358651179" lastFinishedPulling="2025-04-30 12:41:05.224114222 +0000 UTC m=+25.441902471" observedRunningTime="2025-04-30 12:41:11.365264569 +0000 UTC m=+31.583052818" watchObservedRunningTime="2025-04-30 12:41:11.366789385 +0000 UTC m=+31.584577634" Apr 30 12:41:11.407964 kubelet[2615]: E0430 12:41:11.407882 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:11.408628 containerd[1514]: time="2025-04-30T12:41:11.408583973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l2v8z,Uid:3aa56430-a33e-409e-8517-3a0426540c86,Namespace:kube-system,Attempt:0,}" Apr 30 12:41:12.021579 kubelet[2615]: E0430 12:41:12.021533 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:12.843781 systemd-networkd[1430]: cilium_host: Link UP Apr 30 12:41:12.844015 
systemd-networkd[1430]: cilium_net: Link UP Apr 30 12:41:12.844020 systemd-networkd[1430]: cilium_net: Gained carrier Apr 30 12:41:12.844288 systemd-networkd[1430]: cilium_host: Gained carrier Apr 30 12:41:12.845417 systemd-networkd[1430]: cilium_host: Gained IPv6LL Apr 30 12:41:12.977685 systemd-networkd[1430]: cilium_vxlan: Link UP Apr 30 12:41:12.977699 systemd-networkd[1430]: cilium_vxlan: Gained carrier Apr 30 12:41:13.024411 kubelet[2615]: E0430 12:41:13.024335 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:13.228460 kernel: NET: Registered PF_ALG protocol family Apr 30 12:41:13.864572 systemd-networkd[1430]: cilium_net: Gained IPv6LL Apr 30 12:41:13.988160 systemd-networkd[1430]: lxc_health: Link UP Apr 30 12:41:14.002576 systemd-networkd[1430]: lxc_health: Gained carrier Apr 30 12:41:14.026460 kubelet[2615]: E0430 12:41:14.026240 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:14.247636 systemd-networkd[1430]: cilium_vxlan: Gained IPv6LL Apr 30 12:41:14.443471 kernel: eth0: renamed from tmpbdb75 Apr 30 12:41:14.447444 systemd-networkd[1430]: lxc2669317ffbac: Link UP Apr 30 12:41:14.447918 systemd-networkd[1430]: lxc2669317ffbac: Gained carrier Apr 30 12:41:14.485478 kernel: eth0: renamed from tmp4360f Apr 30 12:41:14.493959 systemd-networkd[1430]: lxc502f0e4df906: Link UP Apr 30 12:41:14.494321 systemd-networkd[1430]: lxc502f0e4df906: Gained carrier Apr 30 12:41:15.028555 kubelet[2615]: E0430 12:41:15.028357 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:15.143673 systemd-networkd[1430]: lxc_health: Gained IPv6LL Apr 30 12:41:16.030265 
kubelet[2615]: E0430 12:41:16.030216 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:16.231703 systemd-networkd[1430]: lxc2669317ffbac: Gained IPv6LL
Apr 30 12:41:16.551664 systemd-networkd[1430]: lxc502f0e4df906: Gained IPv6LL
Apr 30 12:41:18.216482 containerd[1514]: time="2025-04-30T12:41:18.216345568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:41:18.216482 containerd[1514]: time="2025-04-30T12:41:18.216408737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:41:18.216482 containerd[1514]: time="2025-04-30T12:41:18.216423796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:41:18.217077 containerd[1514]: time="2025-04-30T12:41:18.216558468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:41:18.254736 systemd[1]: Started cri-containerd-4360f5b1ded41843be5dfef1d2530f21020c287da5f3813aa6ef00c1f5b3667f.scope - libcontainer container 4360f5b1ded41843be5dfef1d2530f21020c287da5f3813aa6ef00c1f5b3667f.
Apr 30 12:41:18.262382 containerd[1514]: time="2025-04-30T12:41:18.262261213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:41:18.262602 containerd[1514]: time="2025-04-30T12:41:18.262340662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:41:18.262602 containerd[1514]: time="2025-04-30T12:41:18.262355570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:41:18.262602 containerd[1514]: time="2025-04-30T12:41:18.262502536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:41:18.273734 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 30 12:41:18.291627 systemd[1]: Started cri-containerd-bdb7519980f55e7c958db5542e1e6688883900b8119bc85261153e3133c1cf6d.scope - libcontainer container bdb7519980f55e7c958db5542e1e6688883900b8119bc85261153e3133c1cf6d.
Apr 30 12:41:18.307405 containerd[1514]: time="2025-04-30T12:41:18.306134602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l2v8z,Uid:3aa56430-a33e-409e-8517-3a0426540c86,Namespace:kube-system,Attempt:0,} returns sandbox id \"4360f5b1ded41843be5dfef1d2530f21020c287da5f3813aa6ef00c1f5b3667f\""
Apr 30 12:41:18.307630 kubelet[2615]: E0430 12:41:18.307320 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:18.310507 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 30 12:41:18.311814 containerd[1514]: time="2025-04-30T12:41:18.311765248Z" level=info msg="CreateContainer within sandbox \"4360f5b1ded41843be5dfef1d2530f21020c287da5f3813aa6ef00c1f5b3667f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 12:41:18.336897 containerd[1514]: time="2025-04-30T12:41:18.336855095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8f56g,Uid:6f636c3a-8e4b-4be7-a89a-cc9aef64c6cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdb7519980f55e7c958db5542e1e6688883900b8119bc85261153e3133c1cf6d\""
Apr 30 12:41:18.337596 kubelet[2615]: E0430 12:41:18.337575 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:18.339298 containerd[1514]: time="2025-04-30T12:41:18.339267116Z" level=info msg="CreateContainer within sandbox \"bdb7519980f55e7c958db5542e1e6688883900b8119bc85261153e3133c1cf6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 12:41:19.490870 containerd[1514]: time="2025-04-30T12:41:19.490798903Z" level=info msg="CreateContainer within sandbox \"4360f5b1ded41843be5dfef1d2530f21020c287da5f3813aa6ef00c1f5b3667f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"26e8c9dda28e0d182e684affeacb3b2fc9395755669f775207aa1407673d888d\""
Apr 30 12:41:19.491560 containerd[1514]: time="2025-04-30T12:41:19.491524616Z" level=info msg="StartContainer for \"26e8c9dda28e0d182e684affeacb3b2fc9395755669f775207aa1407673d888d\""
Apr 30 12:41:19.504125 containerd[1514]: time="2025-04-30T12:41:19.503830178Z" level=info msg="CreateContainer within sandbox \"bdb7519980f55e7c958db5542e1e6688883900b8119bc85261153e3133c1cf6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"458c0f3e40195ae5b9f96af14bc059a2dc3ffe92b722fb141533076d7e0e6661\""
Apr 30 12:41:19.504544 containerd[1514]: time="2025-04-30T12:41:19.504513942Z" level=info msg="StartContainer for \"458c0f3e40195ae5b9f96af14bc059a2dc3ffe92b722fb141533076d7e0e6661\""
Apr 30 12:41:19.543634 systemd[1]: Started cri-containerd-26e8c9dda28e0d182e684affeacb3b2fc9395755669f775207aa1407673d888d.scope - libcontainer container 26e8c9dda28e0d182e684affeacb3b2fc9395755669f775207aa1407673d888d.
Apr 30 12:41:19.545216 systemd[1]: Started cri-containerd-458c0f3e40195ae5b9f96af14bc059a2dc3ffe92b722fb141533076d7e0e6661.scope - libcontainer container 458c0f3e40195ae5b9f96af14bc059a2dc3ffe92b722fb141533076d7e0e6661.
Apr 30 12:41:20.028580 containerd[1514]: time="2025-04-30T12:41:20.028502936Z" level=info msg="StartContainer for \"26e8c9dda28e0d182e684affeacb3b2fc9395755669f775207aa1407673d888d\" returns successfully"
Apr 30 12:41:20.028580 containerd[1514]: time="2025-04-30T12:41:20.028503007Z" level=info msg="StartContainer for \"458c0f3e40195ae5b9f96af14bc059a2dc3ffe92b722fb141533076d7e0e6661\" returns successfully"
Apr 30 12:41:20.040903 kubelet[2615]: E0430 12:41:20.040222 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:20.041444 kubelet[2615]: E0430 12:41:20.041413 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:20.406546 kubelet[2615]: I0430 12:41:20.405222 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8f56g" podStartSLOduration=34.405201111 podStartE2EDuration="34.405201111s" podCreationTimestamp="2025-04-30 12:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:41:20.385275981 +0000 UTC m=+40.603064230" watchObservedRunningTime="2025-04-30 12:41:20.405201111 +0000 UTC m=+40.622989360"
Apr 30 12:41:20.406546 kubelet[2615]: I0430 12:41:20.405328 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-l2v8z" podStartSLOduration=34.40532315 podStartE2EDuration="34.40532315s" podCreationTimestamp="2025-04-30 12:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:41:20.40484932 +0000 UTC m=+40.622637569" watchObservedRunningTime="2025-04-30 12:41:20.40532315 +0000 UTC m=+40.623111399"
Apr 30 12:41:21.043751 kubelet[2615]: E0430 12:41:21.043307 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:21.045087 kubelet[2615]: E0430 12:41:21.044045 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:21.435864 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:35932.service - OpenSSH per-connection server daemon (10.0.0.1:35932).
Apr 30 12:41:21.480864 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 35932 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:21.483198 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:21.488117 systemd-logind[1496]: New session 8 of user core.
Apr 30 12:41:21.497654 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 12:41:21.770323 sshd[4025]: Connection closed by 10.0.0.1 port 35932
Apr 30 12:41:21.770713 sshd-session[4019]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:21.775222 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:35932.service: Deactivated successfully.
Apr 30 12:41:21.777531 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 12:41:21.778345 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit.
Apr 30 12:41:21.779222 systemd-logind[1496]: Removed session 8.
Apr 30 12:41:22.044849 kubelet[2615]: E0430 12:41:22.044713 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:22.044849 kubelet[2615]: E0430 12:41:22.044772 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:23.047779 kubelet[2615]: E0430 12:41:23.047704 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:23.048375 kubelet[2615]: E0430 12:41:23.048248 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:26.783921 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:33598.service - OpenSSH per-connection server daemon (10.0.0.1:33598).
Apr 30 12:41:26.828452 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 33598 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:26.830293 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:26.834898 systemd-logind[1496]: New session 9 of user core.
Apr 30 12:41:26.841577 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 12:41:26.969531 sshd[4042]: Connection closed by 10.0.0.1 port 33598
Apr 30 12:41:26.970002 sshd-session[4040]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:26.975124 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:33598.service: Deactivated successfully.
Apr 30 12:41:26.978076 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 12:41:26.979092 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit.
Apr 30 12:41:26.980136 systemd-logind[1496]: Removed session 9.
Apr 30 12:41:32.000025 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:33608.service - OpenSSH per-connection server daemon (10.0.0.1:33608).
Apr 30 12:41:32.043308 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 33608 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:32.045586 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:32.050952 systemd-logind[1496]: New session 10 of user core.
Apr 30 12:41:32.058632 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 12:41:32.174499 sshd[4058]: Connection closed by 10.0.0.1 port 33608
Apr 30 12:41:32.175008 sshd-session[4056]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:32.180102 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:33608.service: Deactivated successfully.
Apr 30 12:41:32.182525 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 12:41:32.183319 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit.
Apr 30 12:41:32.184300 systemd-logind[1496]: Removed session 10.
Apr 30 12:41:37.189177 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:50206.service - OpenSSH per-connection server daemon (10.0.0.1:50206).
Apr 30 12:41:37.233986 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 50206 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:37.235889 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:37.241355 systemd-logind[1496]: New session 11 of user core.
Apr 30 12:41:37.249601 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 12:41:37.376851 sshd[4075]: Connection closed by 10.0.0.1 port 50206
Apr 30 12:41:37.377282 sshd-session[4073]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:37.381942 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:50206.service: Deactivated successfully.
Apr 30 12:41:37.384281 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 12:41:37.385196 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit.
Apr 30 12:41:37.386184 systemd-logind[1496]: Removed session 11.
Apr 30 12:41:42.392572 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:50214.service - OpenSSH per-connection server daemon (10.0.0.1:50214).
Apr 30 12:41:42.450211 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 50214 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:42.452181 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:42.457877 systemd-logind[1496]: New session 12 of user core.
Apr 30 12:41:42.468617 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 12:41:42.595997 sshd[4094]: Connection closed by 10.0.0.1 port 50214
Apr 30 12:41:42.596653 sshd-session[4092]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:42.606507 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:50214.service: Deactivated successfully.
Apr 30 12:41:42.609337 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 12:41:42.611374 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit.
Apr 30 12:41:42.621974 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:50222.service - OpenSSH per-connection server daemon (10.0.0.1:50222).
Apr 30 12:41:42.623261 systemd-logind[1496]: Removed session 12.
Apr 30 12:41:42.665142 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 50222 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:42.666619 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:42.671119 systemd-logind[1496]: New session 13 of user core.
Apr 30 12:41:42.686550 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 12:41:42.886005 sshd[4111]: Connection closed by 10.0.0.1 port 50222
Apr 30 12:41:42.886491 sshd-session[4108]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:42.901109 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:50222.service: Deactivated successfully.
Apr 30 12:41:42.903995 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 12:41:42.905679 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit.
Apr 30 12:41:42.914864 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:50226.service - OpenSSH per-connection server daemon (10.0.0.1:50226).
Apr 30 12:41:42.915607 systemd-logind[1496]: Removed session 13.
Apr 30 12:41:42.955263 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 50226 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:42.957282 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:42.962694 systemd-logind[1496]: New session 14 of user core.
Apr 30 12:41:42.971603 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 12:41:43.095632 sshd[4125]: Connection closed by 10.0.0.1 port 50226
Apr 30 12:41:43.096099 sshd-session[4122]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:43.101722 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:50226.service: Deactivated successfully.
Apr 30 12:41:43.104648 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 12:41:43.105684 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit.
Apr 30 12:41:43.106857 systemd-logind[1496]: Removed session 14.
Apr 30 12:41:48.109554 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:54046.service - OpenSSH per-connection server daemon (10.0.0.1:54046).
Apr 30 12:41:48.152053 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 54046 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:48.154373 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:48.159221 systemd-logind[1496]: New session 15 of user core.
Apr 30 12:41:48.166599 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 12:41:48.284310 sshd[4142]: Connection closed by 10.0.0.1 port 54046
Apr 30 12:41:48.284811 sshd-session[4140]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:48.290357 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:54046.service: Deactivated successfully.
Apr 30 12:41:48.293280 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 12:41:48.294153 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit.
Apr 30 12:41:48.295156 systemd-logind[1496]: Removed session 15.
Apr 30 12:41:52.892978 kubelet[2615]: E0430 12:41:52.892835 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:53.298128 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:54050.service - OpenSSH per-connection server daemon (10.0.0.1:54050).
Apr 30 12:41:53.341882 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 54050 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:53.343833 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:53.349090 systemd-logind[1496]: New session 16 of user core.
Apr 30 12:41:53.356579 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 12:41:53.474936 sshd[4157]: Connection closed by 10.0.0.1 port 54050
Apr 30 12:41:53.475474 sshd-session[4155]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:53.484541 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:54050.service: Deactivated successfully.
Apr 30 12:41:53.486712 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 12:41:53.488768 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit.
Apr 30 12:41:53.494699 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:54060.service - OpenSSH per-connection server daemon (10.0.0.1:54060).
Apr 30 12:41:53.496105 systemd-logind[1496]: Removed session 16.
Apr 30 12:41:53.534836 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 54060 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:53.536739 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:53.542209 systemd-logind[1496]: New session 17 of user core.
Apr 30 12:41:53.556733 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 12:41:53.960360 sshd[4172]: Connection closed by 10.0.0.1 port 54060
Apr 30 12:41:53.960854 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:53.975063 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:54060.service: Deactivated successfully.
Apr 30 12:41:53.977321 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 12:41:53.979388 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit.
Apr 30 12:41:53.986802 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:54076.service - OpenSSH per-connection server daemon (10.0.0.1:54076).
Apr 30 12:41:53.988088 systemd-logind[1496]: Removed session 17.
Apr 30 12:41:54.031773 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 54076 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:54.033675 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:54.039849 systemd-logind[1496]: New session 18 of user core.
Apr 30 12:41:54.050748 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 12:41:56.160968 sshd[4185]: Connection closed by 10.0.0.1 port 54076
Apr 30 12:41:56.162064 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:56.172417 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:54076.service: Deactivated successfully.
Apr 30 12:41:56.175205 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 12:41:56.177705 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit.
Apr 30 12:41:56.183042 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:38398.service - OpenSSH per-connection server daemon (10.0.0.1:38398).
Apr 30 12:41:56.184632 systemd-logind[1496]: Removed session 18.
Apr 30 12:41:56.228311 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 38398 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:56.230463 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:56.235954 systemd-logind[1496]: New session 19 of user core.
Apr 30 12:41:56.245673 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 12:41:56.509950 sshd[4206]: Connection closed by 10.0.0.1 port 38398
Apr 30 12:41:56.511686 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:56.524386 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:38398.service: Deactivated successfully.
Apr 30 12:41:56.526907 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 12:41:56.528958 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit.
Apr 30 12:41:56.538714 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:38402.service - OpenSSH per-connection server daemon (10.0.0.1:38402).
Apr 30 12:41:56.539831 systemd-logind[1496]: Removed session 19.
Apr 30 12:41:56.579456 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 38402 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:56.581720 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:56.586965 systemd-logind[1496]: New session 20 of user core.
Apr 30 12:41:56.604716 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 12:41:56.777842 sshd[4219]: Connection closed by 10.0.0.1 port 38402
Apr 30 12:41:56.778463 sshd-session[4216]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:56.783960 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:38402.service: Deactivated successfully.
Apr 30 12:41:56.786476 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 12:41:56.787350 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit.
Apr 30 12:41:56.788633 systemd-logind[1496]: Removed session 20.
Apr 30 12:42:01.804832 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:38412.service - OpenSSH per-connection server daemon (10.0.0.1:38412).
Apr 30 12:42:01.846471 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 38412 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:42:01.848395 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:42:01.853371 systemd-logind[1496]: New session 21 of user core.
Apr 30 12:42:01.865734 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 12:42:01.989966 sshd[4235]: Connection closed by 10.0.0.1 port 38412
Apr 30 12:42:01.990445 sshd-session[4233]: pam_unix(sshd:session): session closed for user core
Apr 30 12:42:01.995534 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:38412.service: Deactivated successfully.
Apr 30 12:42:01.997781 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 12:42:01.998740 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit.
Apr 30 12:42:01.999788 systemd-logind[1496]: Removed session 21.
Apr 30 12:42:07.005095 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:60742.service - OpenSSH per-connection server daemon (10.0.0.1:60742).
Apr 30 12:42:07.047724 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 60742 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:42:07.049666 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:42:07.054530 systemd-logind[1496]: New session 22 of user core.
Apr 30 12:42:07.062589 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 12:42:07.194117 sshd[4253]: Connection closed by 10.0.0.1 port 60742
Apr 30 12:42:07.194652 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
Apr 30 12:42:07.200081 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:60742.service: Deactivated successfully.
Apr 30 12:42:07.203019 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 12:42:07.204112 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit.
Apr 30 12:42:07.205152 systemd-logind[1496]: Removed session 22.
Apr 30 12:42:12.208383 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:60746.service - OpenSSH per-connection server daemon (10.0.0.1:60746).
Apr 30 12:42:12.251919 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 60746 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:42:12.253611 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:42:12.258559 systemd-logind[1496]: New session 23 of user core.
Apr 30 12:42:12.268711 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 12:42:12.403063 sshd[4268]: Connection closed by 10.0.0.1 port 60746
Apr 30 12:42:12.403471 sshd-session[4266]: pam_unix(sshd:session): session closed for user core
Apr 30 12:42:12.408221 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:60746.service: Deactivated successfully.
Apr 30 12:42:12.410649 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 12:42:12.411376 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit.
Apr 30 12:42:12.412725 systemd-logind[1496]: Removed session 23.
Apr 30 12:42:14.892764 kubelet[2615]: E0430 12:42:14.892685 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:17.417717 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:48102.service - OpenSSH per-connection server daemon (10.0.0.1:48102).
Apr 30 12:42:17.462417 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 48102 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:42:17.464247 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:42:17.468618 systemd-logind[1496]: New session 24 of user core.
Apr 30 12:42:17.478604 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 12:42:17.624783 sshd[4286]: Connection closed by 10.0.0.1 port 48102
Apr 30 12:42:17.625233 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Apr 30 12:42:17.639121 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:48102.service: Deactivated successfully.
Apr 30 12:42:17.641451 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 12:42:17.643681 systemd-logind[1496]: Session 24 logged out. Waiting for processes to exit.
Apr 30 12:42:17.658730 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:48104.service - OpenSSH per-connection server daemon (10.0.0.1:48104).
Apr 30 12:42:17.659890 systemd-logind[1496]: Removed session 24.
Apr 30 12:42:17.698363 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 48104 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:42:17.700599 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:42:17.707382 systemd-logind[1496]: New session 25 of user core.
Apr 30 12:42:17.715643 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 12:42:19.313022 containerd[1514]: time="2025-04-30T12:42:19.312951013Z" level=info msg="StopContainer for \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\" with timeout 30 (s)"
Apr 30 12:42:19.324340 containerd[1514]: time="2025-04-30T12:42:19.324277351Z" level=info msg="Stop container \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\" with signal terminated"
Apr 30 12:42:19.340446 systemd[1]: cri-containerd-099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb.scope: Deactivated successfully.
Apr 30 12:42:19.356280 containerd[1514]: time="2025-04-30T12:42:19.356070110Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 12:42:19.366856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb-rootfs.mount: Deactivated successfully.
Apr 30 12:42:19.367221 containerd[1514]: time="2025-04-30T12:42:19.366993123Z" level=info msg="StopContainer for \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\" with timeout 2 (s)"
Apr 30 12:42:19.367332 containerd[1514]: time="2025-04-30T12:42:19.367245483Z" level=info msg="Stop container \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\" with signal terminated"
Apr 30 12:42:19.378606 systemd-networkd[1430]: lxc_health: Link DOWN
Apr 30 12:42:19.378617 systemd-networkd[1430]: lxc_health: Lost carrier
Apr 30 12:42:19.379367 containerd[1514]: time="2025-04-30T12:42:19.378672321Z" level=info msg="shim disconnected" id=099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb namespace=k8s.io
Apr 30 12:42:19.379367 containerd[1514]: time="2025-04-30T12:42:19.378734770Z" level=warning msg="cleaning up after shim disconnected" id=099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb namespace=k8s.io
Apr 30 12:42:19.379367 containerd[1514]: time="2025-04-30T12:42:19.378751341Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:42:19.402088 systemd[1]: cri-containerd-dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356.scope: Deactivated successfully.
Apr 30 12:42:19.403128 containerd[1514]: time="2025-04-30T12:42:19.402469068Z" level=info msg="StopContainer for \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\" returns successfully"
Apr 30 12:42:19.403180 systemd[1]: cri-containerd-dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356.scope: Consumed 7.794s CPU time, 126M memory peak, 460K read from disk, 13.3M written to disk.
Apr 30 12:42:19.409008 containerd[1514]: time="2025-04-30T12:42:19.408943374Z" level=info msg="StopPodSandbox for \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\""
Apr 30 12:42:19.420771 containerd[1514]: time="2025-04-30T12:42:19.409019479Z" level=info msg="Container to stop \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:42:19.424719 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906-shm.mount: Deactivated successfully.
Apr 30 12:42:19.430533 systemd[1]: cri-containerd-2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906.scope: Deactivated successfully.
Apr 30 12:42:19.434763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356-rootfs.mount: Deactivated successfully.
Apr 30 12:42:19.448080 containerd[1514]: time="2025-04-30T12:42:19.447977376Z" level=info msg="shim disconnected" id=dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356 namespace=k8s.io
Apr 30 12:42:19.448536 containerd[1514]: time="2025-04-30T12:42:19.448512430Z" level=warning msg="cleaning up after shim disconnected" id=dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356 namespace=k8s.io
Apr 30 12:42:19.448622 containerd[1514]: time="2025-04-30T12:42:19.448604645Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:42:19.459071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906-rootfs.mount: Deactivated successfully.
Apr 30 12:42:19.473587 containerd[1514]: time="2025-04-30T12:42:19.473492259Z" level=info msg="shim disconnected" id=2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906 namespace=k8s.io
Apr 30 12:42:19.473587 containerd[1514]: time="2025-04-30T12:42:19.473579936Z" level=warning msg="cleaning up after shim disconnected" id=2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906 namespace=k8s.io
Apr 30 12:42:19.473587 containerd[1514]: time="2025-04-30T12:42:19.473593030Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:42:19.476804 containerd[1514]: time="2025-04-30T12:42:19.476133759Z" level=info msg="StopContainer for \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\" returns successfully"
Apr 30 12:42:19.476804 containerd[1514]: time="2025-04-30T12:42:19.476671108Z" level=info msg="StopPodSandbox for \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\""
Apr 30 12:42:19.476804 containerd[1514]: time="2025-04-30T12:42:19.476701245Z" level=info msg="Container to stop \"9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:42:19.476804 containerd[1514]: time="2025-04-30T12:42:19.476738736Z" level=info msg="Container to stop \"82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:42:19.476804 containerd[1514]: time="2025-04-30T12:42:19.476750489Z" level=info msg="Container to stop \"caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:42:19.476804 containerd[1514]: time="2025-04-30T12:42:19.476760427Z" level=info msg="Container to stop \"ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:42:19.476804 containerd[1514]: time="2025-04-30T12:42:19.476770576Z" level=info msg="Container to stop \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:42:19.487213 systemd[1]: cri-containerd-6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41.scope: Deactivated successfully.
Apr 30 12:42:19.494454 containerd[1514]: time="2025-04-30T12:42:19.494396100Z" level=info msg="TearDown network for sandbox \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\" successfully"
Apr 30 12:42:19.494660 containerd[1514]: time="2025-04-30T12:42:19.494638470Z" level=info msg="StopPodSandbox for \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\" returns successfully"
Apr 30 12:42:19.534130 containerd[1514]: time="2025-04-30T12:42:19.534037112Z" level=info msg="shim disconnected" id=6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41 namespace=k8s.io
Apr 30 12:42:19.534130 containerd[1514]: time="2025-04-30T12:42:19.534123125Z" level=warning msg="cleaning up after shim disconnected" id=6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41 namespace=k8s.io
Apr 30 12:42:19.534130 containerd[1514]: time="2025-04-30T12:42:19.534136901Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:42:19.562397 containerd[1514]: time="2025-04-30T12:42:19.562323061Z" level=info msg="TearDown network for sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" successfully"
Apr 30 12:42:19.562397 containerd[1514]: time="2025-04-30T12:42:19.562380089Z" level=info msg="StopPodSandbox for \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" returns successfully"
Apr 30 12:42:19.697780 kubelet[2615]: I0430 12:42:19.697587 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-etc-cni-netd\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") "
Apr 30 12:42:19.697780 kubelet[2615]: I0430 12:42:19.697668 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccjpk\" (UniqueName: \"kubernetes.io/projected/13bb02f6-a7fa-4882-bbf5-108ef90e13bb-kube-api-access-ccjpk\") pod \"13bb02f6-a7fa-4882-bbf5-108ef90e13bb\" (UID: \"13bb02f6-a7fa-4882-bbf5-108ef90e13bb\") "
Apr 30 12:42:19.697780 kubelet[2615]: I0430 12:42:19.697694 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13bb02f6-a7fa-4882-bbf5-108ef90e13bb-cilium-config-path\") pod \"13bb02f6-a7fa-4882-bbf5-108ef90e13bb\" (UID: \"13bb02f6-a7fa-4882-bbf5-108ef90e13bb\") "
Apr 30 12:42:19.697780 kubelet[2615]: I0430 12:42:19.697711 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-bpf-maps\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") "
Apr 30 12:42:19.697780 kubelet[2615]: I0430 12:42:19.697731 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86zrf\" (UniqueName: \"kubernetes.io/projected/58720998-e616-4570-ac60-294fc3eef92c-kube-api-access-86zrf\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") "
Apr 30 12:42:19.697780 kubelet[2615]: I0430 12:42:19.697747 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-host-proc-sys-net\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") "
Apr 30 12:42:19.698406 kubelet[2615]: I0430 12:42:19.697766 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58720998-e616-4570-ac60-294fc3eef92c-cilium-config-path\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") "
Apr 30 12:42:19.698406 kubelet[2615]: I0430 12:42:19.697791 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58720998-e616-4570-ac60-294fc3eef92c-hubble-tls\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") "
Apr 30 12:42:19.698406 kubelet[2615]: I0430 12:42:19.697814 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-cni-path\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") "
Apr 30 12:42:19.698406 kubelet[2615]: I0430 12:42:19.697840 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58720998-e616-4570-ac60-294fc3eef92c-clustermesh-secrets\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") "
Apr 30 12:42:19.698406 kubelet[2615]: I0430 12:42:19.697857 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-xtables-lock\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") "
Apr 30 12:42:19.698406 kubelet[2615]: I0430 12:42:19.697875 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-cilium-cgroup\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") "
Apr 30 12:42:19.698731 kubelet[2615]: I0430 12:42:19.697892 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-lib-modules\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") "
Apr 30 12:42:19.698731 kubelet[2615]:
I0430 12:42:19.697913 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-cilium-run\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " Apr 30 12:42:19.698731 kubelet[2615]: I0430 12:42:19.697954 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-hostproc\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " Apr 30 12:42:19.698731 kubelet[2615]: I0430 12:42:19.697977 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-host-proc-sys-kernel\") pod \"58720998-e616-4570-ac60-294fc3eef92c\" (UID: \"58720998-e616-4570-ac60-294fc3eef92c\") " Apr 30 12:42:19.701295 kubelet[2615]: I0430 12:42:19.697743 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:42:19.701546 kubelet[2615]: I0430 12:42:19.697784 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:42:19.701546 kubelet[2615]: I0430 12:42:19.698070 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:42:19.701546 kubelet[2615]: I0430 12:42:19.701209 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:42:19.701546 kubelet[2615]: I0430 12:42:19.701257 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:42:19.701546 kubelet[2615]: I0430 12:42:19.701268 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:42:19.701768 kubelet[2615]: I0430 12:42:19.701722 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13bb02f6-a7fa-4882-bbf5-108ef90e13bb-kube-api-access-ccjpk" (OuterVolumeSpecName: "kube-api-access-ccjpk") pod "13bb02f6-a7fa-4882-bbf5-108ef90e13bb" (UID: "13bb02f6-a7fa-4882-bbf5-108ef90e13bb"). InnerVolumeSpecName "kube-api-access-ccjpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 12:42:19.703541 kubelet[2615]: I0430 12:42:19.701739 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:42:19.703624 kubelet[2615]: I0430 12:42:19.701862 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:42:19.703678 kubelet[2615]: I0430 12:42:19.701877 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-hostproc" (OuterVolumeSpecName: "hostproc") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:42:19.703730 kubelet[2615]: I0430 12:42:19.702225 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-cni-path" (OuterVolumeSpecName: "cni-path") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:42:19.703804 kubelet[2615]: I0430 12:42:19.703588 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58720998-e616-4570-ac60-294fc3eef92c-kube-api-access-86zrf" (OuterVolumeSpecName: "kube-api-access-86zrf") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "kube-api-access-86zrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 12:42:19.704287 kubelet[2615]: I0430 12:42:19.704245 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58720998-e616-4570-ac60-294fc3eef92c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 12:42:19.704578 kubelet[2615]: I0430 12:42:19.704547 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13bb02f6-a7fa-4882-bbf5-108ef90e13bb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "13bb02f6-a7fa-4882-bbf5-108ef90e13bb" (UID: "13bb02f6-a7fa-4882-bbf5-108ef90e13bb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 12:42:19.705638 kubelet[2615]: I0430 12:42:19.705593 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58720998-e616-4570-ac60-294fc3eef92c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 12:42:19.707384 kubelet[2615]: I0430 12:42:19.707317 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58720998-e616-4570-ac60-294fc3eef92c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "58720998-e616-4570-ac60-294fc3eef92c" (UID: "58720998-e616-4570-ac60-294fc3eef92c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 12:42:19.799043 kubelet[2615]: I0430 12:42:19.798807 2615 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799043 kubelet[2615]: I0430 12:42:19.798863 2615 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799043 kubelet[2615]: I0430 12:42:19.798879 2615 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799043 kubelet[2615]: I0430 12:42:19.798891 2615 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ccjpk\" (UniqueName: \"kubernetes.io/projected/13bb02f6-a7fa-4882-bbf5-108ef90e13bb-kube-api-access-ccjpk\") on node 
\"localhost\" DevicePath \"\"" Apr 30 12:42:19.799043 kubelet[2615]: I0430 12:42:19.798903 2615 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13bb02f6-a7fa-4882-bbf5-108ef90e13bb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799043 kubelet[2615]: I0430 12:42:19.798915 2615 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799043 kubelet[2615]: I0430 12:42:19.798927 2615 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-86zrf\" (UniqueName: \"kubernetes.io/projected/58720998-e616-4570-ac60-294fc3eef92c-kube-api-access-86zrf\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799043 kubelet[2615]: I0430 12:42:19.798938 2615 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799511 kubelet[2615]: I0430 12:42:19.798951 2615 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58720998-e616-4570-ac60-294fc3eef92c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799511 kubelet[2615]: I0430 12:42:19.798962 2615 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58720998-e616-4570-ac60-294fc3eef92c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799511 kubelet[2615]: I0430 12:42:19.798975 2615 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799511 kubelet[2615]: I0430 12:42:19.798987 
2615 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799511 kubelet[2615]: I0430 12:42:19.798998 2615 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58720998-e616-4570-ac60-294fc3eef92c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799511 kubelet[2615]: I0430 12:42:19.799009 2615 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799511 kubelet[2615]: I0430 12:42:19.799019 2615 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.799511 kubelet[2615]: I0430 12:42:19.799043 2615 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58720998-e616-4570-ac60-294fc3eef92c-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:19.895883 kubelet[2615]: E0430 12:42:19.894668 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:19.904960 systemd[1]: Removed slice kubepods-burstable-pod58720998_e616_4570_ac60_294fc3eef92c.slice - libcontainer container kubepods-burstable-pod58720998_e616_4570_ac60_294fc3eef92c.slice. Apr 30 12:42:19.905127 systemd[1]: kubepods-burstable-pod58720998_e616_4570_ac60_294fc3eef92c.slice: Consumed 7.916s CPU time, 126.3M memory peak, 480K read from disk, 13.3M written to disk. 
Apr 30 12:42:19.906706 systemd[1]: Removed slice kubepods-besteffort-pod13bb02f6_a7fa_4882_bbf5_108ef90e13bb.slice - libcontainer container kubepods-besteffort-pod13bb02f6_a7fa_4882_bbf5_108ef90e13bb.slice. Apr 30 12:42:19.957891 kubelet[2615]: E0430 12:42:19.957720 2615 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 12:42:20.173377 kubelet[2615]: I0430 12:42:20.173325 2615 scope.go:117] "RemoveContainer" containerID="dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356" Apr 30 12:42:20.175744 containerd[1514]: time="2025-04-30T12:42:20.175635888Z" level=info msg="RemoveContainer for \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\"" Apr 30 12:42:20.279350 containerd[1514]: time="2025-04-30T12:42:20.279271919Z" level=info msg="RemoveContainer for \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\" returns successfully" Apr 30 12:42:20.279946 kubelet[2615]: I0430 12:42:20.279840 2615 scope.go:117] "RemoveContainer" containerID="ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e" Apr 30 12:42:20.281468 containerd[1514]: time="2025-04-30T12:42:20.281410783Z" level=info msg="RemoveContainer for \"ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e\"" Apr 30 12:42:20.330958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41-rootfs.mount: Deactivated successfully. Apr 30 12:42:20.331137 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41-shm.mount: Deactivated successfully. Apr 30 12:42:20.331360 systemd[1]: var-lib-kubelet-pods-13bb02f6\x2da7fa\x2d4882\x2dbbf5\x2d108ef90e13bb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dccjpk.mount: Deactivated successfully. 
Apr 30 12:42:20.331518 systemd[1]: var-lib-kubelet-pods-58720998\x2de616\x2d4570\x2dac60\x2d294fc3eef92c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d86zrf.mount: Deactivated successfully. Apr 30 12:42:20.331644 systemd[1]: var-lib-kubelet-pods-58720998\x2de616\x2d4570\x2dac60\x2d294fc3eef92c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 12:42:20.331772 systemd[1]: var-lib-kubelet-pods-58720998\x2de616\x2d4570\x2dac60\x2d294fc3eef92c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 12:42:20.344292 containerd[1514]: time="2025-04-30T12:42:20.344203120Z" level=info msg="RemoveContainer for \"ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e\" returns successfully" Apr 30 12:42:20.344822 kubelet[2615]: I0430 12:42:20.344645 2615 scope.go:117] "RemoveContainer" containerID="caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611" Apr 30 12:42:20.346205 containerd[1514]: time="2025-04-30T12:42:20.346148548Z" level=info msg="RemoveContainer for \"caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611\"" Apr 30 12:42:20.428004 containerd[1514]: time="2025-04-30T12:42:20.427954202Z" level=info msg="RemoveContainer for \"caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611\" returns successfully" Apr 30 12:42:20.428349 kubelet[2615]: I0430 12:42:20.428308 2615 scope.go:117] "RemoveContainer" containerID="9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8" Apr 30 12:42:20.429839 containerd[1514]: time="2025-04-30T12:42:20.429798378Z" level=info msg="RemoveContainer for \"9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8\"" Apr 30 12:42:20.518282 containerd[1514]: time="2025-04-30T12:42:20.518208273Z" level=info msg="RemoveContainer for \"9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8\" returns successfully" Apr 30 12:42:20.518656 kubelet[2615]: I0430 12:42:20.518609 2615 
scope.go:117] "RemoveContainer" containerID="82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb" Apr 30 12:42:20.520268 containerd[1514]: time="2025-04-30T12:42:20.520215689Z" level=info msg="RemoveContainer for \"82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb\"" Apr 30 12:42:20.599312 containerd[1514]: time="2025-04-30T12:42:20.599122547Z" level=info msg="RemoveContainer for \"82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb\" returns successfully" Apr 30 12:42:20.599565 kubelet[2615]: I0430 12:42:20.599531 2615 scope.go:117] "RemoveContainer" containerID="dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356" Apr 30 12:42:20.599988 containerd[1514]: time="2025-04-30T12:42:20.599907585Z" level=error msg="ContainerStatus for \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\": not found" Apr 30 12:42:20.608529 kubelet[2615]: E0430 12:42:20.608421 2615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\": not found" containerID="dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356" Apr 30 12:42:20.608672 kubelet[2615]: I0430 12:42:20.608526 2615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356"} err="failed to get container status \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\": rpc error: code = NotFound desc = an error occurred when try to find container \"dcda04b65d9930fac04c5ab4fa1b320f24afedd16883e230f9af4407734b8356\": not found" Apr 30 12:42:20.608672 kubelet[2615]: I0430 12:42:20.608639 2615 scope.go:117] "RemoveContainer" 
containerID="ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e" Apr 30 12:42:20.609050 containerd[1514]: time="2025-04-30T12:42:20.608981698Z" level=error msg="ContainerStatus for \"ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e\": not found" Apr 30 12:42:20.609307 kubelet[2615]: E0430 12:42:20.609261 2615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e\": not found" containerID="ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e" Apr 30 12:42:20.609358 kubelet[2615]: I0430 12:42:20.609313 2615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e"} err="failed to get container status \"ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca20cf68cc4511e26594336c9a7fc54370ee0275c44abdf2242ddf9a9432436e\": not found" Apr 30 12:42:20.609358 kubelet[2615]: I0430 12:42:20.609347 2615 scope.go:117] "RemoveContainer" containerID="caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611" Apr 30 12:42:20.609622 containerd[1514]: time="2025-04-30T12:42:20.609580774Z" level=error msg="ContainerStatus for \"caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611\": not found" Apr 30 12:42:20.609797 kubelet[2615]: E0430 12:42:20.609759 2615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611\": not found" containerID="caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611" Apr 30 12:42:20.609850 kubelet[2615]: I0430 12:42:20.609804 2615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611"} err="failed to get container status \"caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611\": rpc error: code = NotFound desc = an error occurred when try to find container \"caac254a94e9e7816da24be7bc8f662be981be7255fc4c2a9079c7382ac54611\": not found" Apr 30 12:42:20.609850 kubelet[2615]: I0430 12:42:20.609839 2615 scope.go:117] "RemoveContainer" containerID="9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8" Apr 30 12:42:20.610211 containerd[1514]: time="2025-04-30T12:42:20.610157628Z" level=error msg="ContainerStatus for \"9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8\": not found" Apr 30 12:42:20.610393 kubelet[2615]: E0430 12:42:20.610345 2615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8\": not found" containerID="9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8" Apr 30 12:42:20.610393 kubelet[2615]: I0430 12:42:20.610383 2615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8"} err="failed to get container status \"9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"9de75f0083bf07efe237115023df97e338e39fc4ece0f8f35418a0869057b5e8\": not found" Apr 30 12:42:20.610393 kubelet[2615]: I0430 12:42:20.610401 2615 scope.go:117] "RemoveContainer" containerID="82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb" Apr 30 12:42:20.610665 containerd[1514]: time="2025-04-30T12:42:20.610598143Z" level=error msg="ContainerStatus for \"82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb\": not found" Apr 30 12:42:20.610775 kubelet[2615]: E0430 12:42:20.610731 2615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb\": not found" containerID="82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb" Apr 30 12:42:20.610809 kubelet[2615]: I0430 12:42:20.610774 2615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb"} err="failed to get container status \"82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb\": rpc error: code = NotFound desc = an error occurred when try to find container \"82354d6f9e45647013d3a85d3785c8053f3ce2e88513ca64ad813f6fcef85bbb\": not found" Apr 30 12:42:20.610809 kubelet[2615]: I0430 12:42:20.610797 2615 scope.go:117] "RemoveContainer" containerID="099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb" Apr 30 12:42:20.612463 containerd[1514]: time="2025-04-30T12:42:20.612091484Z" level=info msg="RemoveContainer for \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\"" Apr 30 12:42:20.745480 containerd[1514]: time="2025-04-30T12:42:20.745390934Z" level=info msg="RemoveContainer for 
\"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\" returns successfully" Apr 30 12:42:20.746004 kubelet[2615]: I0430 12:42:20.745789 2615 scope.go:117] "RemoveContainer" containerID="099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb" Apr 30 12:42:20.746557 kubelet[2615]: E0430 12:42:20.746346 2615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\": not found" containerID="099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb" Apr 30 12:42:20.746557 kubelet[2615]: I0430 12:42:20.746379 2615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb"} err="failed to get container status \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\": not found" Apr 30 12:42:20.746643 containerd[1514]: time="2025-04-30T12:42:20.746140274Z" level=error msg="ContainerStatus for \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"099ea4b345118966123b2ef18bf18870c2f5ccbf3d30766f5933dd8b95fbe6fb\": not found" Apr 30 12:42:21.258698 sshd[4301]: Connection closed by 10.0.0.1 port 48104 Apr 30 12:42:21.258858 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:21.268589 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:48104.service: Deactivated successfully. Apr 30 12:42:21.271179 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 12:42:21.273226 systemd-logind[1496]: Session 25 logged out. Waiting for processes to exit. 
Apr 30 12:42:21.282914 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:48120.service - OpenSSH per-connection server daemon (10.0.0.1:48120). Apr 30 12:42:21.284732 systemd-logind[1496]: Removed session 25. Apr 30 12:42:21.328324 sshd[4463]: Accepted publickey for core from 10.0.0.1 port 48120 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE Apr 30 12:42:21.330402 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:21.336660 systemd-logind[1496]: New session 26 of user core. Apr 30 12:42:21.343689 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 12:42:21.893713 kubelet[2615]: E0430 12:42:21.893653 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:21.896136 kubelet[2615]: I0430 12:42:21.896053 2615 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13bb02f6-a7fa-4882-bbf5-108ef90e13bb" path="/var/lib/kubelet/pods/13bb02f6-a7fa-4882-bbf5-108ef90e13bb/volumes" Apr 30 12:42:21.897272 kubelet[2615]: I0430 12:42:21.896876 2615 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58720998-e616-4570-ac60-294fc3eef92c" path="/var/lib/kubelet/pods/58720998-e616-4570-ac60-294fc3eef92c/volumes" Apr 30 12:42:21.937460 sshd[4466]: Connection closed by 10.0.0.1 port 48120 Apr 30 12:42:21.939593 sshd-session[4463]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:21.953253 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:48120.service: Deactivated successfully. Apr 30 12:42:21.956971 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 12:42:21.959608 systemd-logind[1496]: Session 26 logged out. Waiting for processes to exit. 
Apr 30 12:42:21.966913 kubelet[2615]: E0430 12:42:21.966861 2615 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58720998-e616-4570-ac60-294fc3eef92c" containerName="apply-sysctl-overwrites" Apr 30 12:42:21.966913 kubelet[2615]: E0430 12:42:21.966897 2615 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58720998-e616-4570-ac60-294fc3eef92c" containerName="mount-bpf-fs" Apr 30 12:42:21.966913 kubelet[2615]: E0430 12:42:21.966905 2615 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58720998-e616-4570-ac60-294fc3eef92c" containerName="mount-cgroup" Apr 30 12:42:21.966913 kubelet[2615]: E0430 12:42:21.966911 2615 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13bb02f6-a7fa-4882-bbf5-108ef90e13bb" containerName="cilium-operator" Apr 30 12:42:21.966913 kubelet[2615]: E0430 12:42:21.966917 2615 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58720998-e616-4570-ac60-294fc3eef92c" containerName="cilium-agent" Apr 30 12:42:21.966913 kubelet[2615]: E0430 12:42:21.966924 2615 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58720998-e616-4570-ac60-294fc3eef92c" containerName="clean-cilium-state" Apr 30 12:42:21.967216 kubelet[2615]: I0430 12:42:21.966951 2615 memory_manager.go:354] "RemoveStaleState removing state" podUID="58720998-e616-4570-ac60-294fc3eef92c" containerName="cilium-agent" Apr 30 12:42:21.967216 kubelet[2615]: I0430 12:42:21.966958 2615 memory_manager.go:354] "RemoveStaleState removing state" podUID="13bb02f6-a7fa-4882-bbf5-108ef90e13bb" containerName="cilium-operator" Apr 30 12:42:21.972210 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:48122.service - OpenSSH per-connection server daemon (10.0.0.1:48122). Apr 30 12:42:21.976472 systemd-logind[1496]: Removed session 26. 
Apr 30 12:42:21.987749 systemd[1]: Created slice kubepods-burstable-pod31c3910c_87ec_48f7_bd18_705d24b6ef01.slice - libcontainer container kubepods-burstable-pod31c3910c_87ec_48f7_bd18_705d24b6ef01.slice. Apr 30 12:42:22.018162 sshd[4477]: Accepted publickey for core from 10.0.0.1 port 48122 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE Apr 30 12:42:22.020269 sshd-session[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:22.025368 systemd-logind[1496]: New session 27 of user core. Apr 30 12:42:22.037607 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 12:42:22.093983 sshd[4480]: Connection closed by 10.0.0.1 port 48122 Apr 30 12:42:22.094522 sshd-session[4477]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:22.113297 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:48122.service: Deactivated successfully. Apr 30 12:42:22.113994 kubelet[2615]: I0430 12:42:22.113948 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31c3910c-87ec-48f7-bd18-705d24b6ef01-bpf-maps\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.114339 kubelet[2615]: I0430 12:42:22.114100 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31c3910c-87ec-48f7-bd18-705d24b6ef01-host-proc-sys-kernel\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.114339 kubelet[2615]: I0430 12:42:22.114130 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31c3910c-87ec-48f7-bd18-705d24b6ef01-cni-path\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " 
pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.114339 kubelet[2615]: I0430 12:42:22.114262 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/31c3910c-87ec-48f7-bd18-705d24b6ef01-cilium-ipsec-secrets\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.114339 kubelet[2615]: I0430 12:42:22.114283 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31c3910c-87ec-48f7-bd18-705d24b6ef01-etc-cni-netd\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.114731 kubelet[2615]: I0430 12:42:22.114300 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31c3910c-87ec-48f7-bd18-705d24b6ef01-clustermesh-secrets\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.114731 kubelet[2615]: I0430 12:42:22.114473 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g92v\" (UniqueName: \"kubernetes.io/projected/31c3910c-87ec-48f7-bd18-705d24b6ef01-kube-api-access-7g92v\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.114731 kubelet[2615]: I0430 12:42:22.114494 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31c3910c-87ec-48f7-bd18-705d24b6ef01-cilium-run\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.115066 kubelet[2615]: I0430 12:42:22.114513 2615 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31c3910c-87ec-48f7-bd18-705d24b6ef01-hostproc\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.115066 kubelet[2615]: I0430 12:42:22.114958 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31c3910c-87ec-48f7-bd18-705d24b6ef01-xtables-lock\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.115066 kubelet[2615]: I0430 12:42:22.114985 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31c3910c-87ec-48f7-bd18-705d24b6ef01-cilium-cgroup\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.115066 kubelet[2615]: I0430 12:42:22.115000 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31c3910c-87ec-48f7-bd18-705d24b6ef01-lib-modules\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.115372 kubelet[2615]: I0430 12:42:22.115136 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31c3910c-87ec-48f7-bd18-705d24b6ef01-cilium-config-path\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.115372 kubelet[2615]: I0430 12:42:22.115159 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/31c3910c-87ec-48f7-bd18-705d24b6ef01-hubble-tls\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.115372 kubelet[2615]: I0430 12:42:22.115177 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31c3910c-87ec-48f7-bd18-705d24b6ef01-host-proc-sys-net\") pod \"cilium-g4ztp\" (UID: \"31c3910c-87ec-48f7-bd18-705d24b6ef01\") " pod="kube-system/cilium-g4ztp" Apr 30 12:42:22.116253 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 12:42:22.119505 systemd-logind[1496]: Session 27 logged out. Waiting for processes to exit. Apr 30 12:42:22.124892 systemd[1]: Started sshd@27-10.0.0.13:22-10.0.0.1:48126.service - OpenSSH per-connection server daemon (10.0.0.1:48126). Apr 30 12:42:22.126304 systemd-logind[1496]: Removed session 27. Apr 30 12:42:22.168666 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 48126 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE Apr 30 12:42:22.171305 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:22.179791 systemd-logind[1496]: New session 28 of user core. Apr 30 12:42:22.185668 systemd[1]: Started session-28.scope - Session 28 of User core. 
Apr 30 12:42:22.293762 kubelet[2615]: E0430 12:42:22.293682 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:22.294597 containerd[1514]: time="2025-04-30T12:42:22.294409267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g4ztp,Uid:31c3910c-87ec-48f7-bd18-705d24b6ef01,Namespace:kube-system,Attempt:0,}" Apr 30 12:42:22.490863 kubelet[2615]: I0430 12:42:22.490695 2615 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T12:42:22Z","lastTransitionTime":"2025-04-30T12:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 12:42:22.579824 containerd[1514]: time="2025-04-30T12:42:22.579598174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:42:22.579824 containerd[1514]: time="2025-04-30T12:42:22.579685559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:42:22.579824 containerd[1514]: time="2025-04-30T12:42:22.579707200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:22.580690 containerd[1514]: time="2025-04-30T12:42:22.580609650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:22.605795 systemd[1]: Started cri-containerd-d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107.scope - libcontainer container d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107. 
Apr 30 12:42:22.633040 containerd[1514]: time="2025-04-30T12:42:22.632934307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g4ztp,Uid:31c3910c-87ec-48f7-bd18-705d24b6ef01,Namespace:kube-system,Attempt:0,} returns sandbox id \"d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107\"" Apr 30 12:42:22.634175 kubelet[2615]: E0430 12:42:22.633774 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:22.636377 containerd[1514]: time="2025-04-30T12:42:22.636333839Z" level=info msg="CreateContainer within sandbox \"d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:42:23.298719 containerd[1514]: time="2025-04-30T12:42:23.298615325Z" level=info msg="CreateContainer within sandbox \"d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b4bd09ae794805622e39f12bfc91924588c363cb5635ca779059b27ac653b5cd\"" Apr 30 12:42:23.300188 containerd[1514]: time="2025-04-30T12:42:23.299270997Z" level=info msg="StartContainer for \"b4bd09ae794805622e39f12bfc91924588c363cb5635ca779059b27ac653b5cd\"" Apr 30 12:42:23.337617 systemd[1]: Started cri-containerd-b4bd09ae794805622e39f12bfc91924588c363cb5635ca779059b27ac653b5cd.scope - libcontainer container b4bd09ae794805622e39f12bfc91924588c363cb5635ca779059b27ac653b5cd. Apr 30 12:42:23.396555 containerd[1514]: time="2025-04-30T12:42:23.396111054Z" level=info msg="StartContainer for \"b4bd09ae794805622e39f12bfc91924588c363cb5635ca779059b27ac653b5cd\" returns successfully" Apr 30 12:42:23.410318 systemd[1]: cri-containerd-b4bd09ae794805622e39f12bfc91924588c363cb5635ca779059b27ac653b5cd.scope: Deactivated successfully. 
Apr 30 12:42:23.440535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4bd09ae794805622e39f12bfc91924588c363cb5635ca779059b27ac653b5cd-rootfs.mount: Deactivated successfully. Apr 30 12:42:23.466173 containerd[1514]: time="2025-04-30T12:42:23.465371058Z" level=info msg="shim disconnected" id=b4bd09ae794805622e39f12bfc91924588c363cb5635ca779059b27ac653b5cd namespace=k8s.io Apr 30 12:42:23.466173 containerd[1514]: time="2025-04-30T12:42:23.466176304Z" level=warning msg="cleaning up after shim disconnected" id=b4bd09ae794805622e39f12bfc91924588c363cb5635ca779059b27ac653b5cd namespace=k8s.io Apr 30 12:42:23.466545 containerd[1514]: time="2025-04-30T12:42:23.466194378Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:42:24.190095 kubelet[2615]: E0430 12:42:24.190036 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:24.191980 containerd[1514]: time="2025-04-30T12:42:24.191859335Z" level=info msg="CreateContainer within sandbox \"d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:42:24.393378 containerd[1514]: time="2025-04-30T12:42:24.392875824Z" level=info msg="CreateContainer within sandbox \"d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c11ec5116af19990c388344899232e75b3f698a76cfc3560a54bf26ac75f54f1\"" Apr 30 12:42:24.394889 containerd[1514]: time="2025-04-30T12:42:24.394850865Z" level=info msg="StartContainer for \"c11ec5116af19990c388344899232e75b3f698a76cfc3560a54bf26ac75f54f1\"" Apr 30 12:42:24.432732 systemd[1]: Started cri-containerd-c11ec5116af19990c388344899232e75b3f698a76cfc3560a54bf26ac75f54f1.scope - libcontainer container 
c11ec5116af19990c388344899232e75b3f698a76cfc3560a54bf26ac75f54f1. Apr 30 12:42:24.482170 systemd[1]: cri-containerd-c11ec5116af19990c388344899232e75b3f698a76cfc3560a54bf26ac75f54f1.scope: Deactivated successfully. Apr 30 12:42:24.607976 containerd[1514]: time="2025-04-30T12:42:24.607885949Z" level=info msg="StartContainer for \"c11ec5116af19990c388344899232e75b3f698a76cfc3560a54bf26ac75f54f1\" returns successfully" Apr 30 12:42:24.635724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c11ec5116af19990c388344899232e75b3f698a76cfc3560a54bf26ac75f54f1-rootfs.mount: Deactivated successfully. Apr 30 12:42:24.767198 containerd[1514]: time="2025-04-30T12:42:24.766971670Z" level=info msg="shim disconnected" id=c11ec5116af19990c388344899232e75b3f698a76cfc3560a54bf26ac75f54f1 namespace=k8s.io Apr 30 12:42:24.767198 containerd[1514]: time="2025-04-30T12:42:24.767054878Z" level=warning msg="cleaning up after shim disconnected" id=c11ec5116af19990c388344899232e75b3f698a76cfc3560a54bf26ac75f54f1 namespace=k8s.io Apr 30 12:42:24.767198 containerd[1514]: time="2025-04-30T12:42:24.767065467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:42:24.958864 kubelet[2615]: E0430 12:42:24.958810 2615 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 12:42:25.193647 kubelet[2615]: E0430 12:42:25.193586 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:25.195292 containerd[1514]: time="2025-04-30T12:42:25.195234560Z" level=info msg="CreateContainer within sandbox \"d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:42:25.396200 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2996181748.mount: Deactivated successfully. Apr 30 12:42:26.049363 containerd[1514]: time="2025-04-30T12:42:26.049271537Z" level=info msg="CreateContainer within sandbox \"d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"80db24088d26052b58682142e603938b9179bf8bd14cda7e0a5783a5d71763f5\"" Apr 30 12:42:26.050250 containerd[1514]: time="2025-04-30T12:42:26.050143407Z" level=info msg="StartContainer for \"80db24088d26052b58682142e603938b9179bf8bd14cda7e0a5783a5d71763f5\"" Apr 30 12:42:26.090591 systemd[1]: Started cri-containerd-80db24088d26052b58682142e603938b9179bf8bd14cda7e0a5783a5d71763f5.scope - libcontainer container 80db24088d26052b58682142e603938b9179bf8bd14cda7e0a5783a5d71763f5. Apr 30 12:42:26.382328 systemd[1]: cri-containerd-80db24088d26052b58682142e603938b9179bf8bd14cda7e0a5783a5d71763f5.scope: Deactivated successfully. Apr 30 12:42:26.400965 containerd[1514]: time="2025-04-30T12:42:26.400901127Z" level=info msg="StartContainer for \"80db24088d26052b58682142e603938b9179bf8bd14cda7e0a5783a5d71763f5\" returns successfully" Apr 30 12:42:26.404641 kubelet[2615]: E0430 12:42:26.404576 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:26.424724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80db24088d26052b58682142e603938b9179bf8bd14cda7e0a5783a5d71763f5-rootfs.mount: Deactivated successfully. 
Apr 30 12:42:26.625759 containerd[1514]: time="2025-04-30T12:42:26.625673540Z" level=info msg="shim disconnected" id=80db24088d26052b58682142e603938b9179bf8bd14cda7e0a5783a5d71763f5 namespace=k8s.io Apr 30 12:42:26.625759 containerd[1514]: time="2025-04-30T12:42:26.625749614Z" level=warning msg="cleaning up after shim disconnected" id=80db24088d26052b58682142e603938b9179bf8bd14cda7e0a5783a5d71763f5 namespace=k8s.io Apr 30 12:42:26.625759 containerd[1514]: time="2025-04-30T12:42:26.625760785Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:42:27.408676 kubelet[2615]: E0430 12:42:27.408629 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:27.410261 containerd[1514]: time="2025-04-30T12:42:27.410222065Z" level=info msg="CreateContainer within sandbox \"d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:42:27.906012 containerd[1514]: time="2025-04-30T12:42:27.905902016Z" level=info msg="CreateContainer within sandbox \"d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2c65e3bccc8c262a9fe7fcd0db7966ac16bfa308dc778e6351b7a4035e8d9845\"" Apr 30 12:42:27.906607 containerd[1514]: time="2025-04-30T12:42:27.906561975Z" level=info msg="StartContainer for \"2c65e3bccc8c262a9fe7fcd0db7966ac16bfa308dc778e6351b7a4035e8d9845\"" Apr 30 12:42:27.946616 systemd[1]: Started cri-containerd-2c65e3bccc8c262a9fe7fcd0db7966ac16bfa308dc778e6351b7a4035e8d9845.scope - libcontainer container 2c65e3bccc8c262a9fe7fcd0db7966ac16bfa308dc778e6351b7a4035e8d9845. Apr 30 12:42:27.977825 systemd[1]: cri-containerd-2c65e3bccc8c262a9fe7fcd0db7966ac16bfa308dc778e6351b7a4035e8d9845.scope: Deactivated successfully. 
Apr 30 12:42:28.144058 containerd[1514]: time="2025-04-30T12:42:28.143959110Z" level=info msg="StartContainer for \"2c65e3bccc8c262a9fe7fcd0db7966ac16bfa308dc778e6351b7a4035e8d9845\" returns successfully" Apr 30 12:42:28.166397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c65e3bccc8c262a9fe7fcd0db7966ac16bfa308dc778e6351b7a4035e8d9845-rootfs.mount: Deactivated successfully. Apr 30 12:42:28.412764 kubelet[2615]: E0430 12:42:28.412730 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:28.710354 containerd[1514]: time="2025-04-30T12:42:28.710274186Z" level=info msg="shim disconnected" id=2c65e3bccc8c262a9fe7fcd0db7966ac16bfa308dc778e6351b7a4035e8d9845 namespace=k8s.io Apr 30 12:42:28.710354 containerd[1514]: time="2025-04-30T12:42:28.710340622Z" level=warning msg="cleaning up after shim disconnected" id=2c65e3bccc8c262a9fe7fcd0db7966ac16bfa308dc778e6351b7a4035e8d9845 namespace=k8s.io Apr 30 12:42:28.710354 containerd[1514]: time="2025-04-30T12:42:28.710350111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:42:29.417885 kubelet[2615]: E0430 12:42:29.417828 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:29.419809 containerd[1514]: time="2025-04-30T12:42:29.419767097Z" level=info msg="CreateContainer within sandbox \"d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:42:29.708136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3298786500.mount: Deactivated successfully. 
Apr 30 12:42:29.857240 containerd[1514]: time="2025-04-30T12:42:29.857142534Z" level=info msg="CreateContainer within sandbox \"d813e23ae7865e26f3960b751c12ec9338c703ec6ff66d984508eca6736f5107\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d8377b32c6040b93e60a204180a447ec9923ddd9f153f00a979811070c7d26d9\"" Apr 30 12:42:29.858493 containerd[1514]: time="2025-04-30T12:42:29.858375418Z" level=info msg="StartContainer for \"d8377b32c6040b93e60a204180a447ec9923ddd9f153f00a979811070c7d26d9\"" Apr 30 12:42:29.894787 systemd[1]: Started cri-containerd-d8377b32c6040b93e60a204180a447ec9923ddd9f153f00a979811070c7d26d9.scope - libcontainer container d8377b32c6040b93e60a204180a447ec9923ddd9f153f00a979811070c7d26d9. Apr 30 12:42:29.957671 containerd[1514]: time="2025-04-30T12:42:29.957602276Z" level=info msg="StartContainer for \"d8377b32c6040b93e60a204180a447ec9923ddd9f153f00a979811070c7d26d9\" returns successfully" Apr 30 12:42:29.961398 kubelet[2615]: E0430 12:42:29.961274 2615 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 12:42:30.422801 kubelet[2615]: E0430 12:42:30.422745 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:30.596472 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 30 12:42:30.731979 kubelet[2615]: I0430 12:42:30.731032 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g4ztp" podStartSLOduration=9.731004017 podStartE2EDuration="9.731004017s" podCreationTimestamp="2025-04-30 12:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:42:30.730411615 +0000 UTC m=+110.948199864" 
watchObservedRunningTime="2025-04-30 12:42:30.731004017 +0000 UTC m=+110.948792276" Apr 30 12:42:31.425199 kubelet[2615]: E0430 12:42:31.425126 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:31.892346 kubelet[2615]: E0430 12:42:31.892259 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-l2v8z" podUID="3aa56430-a33e-409e-8517-3a0426540c86" Apr 30 12:42:32.429557 kubelet[2615]: E0430 12:42:32.429483 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:33.896260 kubelet[2615]: E0430 12:42:33.893417 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-l2v8z" podUID="3aa56430-a33e-409e-8517-3a0426540c86" Apr 30 12:42:34.539259 systemd-networkd[1430]: lxc_health: Link UP Apr 30 12:42:34.540744 systemd-networkd[1430]: lxc_health: Gained carrier Apr 30 12:42:35.784134 systemd-networkd[1430]: lxc_health: Gained IPv6LL Apr 30 12:42:35.892780 kubelet[2615]: E0430 12:42:35.892710 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:36.296115 kubelet[2615]: E0430 12:42:36.296063 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 30 12:42:36.444046 kubelet[2615]: E0430 12:42:36.443993 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:37.445796 kubelet[2615]: E0430 12:42:37.445721 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:39.880367 containerd[1514]: time="2025-04-30T12:42:39.880306326Z" level=info msg="StopPodSandbox for \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\"" Apr 30 12:42:39.880948 containerd[1514]: time="2025-04-30T12:42:39.880473711Z" level=info msg="TearDown network for sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" successfully" Apr 30 12:42:39.880948 containerd[1514]: time="2025-04-30T12:42:39.880523907Z" level=info msg="StopPodSandbox for \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" returns successfully" Apr 30 12:42:39.880948 containerd[1514]: time="2025-04-30T12:42:39.880902281Z" level=info msg="RemovePodSandbox for \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\"" Apr 30 12:42:39.880948 containerd[1514]: time="2025-04-30T12:42:39.880942618Z" level=info msg="Forcibly stopping sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\"" Apr 30 12:42:39.881072 containerd[1514]: time="2025-04-30T12:42:39.880995908Z" level=info msg="TearDown network for sandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" successfully" Apr 30 12:42:40.006411 containerd[1514]: time="2025-04-30T12:42:40.006316322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Apr 30 12:42:40.006618 containerd[1514]: time="2025-04-30T12:42:40.006415400Z" level=info msg="RemovePodSandbox \"6646d9a72cf155e95ad323aa21186e38361a4bccfecfc75920990e95350d3d41\" returns successfully" Apr 30 12:42:40.007167 containerd[1514]: time="2025-04-30T12:42:40.007113057Z" level=info msg="StopPodSandbox for \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\"" Apr 30 12:42:40.007337 containerd[1514]: time="2025-04-30T12:42:40.007249205Z" level=info msg="TearDown network for sandbox \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\" successfully" Apr 30 12:42:40.007337 containerd[1514]: time="2025-04-30T12:42:40.007267339Z" level=info msg="StopPodSandbox for \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\" returns successfully" Apr 30 12:42:40.009594 containerd[1514]: time="2025-04-30T12:42:40.007650183Z" level=info msg="RemovePodSandbox for \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\"" Apr 30 12:42:40.009594 containerd[1514]: time="2025-04-30T12:42:40.007682163Z" level=info msg="Forcibly stopping sandbox \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\"" Apr 30 12:42:40.009594 containerd[1514]: time="2025-04-30T12:42:40.007771471Z" level=info msg="TearDown network for sandbox \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\" successfully" Apr 30 12:42:40.305271 systemd[1]: run-containerd-runc-k8s.io-d8377b32c6040b93e60a204180a447ec9923ddd9f153f00a979811070c7d26d9-runc.rwIFoG.mount: Deactivated successfully. Apr 30 12:42:40.314865 containerd[1514]: time="2025-04-30T12:42:40.310995398Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 12:42:40.314865 containerd[1514]: time="2025-04-30T12:42:40.311068205Z" level=info msg="RemovePodSandbox \"2ab29e01118d70972304322574e5bf40c7e3665acf6d2861c12c169a58b07906\" returns successfully" Apr 30 12:42:40.367102 sshd[4490]: Connection closed by 10.0.0.1 port 48126 Apr 30 12:42:40.368079 sshd-session[4486]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:40.373533 systemd[1]: sshd@27-10.0.0.13:22-10.0.0.1:48126.service: Deactivated successfully. Apr 30 12:42:40.375855 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 12:42:40.376880 systemd-logind[1496]: Session 28 logged out. Waiting for processes to exit. Apr 30 12:42:40.378074 systemd-logind[1496]: Removed session 28.