May 14 00:01:58.358071 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025 May 14 00:01:58.358097 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 14 00:01:58.358108 kernel: BIOS-provided physical RAM map: May 14 00:01:58.358115 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 14 00:01:58.358122 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 14 00:01:58.358128 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 14 00:01:58.358136 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 14 00:01:58.358143 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 14 00:01:58.358149 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable May 14 00:01:58.358156 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS May 14 00:01:58.358163 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable May 14 00:01:58.358172 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved May 14 00:01:58.358182 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable May 14 00:01:58.358189 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved May 14 00:01:58.358200 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data May 14 00:01:58.358217 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 14 00:01:58.358227 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable May 14 
00:01:58.358234 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved May 14 00:01:58.358242 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS May 14 00:01:58.358249 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable May 14 00:01:58.358256 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved May 14 00:01:58.358263 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 14 00:01:58.358270 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 14 00:01:58.358277 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 14 00:01:58.358284 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved May 14 00:01:58.358291 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 14 00:01:58.358298 kernel: NX (Execute Disable) protection: active May 14 00:01:58.358308 kernel: APIC: Static calls initialized May 14 00:01:58.358315 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable May 14 00:01:58.358322 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable May 14 00:01:58.358329 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable May 14 00:01:58.358336 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable May 14 00:01:58.358343 kernel: extended physical RAM map: May 14 00:01:58.358351 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 14 00:01:58.358358 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable May 14 00:01:58.358365 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 14 00:01:58.358372 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 14 00:01:58.358379 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 14 00:01:58.358386 kernel: reserve setup_data: [mem 
0x000000000080c000-0x0000000000810fff] usable May 14 00:01:58.358397 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS May 14 00:01:58.358408 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable May 14 00:01:58.358415 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable May 14 00:01:58.358422 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable May 14 00:01:58.358430 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable May 14 00:01:58.358437 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable May 14 00:01:58.358451 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved May 14 00:01:58.358458 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable May 14 00:01:58.358466 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved May 14 00:01:58.358473 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data May 14 00:01:58.358480 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 14 00:01:58.358488 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable May 14 00:01:58.358495 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved May 14 00:01:58.358503 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS May 14 00:01:58.358510 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable May 14 00:01:58.358517 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved May 14 00:01:58.358527 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 14 00:01:58.358535 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 14 00:01:58.358542 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] 
reserved May 14 00:01:58.358552 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved May 14 00:01:58.358559 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 14 00:01:58.358566 kernel: efi: EFI v2.7 by EDK II May 14 00:01:58.358574 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 May 14 00:01:58.358582 kernel: random: crng init done May 14 00:01:58.358589 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map May 14 00:01:58.358597 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved May 14 00:01:58.358606 kernel: secureboot: Secure boot disabled May 14 00:01:58.358616 kernel: SMBIOS 2.8 present. May 14 00:01:58.358624 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 May 14 00:01:58.358631 kernel: Hypervisor detected: KVM May 14 00:01:58.358638 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 14 00:01:58.358646 kernel: kvm-clock: using sched offset of 3525220365 cycles May 14 00:01:58.358653 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 14 00:01:58.358661 kernel: tsc: Detected 2794.748 MHz processor May 14 00:01:58.358669 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 14 00:01:58.358677 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 14 00:01:58.358684 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 May 14 00:01:58.358695 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 14 00:01:58.358702 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 14 00:01:58.358710 kernel: Using GB pages for direct mapping May 14 00:01:58.358741 kernel: ACPI: Early table checksum verification disabled May 14 00:01:58.358749 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 14 00:01:58.358757 kernel: ACPI: XSDT 0x000000009CB7D0E8 
000054 (v01 BOCHS BXPC 00000001 01000013) May 14 00:01:58.358764 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:01:58.358772 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:01:58.358780 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 14 00:01:58.358791 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:01:58.358798 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:01:58.358806 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:01:58.358813 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:01:58.358821 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 14 00:01:58.358829 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 14 00:01:58.358836 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 14 00:01:58.358844 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 14 00:01:58.358851 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 14 00:01:58.358861 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 14 00:01:58.358869 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 14 00:01:58.358876 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 14 00:01:58.358884 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 14 00:01:58.358891 kernel: No NUMA configuration found May 14 00:01:58.358899 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] May 14 00:01:58.358906 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] May 14 00:01:58.358914 kernel: Zone ranges: May 14 00:01:58.358922 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 14 
00:01:58.358931 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] May 14 00:01:58.358939 kernel: Normal empty May 14 00:01:58.358949 kernel: Movable zone start for each node May 14 00:01:58.358956 kernel: Early memory node ranges May 14 00:01:58.358974 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 14 00:01:58.358982 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 14 00:01:58.358989 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 14 00:01:58.358999 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] May 14 00:01:58.359007 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] May 14 00:01:58.359014 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] May 14 00:01:58.359024 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] May 14 00:01:58.359032 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] May 14 00:01:58.359039 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] May 14 00:01:58.359047 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 14 00:01:58.359054 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 14 00:01:58.359070 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 14 00:01:58.359080 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 14 00:01:58.359087 kernel: On node 0, zone DMA: 239 pages in unavailable ranges May 14 00:01:58.359095 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges May 14 00:01:58.359103 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges May 14 00:01:58.359114 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges May 14 00:01:58.359124 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges May 14 00:01:58.359132 kernel: ACPI: PM-Timer IO Port: 0x608 May 14 00:01:58.359140 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 14 00:01:58.359148 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 14 
00:01:58.359155 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 14 00:01:58.359166 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 14 00:01:58.359173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 14 00:01:58.359181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 14 00:01:58.359189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 14 00:01:58.359197 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 14 00:01:58.359205 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 14 00:01:58.359212 kernel: TSC deadline timer available May 14 00:01:58.359220 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 14 00:01:58.359228 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 14 00:01:58.359238 kernel: kvm-guest: KVM setup pv remote TLB flush May 14 00:01:58.359246 kernel: kvm-guest: setup PV sched yield May 14 00:01:58.359254 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices May 14 00:01:58.359261 kernel: Booting paravirtualized kernel on KVM May 14 00:01:58.359269 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 14 00:01:58.359277 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 14 00:01:58.359285 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 May 14 00:01:58.359293 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 May 14 00:01:58.359300 kernel: pcpu-alloc: [0] 0 1 2 3 May 14 00:01:58.359310 kernel: kvm-guest: PV spinlocks enabled May 14 00:01:58.359318 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 14 00:01:58.359327 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro 
consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 14 00:01:58.359336 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 00:01:58.359343 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 00:01:58.359354 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 00:01:58.359361 kernel: Fallback order for Node 0: 0 May 14 00:01:58.359369 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 May 14 00:01:58.359379 kernel: Policy zone: DMA32 May 14 00:01:58.359387 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 00:01:58.359395 kernel: Memory: 2385672K/2565800K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 179872K reserved, 0K cma-reserved) May 14 00:01:58.359403 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 14 00:01:58.359411 kernel: ftrace: allocating 37993 entries in 149 pages May 14 00:01:58.359419 kernel: ftrace: allocated 149 pages with 4 groups May 14 00:01:58.359427 kernel: Dynamic Preempt: voluntary May 14 00:01:58.359435 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 00:01:58.359443 kernel: rcu: RCU event tracing is enabled. May 14 00:01:58.359455 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 14 00:01:58.359466 kernel: Trampoline variant of Tasks RCU enabled. May 14 00:01:58.359477 kernel: Rude variant of Tasks RCU enabled. May 14 00:01:58.359487 kernel: Tracing variant of Tasks RCU enabled. May 14 00:01:58.359519 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 14 00:01:58.359553 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 14 00:01:58.359565 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 14 00:01:58.359575 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 00:01:58.359586 kernel: Console: colour dummy device 80x25 May 14 00:01:58.359597 kernel: printk: console [ttyS0] enabled May 14 00:01:58.359613 kernel: ACPI: Core revision 20230628 May 14 00:01:58.359624 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 14 00:01:58.359640 kernel: APIC: Switch to symmetric I/O mode setup May 14 00:01:58.359652 kernel: x2apic enabled May 14 00:01:58.359662 kernel: APIC: Switched APIC routing to: physical x2apic May 14 00:01:58.359677 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 14 00:01:58.359687 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 14 00:01:58.359697 kernel: kvm-guest: setup PV IPIs May 14 00:01:58.359708 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 14 00:01:58.359853 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 14 00:01:58.359861 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 14 00:01:58.359869 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 14 00:01:58.359877 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 14 00:01:58.359885 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 14 00:01:58.359893 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 14 00:01:58.359900 kernel: Spectre V2 : Mitigation: Retpolines May 14 00:01:58.359908 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 14 00:01:58.359916 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 14 00:01:58.359927 kernel: RETBleed: Mitigation: untrained return thunk May 14 00:01:58.359935 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 14 00:01:58.359943 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 14 00:01:58.359951 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 14 00:01:58.359969 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 14 00:01:58.359977 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 14 00:01:58.359988 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 14 00:01:58.359996 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 14 00:01:58.360008 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 14 00:01:58.360015 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 14 00:01:58.360023 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
May 14 00:01:58.360031 kernel: Freeing SMP alternatives memory: 32K May 14 00:01:58.360039 kernel: pid_max: default: 32768 minimum: 301 May 14 00:01:58.360049 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 14 00:01:58.360057 kernel: landlock: Up and running. May 14 00:01:58.360065 kernel: SELinux: Initializing. May 14 00:01:58.360074 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 00:01:58.360086 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 00:01:58.360095 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 14 00:01:58.360104 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 14 00:01:58.360112 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 14 00:01:58.360122 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 14 00:01:58.360133 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 14 00:01:58.360143 kernel: ... version: 0 May 14 00:01:58.360153 kernel: ... bit width: 48 May 14 00:01:58.360164 kernel: ... generic registers: 6 May 14 00:01:58.360178 kernel: ... value mask: 0000ffffffffffff May 14 00:01:58.360188 kernel: ... max period: 00007fffffffffff May 14 00:01:58.360198 kernel: ... fixed-purpose events: 0 May 14 00:01:58.360208 kernel: ... event mask: 000000000000003f May 14 00:01:58.360218 kernel: signal: max sigframe size: 1776 May 14 00:01:58.360229 kernel: rcu: Hierarchical SRCU implementation. May 14 00:01:58.360241 kernel: rcu: Max phase no-delay instances is 400. May 14 00:01:58.360253 kernel: smp: Bringing up secondary CPUs ... May 14 00:01:58.360265 kernel: smpboot: x86: Booting SMP configuration: May 14 00:01:58.360279 kernel: .... 
node #0, CPUs: #1 #2 #3 May 14 00:01:58.360289 kernel: smp: Brought up 1 node, 4 CPUs May 14 00:01:58.360299 kernel: smpboot: Max logical packages: 1 May 14 00:01:58.360310 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 14 00:01:58.360320 kernel: devtmpfs: initialized May 14 00:01:58.360332 kernel: x86/mm: Memory block size: 128MB May 14 00:01:58.360343 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 14 00:01:58.360353 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 14 00:01:58.360364 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) May 14 00:01:58.360378 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 14 00:01:58.360390 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) May 14 00:01:58.360402 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 14 00:01:58.360414 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 00:01:58.360426 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 14 00:01:58.360445 kernel: pinctrl core: initialized pinctrl subsystem May 14 00:01:58.360457 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 00:01:58.360469 kernel: audit: initializing netlink subsys (disabled) May 14 00:01:58.360481 kernel: audit: type=2000 audit(1747180917.202:1): state=initialized audit_enabled=0 res=1 May 14 00:01:58.360496 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 00:01:58.360508 kernel: thermal_sys: Registered thermal governor 'user_space' May 14 00:01:58.360520 kernel: cpuidle: using governor menu May 14 00:01:58.360531 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 00:01:58.360542 kernel: dca service started, version 1.12.1 May 14 
00:01:58.360554 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) May 14 00:01:58.360566 kernel: PCI: Using configuration type 1 for base access May 14 00:01:58.360578 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 14 00:01:58.360593 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 14 00:01:58.360604 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 14 00:01:58.360615 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 00:01:58.360625 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 14 00:01:58.360635 kernel: ACPI: Added _OSI(Module Device) May 14 00:01:58.360645 kernel: ACPI: Added _OSI(Processor Device) May 14 00:01:58.360656 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 00:01:58.360665 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 00:01:58.360676 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 00:01:58.360686 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 14 00:01:58.360700 kernel: ACPI: Interpreter enabled May 14 00:01:58.360710 kernel: ACPI: PM: (supports S0 S3 S5) May 14 00:01:58.360738 kernel: ACPI: Using IOAPIC for interrupt routing May 14 00:01:58.360749 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 14 00:01:58.360759 kernel: PCI: Using E820 reservations for host bridge windows May 14 00:01:58.360770 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 14 00:01:58.360801 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 14 00:01:58.361042 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:01:58.361185 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 14 00:01:58.361314 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 14 
00:01:58.361324 kernel: PCI host bridge to bus 0000:00 May 14 00:01:58.361454 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 14 00:01:58.361571 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 14 00:01:58.361688 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 14 00:01:58.361820 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] May 14 00:01:58.361940 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] May 14 00:01:58.362072 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] May 14 00:01:58.362190 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 00:01:58.362333 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 14 00:01:58.362475 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 14 00:01:58.362602 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 14 00:01:58.362753 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 14 00:01:58.362926 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 14 00:01:58.363120 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 14 00:01:58.363293 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 14 00:01:58.363478 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 14 00:01:58.363655 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 14 00:01:58.363857 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 14 00:01:58.364044 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] May 14 00:01:58.364227 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 14 00:01:58.364401 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 14 00:01:58.364573 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 14 00:01:58.364778 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] May 14 00:01:58.364923 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 14 00:01:58.365071 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 14 00:01:58.365199 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 14 00:01:58.365326 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] May 14 00:01:58.365452 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 14 00:01:58.365585 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 14 00:01:58.365711 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 14 00:01:58.365887 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 14 00:01:58.366033 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 14 00:01:58.366170 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 14 00:01:58.366305 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 14 00:01:58.366432 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 14 00:01:58.366443 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 14 00:01:58.366451 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 14 00:01:58.366459 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 14 00:01:58.366467 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 14 00:01:58.366479 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 14 00:01:58.366488 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 14 00:01:58.366495 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 14 00:01:58.366503 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 14 00:01:58.366511 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 14 00:01:58.366519 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 14 00:01:58.366527 kernel: ACPI: PCI: 
Interrupt link GSIC configured for IRQ 18
May 14 00:01:58.366535 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 00:01:58.366543 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 00:01:58.366554 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 00:01:58.366562 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 00:01:58.366570 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 00:01:58.366578 kernel: iommu: Default domain type: Translated
May 14 00:01:58.366586 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 00:01:58.366594 kernel: efivars: Registered efivars operations
May 14 00:01:58.366601 kernel: PCI: Using ACPI for IRQ routing
May 14 00:01:58.366609 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 00:01:58.366617 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 14 00:01:58.366628 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 14 00:01:58.366636 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 14 00:01:58.366644 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 14 00:01:58.366651 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 14 00:01:58.366659 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 14 00:01:58.366667 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 14 00:01:58.366675 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 14 00:01:58.366824 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 00:01:58.366955 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 00:01:58.367098 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 00:01:58.367109 kernel: vgaarb: loaded
May 14 00:01:58.367118 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 00:01:58.367126 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 00:01:58.367134 kernel: clocksource: Switched to clocksource kvm-clock
May 14 00:01:58.367142 kernel: VFS: Disk quotas dquot_6.6.0
May 14 00:01:58.367150 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 00:01:58.367158 kernel: pnp: PnP ACPI init
May 14 00:01:58.367308 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 14 00:01:58.367320 kernel: pnp: PnP ACPI: found 6 devices
May 14 00:01:58.367328 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 00:01:58.367336 kernel: NET: Registered PF_INET protocol family
May 14 00:01:58.367344 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 00:01:58.367371 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 00:01:58.367381 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 00:01:58.367390 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 00:01:58.367401 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 00:01:58.367409 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 00:01:58.367417 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:01:58.367425 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:01:58.367434 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 00:01:58.367442 kernel: NET: Registered PF_XDP protocol family
May 14 00:01:58.367572 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 14 00:01:58.367700 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 14 00:01:58.367836 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 00:01:58.367953 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 00:01:58.368085 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 00:01:58.368206 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 14 00:01:58.368322 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 14 00:01:58.368438 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 14 00:01:58.368448 kernel: PCI: CLS 0 bytes, default 64
May 14 00:01:58.368457 kernel: Initialise system trusted keyrings
May 14 00:01:58.368465 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 00:01:58.368478 kernel: Key type asymmetric registered
May 14 00:01:58.368486 kernel: Asymmetric key parser 'x509' registered
May 14 00:01:58.368494 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 14 00:01:58.368503 kernel: io scheduler mq-deadline registered
May 14 00:01:58.368511 kernel: io scheduler kyber registered
May 14 00:01:58.368519 kernel: io scheduler bfq registered
May 14 00:01:58.368530 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 00:01:58.368539 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 00:01:58.368548 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 00:01:58.368558 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 14 00:01:58.368569 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 00:01:58.368578 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 00:01:58.368586 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 00:01:58.368595 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 00:01:58.368606 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 00:01:58.368752 kernel: rtc_cmos 00:04: RTC can wake from S4
May 14 00:01:58.368765 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 00:01:58.368885 kernel: rtc_cmos 00:04: registered as rtc0
May 14 00:01:58.369023 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T00:01:57 UTC (1747180917)
May 14 00:01:58.369146 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 14 00:01:58.369157 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 00:01:58.369166 kernel: efifb: probing for efifb
May 14 00:01:58.369178 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 14 00:01:58.369186 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 14 00:01:58.369194 kernel: efifb: scrolling: redraw
May 14 00:01:58.369203 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 14 00:01:58.369211 kernel: Console: switching to colour frame buffer device 160x50
May 14 00:01:58.369219 kernel: fb0: EFI VGA frame buffer device
May 14 00:01:58.369228 kernel: pstore: Using crash dump compression: deflate
May 14 00:01:58.369236 kernel: pstore: Registered efi_pstore as persistent store backend
May 14 00:01:58.369245 kernel: NET: Registered PF_INET6 protocol family
May 14 00:01:58.369255 kernel: Segment Routing with IPv6
May 14 00:01:58.369263 kernel: In-situ OAM (IOAM) with IPv6
May 14 00:01:58.369271 kernel: NET: Registered PF_PACKET protocol family
May 14 00:01:58.369280 kernel: Key type dns_resolver registered
May 14 00:01:58.369288 kernel: IPI shorthand broadcast: enabled
May 14 00:01:58.369296 kernel: sched_clock: Marking stable (936002941, 171916320)->(1145770528, -37851267)
May 14 00:01:58.369304 kernel: registered taskstats version 1
May 14 00:01:58.369313 kernel: Loading compiled-in X.509 certificates
May 14 00:01:58.369321 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94'
May 14 00:01:58.369332 kernel: Key type .fscrypt registered
May 14 00:01:58.369340 kernel: Key type fscrypt-provisioning registered
May 14 00:01:58.369349 kernel: ima: No TPM chip found, activating TPM-bypass!
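The kernel's hash-table lines in this log report sizes as "entries: N (order: k, B bytes)", where the order is the power-of-two number of pages backing the table. As a hedged illustration (not part of the log; the helper name `table_bytes` is mine, and a 4 KiB page size is assumed), the byte counts logged above can be reproduced like this:

```python
# Assumption: 4 KiB pages, as on this x86-64 VM; "order: k" means 2**k pages.
PAGE_SIZE = 4096

def table_bytes(order: int) -> int:
    """Size in bytes of a hash table allocated with the given page order."""
    return (2 ** order) * PAGE_SIZE

print(table_bytes(6))  # → 262144, matching "TCP established hash table ... (order: 6, 262144 bytes)"
print(table_bytes(8))  # → 1048576, matching "TCP bind hash table ... (order: 8, 1048576 bytes)"
```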
May 14 00:01:58.369357 kernel: ima: Allocated hash algorithm: sha1
May 14 00:01:58.369365 kernel: ima: No architecture policies found
May 14 00:01:58.369373 kernel: clk: Disabling unused clocks
May 14 00:01:58.369381 kernel: Freeing unused kernel image (initmem) memory: 43604K
May 14 00:01:58.369389 kernel: Write protecting the kernel read-only data: 40960k
May 14 00:01:58.369398 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 14 00:01:58.369408 kernel: Run /init as init process
May 14 00:01:58.369417 kernel: with arguments:
May 14 00:01:58.369425 kernel: /init
May 14 00:01:58.369433 kernel: with environment:
May 14 00:01:58.369441 kernel: HOME=/
May 14 00:01:58.369449 kernel: TERM=linux
May 14 00:01:58.369457 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 00:01:58.369466 systemd[1]: Successfully made /usr/ read-only.
May 14 00:01:58.369478 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 00:01:58.369490 systemd[1]: Detected virtualization kvm.
May 14 00:01:58.369498 systemd[1]: Detected architecture x86-64.
May 14 00:01:58.369507 systemd[1]: Running in initrd.
May 14 00:01:58.369516 systemd[1]: No hostname configured, using default hostname.
May 14 00:01:58.369525 systemd[1]: Hostname set to .
May 14 00:01:58.369533 systemd[1]: Initializing machine ID from VM UUID.
May 14 00:01:58.369542 systemd[1]: Queued start job for default target initrd.target.
May 14 00:01:58.369553 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 00:01:58.369562 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 00:01:58.369572 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 00:01:58.369580 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 00:01:58.369589 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 00:01:58.369599 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 00:01:58.369609 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 00:01:58.369621 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 00:01:58.369630 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 00:01:58.369639 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 00:01:58.369647 systemd[1]: Reached target paths.target - Path Units.
May 14 00:01:58.369656 systemd[1]: Reached target slices.target - Slice Units.
May 14 00:01:58.369665 systemd[1]: Reached target swap.target - Swaps.
May 14 00:01:58.369673 systemd[1]: Reached target timers.target - Timer Units.
May 14 00:01:58.369682 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 00:01:58.369691 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 00:01:58.369702 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 00:01:58.369711 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 00:01:58.369733 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 00:01:58.369742 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 00:01:58.369750 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 00:01:58.369759 systemd[1]: Reached target sockets.target - Socket Units.
May 14 00:01:58.369768 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 00:01:58.369776 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 00:01:58.369788 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 00:01:58.369797 systemd[1]: Starting systemd-fsck-usr.service...
May 14 00:01:58.369805 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 00:01:58.369814 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 00:01:58.369823 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:01:58.369832 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 00:01:58.369841 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 00:01:58.369852 systemd[1]: Finished systemd-fsck-usr.service.
May 14 00:01:58.369885 systemd-journald[193]: Collecting audit messages is disabled.
May 14 00:01:58.369912 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 00:01:58.369921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:01:58.369930 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:01:58.369940 systemd-journald[193]: Journal started
May 14 00:01:58.369971 systemd-journald[193]: Runtime Journal (/run/log/journal/0852e44c47e24bc39eb9bbfc843108ca) is 6M, max 48.2M, 42.2M free.
May 14 00:01:58.358267 systemd-modules-load[195]: Inserted module 'overlay'
May 14 00:01:58.374219 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 00:01:58.381111 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 00:01:58.387507 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 00:01:58.391249 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 00:01:58.392756 kernel: Bridge firewalling registered
May 14 00:01:58.392013 systemd-modules-load[195]: Inserted module 'br_netfilter'
May 14 00:01:58.392895 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 00:01:58.401128 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 00:01:58.403032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 00:01:58.409279 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:01:58.411768 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 00:01:58.416405 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:01:58.416823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 00:01:58.433326 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:01:58.435190 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 00:01:58.450010 dracut-cmdline[226]: dracut-dracut-053
May 14 00:01:58.608339 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 14 00:01:58.680413 systemd-resolved[232]: Positive Trust Anchors:
May 14 00:01:58.680427 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:01:58.680459 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 00:01:58.693500 systemd-resolved[232]: Defaulting to hostname 'linux'.
May 14 00:01:58.695582 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 00:01:58.695751 kernel: SCSI subsystem initialized
May 14 00:01:58.697861 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 00:01:58.762764 kernel: Loading iSCSI transport class v2.0-870.
May 14 00:01:58.833759 kernel: iscsi: registered transport (tcp)
May 14 00:01:58.885764 kernel: iscsi: registered transport (qla4xxx)
May 14 00:01:58.885830 kernel: QLogic iSCSI HBA Driver
May 14 00:01:58.940133 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 00:01:58.941687 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 00:01:58.987976 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
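The dracut-cmdline entries above echo the kernel command line as space-separated key=value parameters. As a hedged sketch (not part of the log; the `parse_cmdline` helper and the shortened command line are mine, and real kernel command-line parsing additionally handles quoting, which this ignores), such parameters can be split like this:

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: value}; bare flags map to ''."""
    params = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")  # only the first '=' separates key from value
        params[key] = value  # later duplicates win, as with the repeated rootflags=rw above
    return params

# Shortened stand-in for the command line logged by dracut-cmdline[226].
cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
           "root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected")
params = parse_cmdline(cmdline)
print(params["root"])  # → LABEL=ROOT
```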
May 14 00:01:58.988058 kernel: device-mapper: uevent: version 1.0.3
May 14 00:01:58.988071 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 00:01:59.038763 kernel: raid6: avx2x4 gen() 29973 MB/s
May 14 00:01:59.055757 kernel: raid6: avx2x2 gen() 30574 MB/s
May 14 00:01:59.080380 kernel: raid6: avx2x1 gen() 25415 MB/s
May 14 00:01:59.080428 kernel: raid6: using algorithm avx2x2 gen() 30574 MB/s
May 14 00:01:59.122744 kernel: raid6: .... xor() 19240 MB/s, rmw enabled
May 14 00:01:59.122772 kernel: raid6: using avx2x2 recovery algorithm
May 14 00:01:59.143752 kernel: xor: automatically using best checksumming function avx
May 14 00:01:59.328772 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 00:01:59.345075 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 00:01:59.419281 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:01:59.447515 systemd-udevd[415]: Using default interface naming scheme 'v255'.
May 14 00:01:59.453066 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:01:59.484360 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 00:01:59.517665 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
May 14 00:01:59.553911 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 00:01:59.586037 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 00:01:59.672622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:01:59.710193 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 00:01:59.743747 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 14 00:01:59.745976 kernel: cryptd: max_cpu_qlen set to 1000
May 14 00:01:59.750245 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 00:01:59.753131 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 00:01:59.757191 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 00:01:59.782279 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 00:01:59.782336 kernel: GPT:9289727 != 19775487
May 14 00:01:59.782354 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 00:01:59.782398 kernel: GPT:9289727 != 19775487
May 14 00:01:59.782419 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 00:01:59.782440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:01:59.773072 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:01:59.775015 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 00:01:59.782389 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 00:01:59.791018 kernel: libata version 3.00 loaded.
May 14 00:01:59.794228 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 00:01:59.795471 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:01:59.799906 kernel: AVX2 version of gcm_enc/dec engaged.
May 14 00:01:59.800032 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:01:59.805512 kernel: ahci 0000:00:1f.2: version 3.0
May 14 00:01:59.805747 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 14 00:01:59.805764 kernel: AES CTR mode by8 optimization enabled
May 14 00:01:59.802592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
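The GPT warnings above are typical of a disk image that was grown after its partition table was written: the backup (alternate) GPT header should sit in the last LBA of the disk, but the primary header still records its location on the old, smaller disk. A small sketch of the arithmetic behind the "9289727 != 19775487" message, with values copied from the log lines (the variable names are mine, not part of any tool):

```python
# From "virtio_blk virtio1: [vda] 19775488 512-byte logical blocks".
total_sectors = 19775488
# Per the GPT layout, the backup header lives in the disk's final LBA.
expected_alt_header_lba = total_sectors - 1
# What the primary header actually records, per the "GPT:9289727 != 19775487" line.
recorded_alt_header_lba = 9289727

print(expected_alt_header_lba)                             # → 19775487
print(recorded_alt_header_lba != expected_alt_header_lba)  # → True, hence the kernel warning
```

Tools such as GNU Parted or sgdisk can relocate the backup structures to the new end of the disk, which is what the "Use GNU Parted to correct GPT errors" hint refers to.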
May 14 00:01:59.810346 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 14 00:01:59.810547 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 14 00:01:59.802935 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:01:59.809167 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:01:59.816323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:01:59.822882 kernel: scsi host0: ahci
May 14 00:01:59.821570 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 00:01:59.828252 kernel: scsi host1: ahci
May 14 00:01:59.832743 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (462)
May 14 00:01:59.835581 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:01:59.841529 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (461)
May 14 00:01:59.841559 kernel: scsi host2: ahci
May 14 00:01:59.835763 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:01:59.838548 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 00:01:59.848859 kernel: scsi host3: ahci
May 14 00:01:59.851746 kernel: scsi host4: ahci
May 14 00:01:59.855772 kernel: scsi host5: ahci
May 14 00:01:59.855992 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31
May 14 00:01:59.856008 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31
May 14 00:01:59.857059 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 00:01:59.863413 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31
May 14 00:01:59.863432 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31
May 14 00:01:59.863447 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31
May 14 00:01:59.863460 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31
May 14 00:01:59.884834 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 00:01:59.902706 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 00:01:59.954084 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 00:01:59.955763 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 00:01:59.958115 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 00:01:59.960575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:01:59.985588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:01:59.987849 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:02:00.018070 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:02:00.143339 disk-uuid[554]: Primary Header is updated.
May 14 00:02:00.143339 disk-uuid[554]: Secondary Entries is updated.
May 14 00:02:00.143339 disk-uuid[554]: Secondary Header is updated.
May 14 00:02:00.147376 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:02:00.153744 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:02:00.165753 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 14 00:02:00.173757 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 14 00:02:00.175428 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 14 00:02:00.175452 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 14 00:02:00.175736 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 14 00:02:00.179652 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 14 00:02:00.182280 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 14 00:02:00.182304 kernel: ata3.00: applying bridge limits
May 14 00:02:00.182316 kernel: ata3.00: configured for UDMA/100
May 14 00:02:00.183744 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 14 00:02:00.244231 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 14 00:02:00.244487 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 00:02:00.269767 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 14 00:02:01.158762 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:02:01.159279 disk-uuid[567]: The operation has completed successfully.
May 14 00:02:01.191389 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 00:02:01.191502 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 00:02:01.224247 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 00:02:01.239764 sh[594]: Success
May 14 00:02:01.253746 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 14 00:02:01.288169 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 00:02:01.303160 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 00:02:01.316555 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 00:02:01.332462 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8
May 14 00:02:01.332490 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 14 00:02:01.332501 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 14 00:02:01.334297 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 14 00:02:01.334321 kernel: BTRFS info (device dm-0): using free space tree
May 14 00:02:01.338831 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 00:02:01.339510 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 00:02:01.340466 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 00:02:01.341557 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 00:02:01.392010 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 14 00:02:01.392061 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 00:02:01.392075 kernel: BTRFS info (device vda6): using free space tree
May 14 00:02:01.394740 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:02:01.403733 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 14 00:02:01.464574 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 00:02:01.480556 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 00:02:01.521258 systemd-networkd[770]: lo: Link UP
May 14 00:02:01.521268 systemd-networkd[770]: lo: Gained carrier
May 14 00:02:01.522903 systemd-networkd[770]: Enumeration completed
May 14 00:02:01.522996 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 00:02:01.523289 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:02:01.523293 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 00:02:01.529334 systemd-networkd[770]: eth0: Link UP
May 14 00:02:01.529338 systemd-networkd[770]: eth0: Gained carrier
May 14 00:02:01.529346 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:02:01.530976 systemd[1]: Reached target network.target - Network.
May 14 00:02:01.559520 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 00:02:01.560620 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 00:02:01.580821 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 00:02:01.632055 ignition[774]: Ignition 2.20.0
May 14 00:02:01.632067 ignition[774]: Stage: fetch-offline
May 14 00:02:01.632115 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 14 00:02:01.632126 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:02:01.632233 ignition[774]: parsed url from cmdline: ""
May 14 00:02:01.632239 ignition[774]: no config URL provided
May 14 00:02:01.632246 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
May 14 00:02:01.632257 ignition[774]: no config at "/usr/lib/ignition/user.ign"
May 14 00:02:01.632288 ignition[774]: op(1): [started] loading QEMU firmware config module
May 14 00:02:01.632294 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 00:02:01.684325 ignition[774]: op(1): [finished] loading QEMU firmware config module
May 14 00:02:01.749111 ignition[774]: parsing config with SHA512: 973e1180f083aba7310685ca494506900467ff39f6e9056e124df54358701297b3dfacd3e1e95359bc31a12a464cc50a7303cadec1e2c7b9bbde841b50612faa
May 14 00:02:01.755902 unknown[774]: fetched base config from "system"
May 14 00:02:01.755916 unknown[774]: fetched user config from "qemu"
May 14 00:02:01.764672 ignition[774]: fetch-offline: fetch-offline passed
May 14 00:02:01.764809 ignition[774]: Ignition finished successfully
May 14 00:02:01.764768 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.106
May 14 00:02:01.764779 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
May 14 00:02:01.771140 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 00:02:01.771428 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 00:02:01.772456 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 00:02:01.820154 ignition[785]: Ignition 2.20.0
May 14 00:02:01.820166 ignition[785]: Stage: kargs
May 14 00:02:01.820319 ignition[785]: no configs at "/usr/lib/ignition/base.d"
May 14 00:02:01.820332 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:02:01.821220 ignition[785]: kargs: kargs passed
May 14 00:02:01.821274 ignition[785]: Ignition finished successfully
May 14 00:02:01.825758 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 00:02:01.829102 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
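Ignition logs the SHA512 digest of the config it parsed, which lets a rendered config file be matched against the log line after the fact. A hedged sketch of computing such a digest (the config bytes below are a stand-in, not the actual config delivered via qemu_fw_cfg, so the digest will differ from the one in the log):

```python
import hashlib

# Stand-in config document; the real config fetched from QEMU is not shown in the log.
config_bytes = b'{"ignition": {"version": "3.0.0"}}'
digest = hashlib.sha512(config_bytes).hexdigest()

print(len(digest))  # → 128 hex characters, the same width as the digest Ignition logged
```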
May 14 00:02:01.859040 ignition[794]: Ignition 2.20.0
May 14 00:02:01.859052 ignition[794]: Stage: disks
May 14 00:02:01.859195 ignition[794]: no configs at "/usr/lib/ignition/base.d"
May 14 00:02:01.859207 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:02:01.870153 ignition[794]: disks: disks passed
May 14 00:02:01.870203 ignition[794]: Ignition finished successfully
May 14 00:02:01.873553 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 00:02:01.874870 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 00:02:01.876825 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 00:02:01.878207 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 00:02:01.880393 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 00:02:01.895471 systemd[1]: Reached target basic.target - Basic System.
May 14 00:02:01.898748 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 00:02:01.944905 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 14 00:02:02.097756 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 00:02:02.102479 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 00:02:02.236758 kernel: EXT4-fs (vda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none.
May 14 00:02:02.237710 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 00:02:02.239182 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 00:02:02.242207 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 00:02:02.244112 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 00:02:02.245442 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 00:02:02.245494 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 00:02:02.245523 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 00:02:02.268219 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 00:02:02.269612 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 00:02:02.315776 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812)
May 14 00:02:02.315804 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 14 00:02:02.315819 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 00:02:02.315832 kernel: BTRFS info (device vda6): using free space tree
May 14 00:02:02.319733 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:02:02.321463 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 00:02:02.347459 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
May 14 00:02:02.357662 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
May 14 00:02:02.362287 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
May 14 00:02:02.367164 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 00:02:02.454239 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 00:02:02.474996 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 00:02:02.476604 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 00:02:02.498477 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 00:02:02.515017 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 14 00:02:02.546563 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 00:02:02.554207 ignition[928]: INFO : Ignition 2.20.0
May 14 00:02:02.554207 ignition[928]: INFO : Stage: mount
May 14 00:02:02.555965 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:02:02.555965 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:02:02.591399 ignition[928]: INFO : mount: mount passed
May 14 00:02:02.592233 ignition[928]: INFO : Ignition finished successfully
May 14 00:02:02.595319 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 00:02:02.597545 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 00:02:02.649565 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 00:02:02.686760 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
May 14 00:02:02.690394 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 14 00:02:02.690416 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 00:02:02.690435 kernel: BTRFS info (device vda6): using free space tree
May 14 00:02:02.693749 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 00:02:02.695185 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 00:02:02.763676 ignition[956]: INFO : Ignition 2.20.0
May 14 00:02:02.763676 ignition[956]: INFO : Stage: files
May 14 00:02:02.831153 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:02:02.831153 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:02:02.831153 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
May 14 00:02:02.831153 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 00:02:02.831153 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 00:02:02.782878 systemd-networkd[770]: eth0: Gained IPv6LL
May 14 00:02:02.838777 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 00:02:02.838777 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 00:02:02.838777 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 00:02:02.838777 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 14 00:02:02.838777 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 14 00:02:02.832695 unknown[956]: wrote ssh authorized keys file for user: core
May 14 00:02:02.929308 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 00:02:03.243464 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 14 00:02:03.243464 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 00:02:03.248607 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 14 00:02:03.612557 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 00:02:03.714977 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a):
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 00:02:03.730145 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 14 00:02:04.142930 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 00:02:04.450150 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 00:02:04.450150 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 14 00:02:04.514690 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:02:04.514690 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:02:04.514690 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 14 00:02:04.514690 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 14 00:02:04.514690 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:02:04.514690 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:02:04.514690 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 14 00:02:04.514690 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 14 00:02:04.613957 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:02:04.648407 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:02:04.650434 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 14 00:02:04.650434 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 14 00:02:04.650434 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 14 00:02:04.650434 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 00:02:04.650434 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 00:02:04.650434 ignition[956]: INFO : files: files passed May 14 00:02:04.650434 ignition[956]: INFO : Ignition finished successfully May 14 00:02:04.664068 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 00:02:04.666737 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 00:02:04.667445 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 00:02:04.682036 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 00:02:04.682162 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 14 00:02:04.725479 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory May 14 00:02:04.729224 initrd-setup-root-after-ignition[992]: grep: May 14 00:02:04.730548 initrd-setup-root-after-ignition[988]: grep: May 14 00:02:04.731551 initrd-setup-root-after-ignition[992]: /sysroot/etc/flatcar/enabled-sysext.conf May 14 00:02:04.732976 initrd-setup-root-after-ignition[988]: /sysroot/etc/flatcar/enabled-sysext.conf May 14 00:02:04.732976 initrd-setup-root-after-ignition[992]: : No such file or directory May 14 00:02:04.735588 initrd-setup-root-after-ignition[988]: : No such file or directory May 14 00:02:04.735588 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 00:02:04.735292 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 00:02:04.738113 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 00:02:04.740438 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 00:02:04.871437 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 00:02:04.871595 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 00:02:04.907965 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 00:02:04.910384 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 00:02:04.912638 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 00:02:04.913807 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 00:02:04.943285 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 00:02:04.996304 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
May 14 00:02:05.027284 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 00:02:05.027466 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 00:02:05.031175 systemd[1]: Stopped target timers.target - Timer Units. May 14 00:02:05.033490 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 00:02:05.033622 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 00:02:05.036813 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 00:02:05.037178 systemd[1]: Stopped target basic.target - Basic System. May 14 00:02:05.037520 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 00:02:05.038086 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 00:02:05.038428 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 00:02:05.038817 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 00:02:05.106208 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 00:02:05.107175 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 00:02:05.107492 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 00:02:05.108000 systemd[1]: Stopped target swap.target - Swaps. May 14 00:02:05.108314 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 00:02:05.108458 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 00:02:05.114998 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 00:02:05.115535 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 00:02:05.116013 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 00:02:05.121339 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 14 00:02:05.124798 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 00:02:05.124931 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 00:02:05.127970 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 00:02:05.128091 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 00:02:05.129208 systemd[1]: Stopped target paths.target - Path Units. May 14 00:02:05.186228 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 00:02:05.191849 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 00:02:05.193257 systemd[1]: Stopped target slices.target - Slice Units. May 14 00:02:05.195593 systemd[1]: Stopped target sockets.target - Socket Units. May 14 00:02:05.196554 systemd[1]: iscsid.socket: Deactivated successfully. May 14 00:02:05.196664 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 00:02:05.199273 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 00:02:05.199362 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 00:02:05.200221 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 00:02:05.200356 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 00:02:05.202071 systemd[1]: ignition-files.service: Deactivated successfully. May 14 00:02:05.202180 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 00:02:05.204995 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 00:02:05.232833 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 00:02:05.232958 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 00:02:05.250300 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
May 14 00:02:05.250394 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 00:02:05.304399 ignition[1012]: INFO : Ignition 2.20.0 May 14 00:02:05.304399 ignition[1012]: INFO : Stage: umount May 14 00:02:05.304399 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:02:05.304399 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:02:05.304399 ignition[1012]: INFO : umount: umount passed May 14 00:02:05.304399 ignition[1012]: INFO : Ignition finished successfully May 14 00:02:05.250525 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 00:02:05.303495 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 00:02:05.303619 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 00:02:05.308384 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 00:02:05.308511 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 00:02:05.312146 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 00:02:05.312272 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 00:02:05.315570 systemd[1]: Stopped target network.target - Network. May 14 00:02:05.316046 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 00:02:05.316117 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 00:02:05.316414 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 00:02:05.316459 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 00:02:05.316749 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 00:02:05.316820 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 00:02:05.317252 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 00:02:05.317295 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
May 14 00:02:05.317693 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 00:02:05.318082 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 00:02:05.329634 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 00:02:05.329928 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 00:02:05.335119 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 00:02:05.335438 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 00:02:05.335488 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 00:02:05.373804 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 00:02:05.374102 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 00:02:05.374236 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 00:02:05.378040 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 00:02:05.378564 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 00:02:05.378638 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 00:02:05.382445 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 00:02:05.383101 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 00:02:05.383155 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 00:02:05.383522 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:02:05.383583 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 00:02:05.388931 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 00:02:05.388983 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 14 00:02:05.432128 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 00:02:05.439707 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:02:05.452913 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 00:02:05.453134 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 00:02:05.491575 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 00:02:05.491654 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 00:02:05.494176 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 00:02:05.494220 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 00:02:05.495248 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 00:02:05.495310 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 00:02:05.496994 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 00:02:05.497048 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 00:02:05.497690 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 00:02:05.497765 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 00:02:05.499550 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 00:02:05.584685 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 00:02:05.584772 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 00:02:05.587197 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:02:05.587252 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:02:05.592049 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
May 14 00:02:05.592119 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 00:02:05.592522 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 00:02:05.592646 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 00:02:05.597374 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 00:02:05.597515 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 00:02:06.334648 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 00:02:06.353089 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 00:02:06.353215 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 00:02:06.355533 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 00:02:06.356966 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 00:02:06.357043 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 00:02:06.381459 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 00:02:06.409058 systemd[1]: Switching root. May 14 00:02:06.446538 systemd-journald[193]: Journal stopped May 14 00:02:08.859290 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
May 14 00:02:08.859364 kernel: SELinux: policy capability network_peer_controls=1 May 14 00:02:08.859386 kernel: SELinux: policy capability open_perms=1 May 14 00:02:08.859402 kernel: SELinux: policy capability extended_socket_class=1 May 14 00:02:08.859416 kernel: SELinux: policy capability always_check_network=0 May 14 00:02:08.859443 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 00:02:08.859455 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 00:02:08.859467 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 00:02:08.859478 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 00:02:08.860775 kernel: audit: type=1403 audit(1747180927.568:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 00:02:08.860815 systemd[1]: Successfully loaded SELinux policy in 58.416ms. May 14 00:02:08.860845 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.337ms. May 14 00:02:08.860864 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 00:02:08.860882 systemd[1]: Detected virtualization kvm. May 14 00:02:08.860905 systemd[1]: Detected architecture x86-64. May 14 00:02:08.860921 systemd[1]: Detected first boot. May 14 00:02:08.860938 systemd[1]: Initializing machine ID from VM UUID. May 14 00:02:08.860954 zram_generator::config[1060]: No configuration found. 
May 14 00:02:08.860973 kernel: Guest personality initialized and is inactive May 14 00:02:08.860988 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 00:02:08.861003 kernel: Initialized host personality May 14 00:02:08.861027 kernel: NET: Registered PF_VSOCK protocol family May 14 00:02:08.861045 systemd[1]: Populated /etc with preset unit settings. May 14 00:02:08.861064 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 00:02:08.861081 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 00:02:08.861099 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 00:02:08.861115 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 00:02:08.861132 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 00:02:08.861148 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 00:02:08.861165 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 00:02:08.861182 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 00:02:08.861204 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 00:02:08.861221 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 00:02:08.861238 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 00:02:08.861255 systemd[1]: Created slice user.slice - User and Session Slice. May 14 00:02:08.861272 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 00:02:08.861289 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 00:02:08.861306 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
May 14 00:02:08.861323 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 00:02:08.861343 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 00:02:08.861361 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 00:02:08.861378 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 00:02:08.861396 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 00:02:08.861413 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 00:02:08.861440 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 00:02:08.861464 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 00:02:08.861480 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 00:02:08.861500 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 00:02:08.861516 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 00:02:08.861533 systemd[1]: Reached target slices.target - Slice Units. May 14 00:02:08.861550 systemd[1]: Reached target swap.target - Swaps. May 14 00:02:08.861567 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 00:02:08.861584 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 00:02:08.861600 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 00:02:08.861617 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 00:02:08.861634 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 00:02:08.861664 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 14 00:02:08.861682 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 00:02:08.861699 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 00:02:08.861732 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 00:02:08.861847 systemd[1]: Mounting media.mount - External Media Directory... May 14 00:02:08.861884 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:08.861900 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 00:02:08.861917 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 00:02:08.861933 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 00:02:08.861960 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 00:02:08.861979 systemd[1]: Reached target machines.target - Containers. May 14 00:02:08.861996 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 00:02:08.862013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:02:08.862030 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 00:02:08.862046 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 00:02:08.862063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:02:08.862080 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 00:02:08.862100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:02:08.862117 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 14 00:02:08.862134 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 00:02:08.862199 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 00:02:08.862222 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 00:02:08.862239 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 00:02:08.862256 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 00:02:08.862273 systemd[1]: Stopped systemd-fsck-usr.service. May 14 00:02:08.862290 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:02:08.862311 kernel: loop: module loaded May 14 00:02:08.862327 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 00:02:08.862343 kernel: fuse: init (API version 7.39) May 14 00:02:08.862358 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 00:02:08.862375 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 00:02:08.862394 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 00:02:08.862411 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 00:02:08.862427 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 00:02:08.862448 systemd[1]: verity-setup.service: Deactivated successfully. May 14 00:02:08.862465 kernel: ACPI: bus type drm_connector registered May 14 00:02:08.862481 systemd[1]: Stopped verity-setup.service. 
May 14 00:02:08.862498 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:08.862515 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 00:02:08.862536 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 00:02:08.862553 systemd[1]: Mounted media.mount - External Media Directory. May 14 00:02:08.862569 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 00:02:08.862625 systemd-journald[1128]: Collecting audit messages is disabled. May 14 00:02:08.862667 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 00:02:08.862685 systemd-journald[1128]: Journal started May 14 00:02:08.862735 systemd-journald[1128]: Runtime Journal (/run/log/journal/0852e44c47e24bc39eb9bbfc843108ca) is 6M, max 48.2M, 42.2M free. May 14 00:02:08.313230 systemd[1]: Queued start job for default target multi-user.target. May 14 00:02:08.326196 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 14 00:02:08.326788 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 00:02:08.327237 systemd[1]: systemd-journald.service: Consumed 1.320s CPU time. May 14 00:02:08.865150 systemd[1]: Started systemd-journald.service - Journal Service. May 14 00:02:08.866061 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 00:02:08.889772 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 00:02:08.891523 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 00:02:08.891854 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 00:02:08.893455 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:02:08.893759 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 14 00:02:08.895544 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:02:08.895855 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 00:02:08.897369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:02:08.897644 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:02:08.918529 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 00:02:08.918820 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 00:02:08.920301 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:02:08.920544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:02:08.922046 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 00:02:08.923698 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 00:02:08.925944 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 00:02:08.927594 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 00:02:08.943088 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 00:02:08.949475 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 00:02:08.951813 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 00:02:08.953266 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 00:02:08.953310 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 00:02:08.956043 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 00:02:08.976785 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
May 14 00:02:08.981009 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 00:02:08.982341 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:02:08.984390 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 00:02:08.988128 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 00:02:08.990559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:02:08.993975 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 00:02:08.996968 systemd-journald[1128]: Time spent on flushing to /var/log/journal/0852e44c47e24bc39eb9bbfc843108ca is 15.278ms for 1061 entries. May 14 00:02:08.996968 systemd-journald[1128]: System Journal (/var/log/journal/0852e44c47e24bc39eb9bbfc843108ca) is 8M, max 195.6M, 187.6M free. May 14 00:02:09.213186 systemd-journald[1128]: Received client request to flush runtime journal. May 14 00:02:09.213246 kernel: loop0: detected capacity change from 0 to 109808 May 14 00:02:09.213271 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 00:02:08.995583 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 00:02:08.997192 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 00:02:08.999326 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 00:02:09.034430 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 00:02:09.077584 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 00:02:09.079160 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
May 14 00:02:09.080429 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 00:02:09.081945 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 00:02:09.089405 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 00:02:09.112675 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 14 00:02:09.121938 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 00:02:09.132741 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 14 00:02:09.181848 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 00:02:09.198559 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 00:02:09.216454 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 00:02:09.235560 kernel: loop1: detected capacity change from 0 to 218376 May 14 00:02:09.234861 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 00:02:09.237110 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 00:02:09.255089 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 00:02:09.258131 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. May 14 00:02:09.258154 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. May 14 00:02:09.266664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 14 00:02:09.340776 kernel: loop2: detected capacity change from 0 to 151640 May 14 00:02:09.452762 kernel: loop3: detected capacity change from 0 to 109808 May 14 00:02:09.482741 kernel: loop4: detected capacity change from 0 to 218376 May 14 00:02:09.493756 kernel: loop5: detected capacity change from 0 to 151640 May 14 00:02:09.499391 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 00:02:09.500251 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 00:02:09.524826 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 14 00:02:09.525580 (sd-merge)[1203]: Merged extensions into '/usr'. May 14 00:02:09.548755 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)... May 14 00:02:09.548779 systemd[1]: Reloading... May 14 00:02:09.636758 zram_generator::config[1232]: No configuration found. May 14 00:02:09.679571 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 00:02:09.766313 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:02:09.834855 systemd[1]: Reloading finished in 285 ms. May 14 00:02:09.854979 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 00:02:09.857280 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 00:02:09.879551 systemd[1]: Starting ensure-sysext.service... May 14 00:02:09.891300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 00:02:09.910520 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)... May 14 00:02:09.910539 systemd[1]: Reloading... 
May 14 00:02:09.919961 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 00:02:09.920265 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 00:02:09.921406 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 00:02:09.922022 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 14 00:02:09.922133 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 14 00:02:09.926367 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. May 14 00:02:09.926382 systemd-tmpfiles[1270]: Skipping /boot May 14 00:02:09.939674 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. May 14 00:02:09.939691 systemd-tmpfiles[1270]: Skipping /boot May 14 00:02:09.983764 zram_generator::config[1302]: No configuration found. May 14 00:02:10.235187 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:02:10.302170 systemd[1]: Reloading finished in 391 ms. May 14 00:02:10.314195 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 00:02:10.316439 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 00:02:10.336637 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:02:10.339296 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 00:02:10.352112 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 00:02:10.396594 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 14 00:02:10.407690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 00:02:10.455930 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 00:02:10.466846 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 00:02:10.473646 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:10.474261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:02:10.478855 augenrules[1364]: No rules May 14 00:02:10.485460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:02:10.495093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:02:10.496519 systemd-udevd[1348]: Using default interface naming scheme 'v255'. May 14 00:02:10.547520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 00:02:10.549108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:02:10.549496 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:02:10.549799 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:10.553815 systemd[1]: audit-rules.service: Deactivated successfully. May 14 00:02:10.554349 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 00:02:10.591114 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
May 14 00:02:10.612915 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 00:02:10.615287 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 00:02:10.619562 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 00:02:10.623265 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:02:10.623493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:02:10.648028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:02:10.648255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:02:10.654930 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:02:10.655510 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:02:10.687143 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 00:02:10.709083 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 14 00:02:10.723964 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:10.727839 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:02:10.729232 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:02:10.748939 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:02:10.774405 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 00:02:10.779929 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:02:10.806446 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 14 00:02:10.807893 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 14 00:02:10.809318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:02:10.809374 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:02:10.812240 kernel: ACPI: button: Power Button [PWRF] May 14 00:02:10.813607 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 00:02:10.817801 augenrules[1405]: /sbin/augenrules: No change May 14 00:02:10.822988 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 00:02:10.824527 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:02:10.824873 augenrules[1433]: No rules May 14 00:02:10.824565 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:02:10.832611 systemd[1]: Finished ensure-sysext.service. May 14 00:02:10.834156 systemd[1]: audit-rules.service: Deactivated successfully. May 14 00:02:10.834503 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 00:02:10.837847 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1389) May 14 00:02:10.840752 systemd-resolved[1341]: Positive Trust Anchors: May 14 00:02:10.840991 systemd-resolved[1341]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:02:10.841034 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 00:02:10.847017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:02:10.847279 systemd-resolved[1341]: Defaulting to hostname 'linux'. May 14 00:02:10.847372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:02:10.877829 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 00:02:10.879742 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:02:10.880029 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 00:02:10.882013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:02:10.882329 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:02:10.885322 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:02:10.885644 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:02:10.887034 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 14 00:02:10.887372 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 14 00:02:10.887566 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 14 00:02:10.889298 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 14 00:02:10.890759 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
May 14 00:02:10.900790 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 14 00:02:10.936784 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 00:02:10.945790 kernel: mousedev: PS/2 mouse device common for all mice May 14 00:02:10.986887 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 00:02:11.004466 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 00:02:11.005936 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:02:11.006046 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 00:02:11.012918 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 00:02:11.025891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 00:02:11.075212 kernel: kvm_amd: TSC scaling supported May 14 00:02:11.075282 kernel: kvm_amd: Nested Virtualization enabled May 14 00:02:11.075326 kernel: kvm_amd: Nested Paging enabled May 14 00:02:11.075344 kernel: kvm_amd: LBR virtualization supported May 14 00:02:11.075360 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 14 00:02:11.075394 kernel: kvm_amd: Virtual GIF supported May 14 00:02:11.098784 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 00:02:11.118853 kernel: EDAC MC: Ver: 3.0.0 May 14 00:02:11.126826 systemd-networkd[1424]: lo: Link UP May 14 00:02:11.126837 systemd-networkd[1424]: lo: Gained carrier May 14 00:02:11.128797 systemd-networkd[1424]: Enumeration completed May 14 00:02:11.128947 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 14 00:02:11.129649 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:02:11.129655 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:02:11.130398 systemd[1]: Reached target network.target - Network. May 14 00:02:11.131959 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 00:02:11.132458 systemd-networkd[1424]: eth0: Link UP May 14 00:02:11.132465 systemd-networkd[1424]: eth0: Gained carrier May 14 00:02:11.132494 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:02:11.133457 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 00:02:11.147804 systemd-networkd[1424]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:02:11.149970 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 14 00:02:11.169446 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 00:02:11.172849 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 00:02:11.185548 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 00:02:11.185839 systemd[1]: Reached target time-set.target - System Time Set. May 14 00:02:11.805696 systemd-resolved[1341]: Clock change detected. Flushing caches. May 14 00:02:11.805755 systemd-timesyncd[1452]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 00:02:11.805803 systemd-timesyncd[1452]: Initial clock synchronization to Wed 2025-05-14 00:02:11.805643 UTC. May 14 00:02:11.806864 lvm[1463]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. May 14 00:02:11.817310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:02:11.866869 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 00:02:11.879666 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 00:02:11.881069 systemd[1]: Reached target sysinit.target - System Initialization. May 14 00:02:11.882521 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 00:02:11.884006 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 00:02:11.885744 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 00:02:11.887139 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 00:02:11.888877 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 00:02:11.916906 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:02:11.916955 systemd[1]: Reached target paths.target - Path Units. May 14 00:02:11.918190 systemd[1]: Reached target timers.target - Timer Units. May 14 00:02:11.920437 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 00:02:11.923987 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 00:02:11.953900 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 00:02:11.955662 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 00:02:11.985730 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 00:02:11.991247 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
May 14 00:02:12.027414 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 00:02:12.030757 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 00:02:12.032899 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 00:02:12.058303 systemd[1]: Reached target sockets.target - Socket Units. May 14 00:02:12.059481 systemd[1]: Reached target basic.target - Basic System. May 14 00:02:12.060612 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 00:02:12.060653 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 00:02:12.061991 systemd[1]: Starting containerd.service - containerd container runtime... May 14 00:02:12.062362 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:02:12.064367 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 00:02:12.097410 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 00:02:12.101434 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 00:02:12.102721 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 00:02:12.104458 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 00:02:12.139464 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 00:02:12.144415 jq[1475]: false May 14 00:02:12.144431 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 00:02:12.170975 dbus-daemon[1474]: [system] SELinux support is enabled May 14 00:02:12.172883 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 14 00:02:12.181621 extend-filesystems[1476]: Found loop3 May 14 00:02:12.206395 extend-filesystems[1476]: Found loop4 May 14 00:02:12.206395 extend-filesystems[1476]: Found loop5 May 14 00:02:12.206395 extend-filesystems[1476]: Found sr0 May 14 00:02:12.206395 extend-filesystems[1476]: Found vda May 14 00:02:12.206395 extend-filesystems[1476]: Found vda1 May 14 00:02:12.206395 extend-filesystems[1476]: Found vda2 May 14 00:02:12.206395 extend-filesystems[1476]: Found vda3 May 14 00:02:12.206395 extend-filesystems[1476]: Found usr May 14 00:02:12.206395 extend-filesystems[1476]: Found vda4 May 14 00:02:12.206395 extend-filesystems[1476]: Found vda6 May 14 00:02:12.206395 extend-filesystems[1476]: Found vda7 May 14 00:02:12.206395 extend-filesystems[1476]: Found vda9 May 14 00:02:12.206395 extend-filesystems[1476]: Checking size of /dev/vda9 May 14 00:02:12.246936 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 00:02:12.249537 extend-filesystems[1476]: Resized partition /dev/vda9 May 14 00:02:12.249688 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 00:02:12.250516 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 00:02:12.252396 systemd[1]: Starting update-engine.service - Update Engine... May 14 00:02:12.258148 extend-filesystems[1493]: resize2fs 1.47.2 (1-Jan-2025) May 14 00:02:12.288216 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1379) May 14 00:02:12.262037 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 00:02:12.289615 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 14 00:02:12.292702 jq[1497]: true May 14 00:02:12.294219 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 00:02:12.323777 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:02:12.324237 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 00:02:12.327663 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:02:12.327948 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 00:02:12.332798 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 00:02:12.333045 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 00:02:12.338302 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 00:02:12.349296 update_engine[1495]: I20250514 00:02:12.346726 1495 main.cc:92] Flatcar Update Engine starting May 14 00:02:12.352068 update_engine[1495]: I20250514 00:02:12.352015 1495 update_check_scheduler.cc:74] Next update check in 2m43s May 14 00:02:12.363632 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 00:02:12.369228 jq[1501]: true May 14 00:02:12.383255 systemd[1]: Started update-engine.service - Update Engine. May 14 00:02:12.409616 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 00:02:12.409648 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 00:02:12.411115 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
May 14 00:02:12.411146 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 00:02:12.414153 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 00:02:12.496755 tar[1500]: linux-amd64/LICENSE May 14 00:02:12.500339 tar[1500]: linux-amd64/helm May 14 00:02:12.540365 sshd_keygen[1496]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:02:12.565538 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:02:12.578787 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 00:02:13.250811 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 00:02:12.581229 systemd-logind[1494]: Watching system buttons on /dev/input/event1 (Power Button) May 14 00:02:13.261450 extend-filesystems[1493]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 00:02:13.261450 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 00:02:13.261450 extend-filesystems[1493]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
May 14 00:02:12.581255 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 00:02:13.295400 extend-filesystems[1476]: Resized filesystem in /dev/vda9 May 14 00:02:13.321714 containerd[1502]: time="2025-05-14T00:02:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 00:02:13.321714 containerd[1502]: time="2025-05-14T00:02:13.295017090Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 14 00:02:13.321714 containerd[1502]: time="2025-05-14T00:02:13.313981332Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.443µs" May 14 00:02:13.321714 containerd[1502]: time="2025-05-14T00:02:13.314010787Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 00:02:13.321714 containerd[1502]: time="2025-05-14T00:02:13.314030955Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 00:02:13.321714 containerd[1502]: time="2025-05-14T00:02:13.314225330Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 00:02:13.321714 containerd[1502]: time="2025-05-14T00:02:13.314246008Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 00:02:13.321714 containerd[1502]: time="2025-05-14T00:02:13.314289710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 00:02:13.321714 containerd[1502]: time="2025-05-14T00:02:13.314353610Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 
May 14 00:02:13.321714 containerd[1502]: time="2025-05-14T00:02:13.314364250Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 00:02:13.321714 containerd[1502]: time="2025-05-14T00:02:13.314628255Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 00:02:13.321714 containerd[1502]: time="2025-05-14T00:02:13.314641801Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 00:02:12.581515 systemd-logind[1494]: New seat seat0. May 14 00:02:13.322352 containerd[1502]: time="2025-05-14T00:02:13.314651409Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 00:02:13.322352 containerd[1502]: time="2025-05-14T00:02:13.314659374Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 00:02:13.322352 containerd[1502]: time="2025-05-14T00:02:13.314754963Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 00:02:13.322352 containerd[1502]: time="2025-05-14T00:02:13.315006224Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 00:02:13.322352 containerd[1502]: time="2025-05-14T00:02:13.315041671Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 00:02:13.322352 containerd[1502]: time="2025-05-14T00:02:13.315052181Z" level=info msg="loading plugin" 
id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 00:02:13.322352 containerd[1502]: time="2025-05-14T00:02:13.315075594Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 00:02:13.322352 containerd[1502]: time="2025-05-14T00:02:13.315340501Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 00:02:13.322352 containerd[1502]: time="2025-05-14T00:02:13.315405734Z" level=info msg="metadata content store policy set" policy=shared May 14 00:02:12.584443 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 00:02:12.586635 systemd[1]: Started systemd-logind.service - User Login Management. May 14 00:02:12.611696 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:02:12.611973 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 00:02:12.624708 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 00:02:12.675686 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 00:02:12.712473 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 00:02:12.715699 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 00:02:12.734667 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 00:02:12.758221 systemd[1]: Reached target getty.target - Login Prompts. May 14 00:02:12.761157 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:36676.service - OpenSSH per-connection server daemon (10.0.0.1:36676). May 14 00:02:13.114475 systemd-networkd[1424]: eth0: Gained IPv6LL May 14 00:02:13.118755 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 00:02:13.124137 systemd[1]: Reached target network-online.target - Network is Online. May 14 00:02:13.127177 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
May 14 00:02:13.132371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:13.144185 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 00:02:13.258132 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:02:13.258409 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 00:02:13.260257 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 00:02:13.260526 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 00:02:13.288620 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 00:02:13.337876 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 00:02:13.538850 bash[1527]: Updated "/home/core/.ssh/authorized_keys" May 14 00:02:13.540158 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 00:02:13.544725 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 14 00:02:13.577477 containerd[1502]: time="2025-05-14T00:02:13.577391905Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 00:02:13.577477 containerd[1502]: time="2025-05-14T00:02:13.577497483Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 00:02:13.577647 containerd[1502]: time="2025-05-14T00:02:13.577523211Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 00:02:13.577647 containerd[1502]: time="2025-05-14T00:02:13.577537127Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 00:02:13.577647 containerd[1502]: time="2025-05-14T00:02:13.577552526Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 00:02:13.577647 containerd[1502]: time="2025-05-14T00:02:13.577563797Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 00:02:13.577647 containerd[1502]: time="2025-05-14T00:02:13.577577233Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 00:02:13.577647 containerd[1502]: time="2025-05-14T00:02:13.577589205Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 00:02:13.577647 containerd[1502]: time="2025-05-14T00:02:13.577601959Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 00:02:13.577647 containerd[1502]: time="2025-05-14T00:02:13.577612889Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 00:02:13.577647 containerd[1502]: time="2025-05-14T00:02:13.577622738Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 
00:02:13.577647 containerd[1502]: time="2025-05-14T00:02:13.577635101Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 00:02:13.577920 containerd[1502]: time="2025-05-14T00:02:13.577826650Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 00:02:13.577920 containerd[1502]: time="2025-05-14T00:02:13.577846357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 00:02:13.577920 containerd[1502]: time="2025-05-14T00:02:13.577857969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 00:02:13.577920 containerd[1502]: time="2025-05-14T00:02:13.577869601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 00:02:13.577920 containerd[1502]: time="2025-05-14T00:02:13.577880882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 00:02:13.577920 containerd[1502]: time="2025-05-14T00:02:13.577891232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 00:02:13.577920 containerd[1502]: time="2025-05-14T00:02:13.577903454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 00:02:13.577920 containerd[1502]: time="2025-05-14T00:02:13.577913654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 00:02:13.577920 containerd[1502]: time="2025-05-14T00:02:13.577924985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 00:02:13.578154 containerd[1502]: time="2025-05-14T00:02:13.577937037Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 00:02:13.578154 containerd[1502]: time="2025-05-14T00:02:13.577948559Z" 
level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 00:02:13.578154 containerd[1502]: time="2025-05-14T00:02:13.578025704Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 00:02:13.578154 containerd[1502]: time="2025-05-14T00:02:13.578038868Z" level=info msg="Start snapshots syncer" May 14 00:02:13.578154 containerd[1502]: time="2025-05-14T00:02:13.578065208Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 00:02:13.578429 containerd[1502]: time="2025-05-14T00:02:13.578322871Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\"
:true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 00:02:13.578429 containerd[1502]: time="2025-05-14T00:02:13.578386981Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578440602Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578542683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578561048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578572670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578582298Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578600913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578611903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578622493Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578644184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578655986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578664853Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578694097Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578706932Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 00:02:13.578775 containerd[1502]: time="2025-05-14T00:02:13.578715548Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 00:02:13.579400 containerd[1502]: time="2025-05-14T00:02:13.578725156Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 00:02:13.579400 containerd[1502]: time="2025-05-14T00:02:13.578733622Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 00:02:13.579400 containerd[1502]: time="2025-05-14T00:02:13.578743260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 00:02:13.579400 containerd[1502]: time="2025-05-14T00:02:13.578758979Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 
00:02:13.579400 containerd[1502]: time="2025-05-14T00:02:13.578776763Z" level=info msg="runtime interface created" May 14 00:02:13.579400 containerd[1502]: time="2025-05-14T00:02:13.578783084Z" level=info msg="created NRI interface" May 14 00:02:13.579400 containerd[1502]: time="2025-05-14T00:02:13.578791390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 00:02:13.579400 containerd[1502]: time="2025-05-14T00:02:13.578802290Z" level=info msg="Connect containerd service" May 14 00:02:13.579400 containerd[1502]: time="2025-05-14T00:02:13.578824462Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 00:02:13.579731 containerd[1502]: time="2025-05-14T00:02:13.579508565Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:02:13.673329 sshd[1549]: Connection closed by authenticating user core 10.0.0.1 port 36676 [preauth] May 14 00:02:13.676342 systemd[1]: sshd@0-10.0.0.106:22-10.0.0.1:36676.service: Deactivated successfully. May 14 00:02:13.681223 containerd[1502]: time="2025-05-14T00:02:13.680650849Z" level=info msg="Start subscribing containerd event" May 14 00:02:13.681223 containerd[1502]: time="2025-05-14T00:02:13.680712375Z" level=info msg="Start recovering state" May 14 00:02:13.681223 containerd[1502]: time="2025-05-14T00:02:13.680753351Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 00:02:13.681223 containerd[1502]: time="2025-05-14T00:02:13.680821068Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 14 00:02:13.681223 containerd[1502]: time="2025-05-14T00:02:13.680852427Z" level=info msg="Start event monitor" May 14 00:02:13.681223 containerd[1502]: time="2025-05-14T00:02:13.680873016Z" level=info msg="Start cni network conf syncer for default" May 14 00:02:13.681223 containerd[1502]: time="2025-05-14T00:02:13.680884457Z" level=info msg="Start streaming server" May 14 00:02:13.681223 containerd[1502]: time="2025-05-14T00:02:13.680899816Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 00:02:13.681223 containerd[1502]: time="2025-05-14T00:02:13.680909043Z" level=info msg="runtime interface starting up..." May 14 00:02:13.681223 containerd[1502]: time="2025-05-14T00:02:13.680915766Z" level=info msg="starting plugins..." May 14 00:02:13.681223 containerd[1502]: time="2025-05-14T00:02:13.680931445Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 00:02:13.681223 containerd[1502]: time="2025-05-14T00:02:13.681098629Z" level=info msg="containerd successfully booted in 0.388898s" May 14 00:02:13.682381 systemd[1]: Started containerd.service - containerd container runtime. May 14 00:02:13.726966 tar[1500]: linux-amd64/README.md May 14 00:02:13.758564 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 00:02:14.218824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:14.220536 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 00:02:14.223427 systemd[1]: Startup finished in 1.255s (kernel) + 9.625s (initrd) + 6.108s (userspace) = 16.989s. 
May 14 00:02:14.224798 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:02:14.620893 kubelet[1607]: E0514 00:02:14.620796 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:02:14.624776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:02:14.624980 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:02:14.625372 systemd[1]: kubelet.service: Consumed 1.046s CPU time, 253.4M memory peak. May 14 00:02:23.687263 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:33574.service - OpenSSH per-connection server daemon (10.0.0.1:33574). May 14 00:02:23.735161 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 33574 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:02:23.737158 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:23.743825 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 00:02:23.745120 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 00:02:23.751370 systemd-logind[1494]: New session 1 of user core. May 14 00:02:23.771850 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 00:02:23.775507 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 00:02:23.799905 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:02:23.802731 systemd-logind[1494]: New session c1 of user core. 
May 14 00:02:23.952715 systemd[1624]: Queued start job for default target default.target. May 14 00:02:23.968709 systemd[1624]: Created slice app.slice - User Application Slice. May 14 00:02:23.968740 systemd[1624]: Reached target paths.target - Paths. May 14 00:02:23.968806 systemd[1624]: Reached target timers.target - Timers. May 14 00:02:23.970515 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 00:02:23.982822 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 00:02:23.982983 systemd[1624]: Reached target sockets.target - Sockets. May 14 00:02:23.983040 systemd[1624]: Reached target basic.target - Basic System. May 14 00:02:23.983097 systemd[1624]: Reached target default.target - Main User Target. May 14 00:02:23.983140 systemd[1624]: Startup finished in 173ms. May 14 00:02:23.983446 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 00:02:23.985267 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 00:02:24.051921 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:33576.service - OpenSSH per-connection server daemon (10.0.0.1:33576). May 14 00:02:24.105574 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 33576 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:02:24.107047 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:24.111898 systemd-logind[1494]: New session 2 of user core. May 14 00:02:24.125605 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 00:02:24.181971 sshd[1637]: Connection closed by 10.0.0.1 port 33576 May 14 00:02:24.182458 sshd-session[1635]: pam_unix(sshd:session): session closed for user core May 14 00:02:24.194649 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:33576.service: Deactivated successfully. May 14 00:02:24.196766 systemd[1]: session-2.scope: Deactivated successfully. 
May 14 00:02:24.198667 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit. May 14 00:02:24.199963 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:33592.service - OpenSSH per-connection server daemon (10.0.0.1:33592). May 14 00:02:24.200760 systemd-logind[1494]: Removed session 2. May 14 00:02:24.254542 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 33592 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:02:24.256771 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:24.261919 systemd-logind[1494]: New session 3 of user core. May 14 00:02:24.271408 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 00:02:24.323568 sshd[1645]: Connection closed by 10.0.0.1 port 33592 May 14 00:02:24.324224 sshd-session[1642]: pam_unix(sshd:session): session closed for user core May 14 00:02:24.338996 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:33592.service: Deactivated successfully. May 14 00:02:24.340934 systemd[1]: session-3.scope: Deactivated successfully. May 14 00:02:24.342819 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit. May 14 00:02:24.344220 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:33596.service - OpenSSH per-connection server daemon (10.0.0.1:33596). May 14 00:02:24.345231 systemd-logind[1494]: Removed session 3. May 14 00:02:24.401035 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 33596 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:02:24.403101 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:24.408532 systemd-logind[1494]: New session 4 of user core. May 14 00:02:24.421615 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 14 00:02:24.477807 sshd[1653]: Connection closed by 10.0.0.1 port 33596 May 14 00:02:24.478212 sshd-session[1650]: pam_unix(sshd:session): session closed for user core May 14 00:02:24.488952 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:33596.service: Deactivated successfully. May 14 00:02:24.491620 systemd[1]: session-4.scope: Deactivated successfully. May 14 00:02:24.493516 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit. May 14 00:02:24.495234 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:33604.service - OpenSSH per-connection server daemon (10.0.0.1:33604). May 14 00:02:24.496443 systemd-logind[1494]: Removed session 4. May 14 00:02:24.560493 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 33604 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:02:24.562642 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:24.567785 systemd-logind[1494]: New session 5 of user core. May 14 00:02:24.583440 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 00:02:24.644689 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 00:02:24.645028 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:02:24.646031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 00:02:24.647795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:24.663891 sudo[1662]: pam_unix(sudo:session): session closed for user root May 14 00:02:24.665706 sshd[1661]: Connection closed by 10.0.0.1 port 33604 May 14 00:02:24.666414 sshd-session[1658]: pam_unix(sshd:session): session closed for user core May 14 00:02:24.685097 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:33604.service: Deactivated successfully. May 14 00:02:24.687545 systemd[1]: session-5.scope: Deactivated successfully. 
May 14 00:02:24.690037 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit. May 14 00:02:24.691781 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:33608.service - OpenSSH per-connection server daemon (10.0.0.1:33608). May 14 00:02:24.692752 systemd-logind[1494]: Removed session 5. May 14 00:02:24.749604 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 33608 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:02:24.751833 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:24.757329 systemd-logind[1494]: New session 6 of user core. May 14 00:02:24.767603 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 00:02:24.822938 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 00:02:24.823261 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:02:24.827938 sudo[1675]: pam_unix(sudo:session): session closed for user root May 14 00:02:24.835703 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 00:02:24.836126 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:02:24.871116 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:02:24.872812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:24.885922 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:02:24.926781 augenrules[1710]: No rules May 14 00:02:24.928314 systemd[1]: audit-rules.service: Deactivated successfully. May 14 00:02:24.928623 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 14 00:02:24.930289 sudo[1674]: pam_unix(sudo:session): session closed for user root May 14 00:02:24.931804 sshd[1673]: Connection closed by 10.0.0.1 port 33608 May 14 00:02:24.932213 sshd-session[1670]: pam_unix(sshd:session): session closed for user core May 14 00:02:24.946681 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:33608.service: Deactivated successfully. May 14 00:02:24.949475 systemd[1]: session-6.scope: Deactivated successfully. May 14 00:02:24.951851 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit. May 14 00:02:24.953985 systemd[1]: Started sshd@7-10.0.0.106:22-10.0.0.1:33616.service - OpenSSH per-connection server daemon (10.0.0.1:33616). May 14 00:02:24.955144 systemd-logind[1494]: Removed session 6. May 14 00:02:24.973615 kubelet[1683]: E0514 00:02:24.973547 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:02:24.981489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:02:24.981723 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:02:24.982129 systemd[1]: kubelet.service: Consumed 277ms CPU time, 105.7M memory peak. May 14 00:02:25.007476 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 33616 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:02:25.009214 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:02:25.015051 systemd-logind[1494]: New session 7 of user core. May 14 00:02:25.024471 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 14 00:02:25.081019 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:02:25.081405 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 00:02:25.754493 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 00:02:25.768996 (dockerd)[1743]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 00:02:26.573358 dockerd[1743]: time="2025-05-14T00:02:26.573255080Z" level=info msg="Starting up" May 14 00:02:26.581091 dockerd[1743]: time="2025-05-14T00:02:26.580690126Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 00:02:26.773347 dockerd[1743]: time="2025-05-14T00:02:26.771606640Z" level=info msg="Loading containers: start." May 14 00:02:27.125325 kernel: Initializing XFRM netlink socket May 14 00:02:27.259136 systemd-networkd[1424]: docker0: Link UP May 14 00:02:27.380717 dockerd[1743]: time="2025-05-14T00:02:27.379987038Z" level=info msg="Loading containers: done." May 14 00:02:27.509662 dockerd[1743]: time="2025-05-14T00:02:27.509074690Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 00:02:27.509662 dockerd[1743]: time="2025-05-14T00:02:27.509208471Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 14 00:02:27.509662 dockerd[1743]: time="2025-05-14T00:02:27.509379371Z" level=info msg="Daemon has completed initialization" May 14 00:02:27.686046 dockerd[1743]: time="2025-05-14T00:02:27.683958562Z" level=info msg="API listen on /run/docker.sock" May 14 00:02:27.684979 systemd[1]: Started docker.service - Docker Application Container Engine. 
May 14 00:02:29.092198 containerd[1502]: time="2025-05-14T00:02:29.092140619Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 14 00:02:32.146433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097957211.mount: Deactivated successfully. May 14 00:02:34.657856 containerd[1502]: time="2025-05-14T00:02:34.657776791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:34.659383 containerd[1502]: time="2025-05-14T00:02:34.659320768Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 14 00:02:34.660708 containerd[1502]: time="2025-05-14T00:02:34.660680117Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:34.663507 containerd[1502]: time="2025-05-14T00:02:34.663474699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:34.664671 containerd[1502]: time="2025-05-14T00:02:34.664629635Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 5.572436608s" May 14 00:02:34.664671 containerd[1502]: time="2025-05-14T00:02:34.664665663Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 14 00:02:34.665473 containerd[1502]: 
time="2025-05-14T00:02:34.665454242Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 14 00:02:35.232177 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 00:02:35.237306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:35.469469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:35.488673 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:02:35.765576 kubelet[2012]: E0514 00:02:35.765424 2012 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:02:35.770004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:02:35.770217 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:02:35.770651 systemd[1]: kubelet.service: Consumed 295ms CPU time, 106.2M memory peak. 
May 14 00:02:37.153402 containerd[1502]: time="2025-05-14T00:02:37.153326730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:37.154894 containerd[1502]: time="2025-05-14T00:02:37.154840880Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 14 00:02:37.160709 containerd[1502]: time="2025-05-14T00:02:37.160643785Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:37.163450 containerd[1502]: time="2025-05-14T00:02:37.163402850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:37.164432 containerd[1502]: time="2025-05-14T00:02:37.164378119Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.498896466s" May 14 00:02:37.164432 containerd[1502]: time="2025-05-14T00:02:37.164419717Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 14 00:02:37.164990 containerd[1502]: time="2025-05-14T00:02:37.164956684Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 14 00:02:39.596012 containerd[1502]: time="2025-05-14T00:02:39.595902754Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:39.597044 containerd[1502]: time="2025-05-14T00:02:39.596981858Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 14 00:02:39.598781 containerd[1502]: time="2025-05-14T00:02:39.598700402Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:39.601598 containerd[1502]: time="2025-05-14T00:02:39.601541571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:39.602580 containerd[1502]: time="2025-05-14T00:02:39.602536467Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.437538726s" May 14 00:02:39.602580 containerd[1502]: time="2025-05-14T00:02:39.602577334Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 14 00:02:39.603210 containerd[1502]: time="2025-05-14T00:02:39.603154687Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 14 00:02:41.072632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558654155.mount: Deactivated successfully. 
May 14 00:02:42.871525 containerd[1502]: time="2025-05-14T00:02:42.871426541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:42.872254 containerd[1502]: time="2025-05-14T00:02:42.872194712Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 14 00:02:42.873545 containerd[1502]: time="2025-05-14T00:02:42.873461187Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:42.878090 containerd[1502]: time="2025-05-14T00:02:42.878040697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:42.878761 containerd[1502]: time="2025-05-14T00:02:42.878723137Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 3.275529306s" May 14 00:02:42.878815 containerd[1502]: time="2025-05-14T00:02:42.878769714Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 14 00:02:42.879375 containerd[1502]: time="2025-05-14T00:02:42.879351205Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 14 00:02:43.650806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3640658779.mount: Deactivated successfully. 
May 14 00:02:45.095683 containerd[1502]: time="2025-05-14T00:02:45.095611468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:45.097110 containerd[1502]: time="2025-05-14T00:02:45.097023757Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 14 00:02:45.104481 containerd[1502]: time="2025-05-14T00:02:45.104395468Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:45.108449 containerd[1502]: time="2025-05-14T00:02:45.108376558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:45.109964 containerd[1502]: time="2025-05-14T00:02:45.109886508Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.23049915s" May 14 00:02:45.109964 containerd[1502]: time="2025-05-14T00:02:45.109948235Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 14 00:02:45.110632 containerd[1502]: time="2025-05-14T00:02:45.110579273Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 00:02:46.020959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 00:02:46.023010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 14 00:02:46.066631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount729044071.mount: Deactivated successfully. May 14 00:02:46.250446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:46.255061 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:02:46.484043 kubelet[2101]: E0514 00:02:46.483896 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:02:46.488680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:02:46.488907 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:02:46.489256 systemd[1]: kubelet.service: Consumed 241ms CPU time, 106.5M memory peak. 
May 14 00:02:46.760396 containerd[1502]: time="2025-05-14T00:02:46.760192016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:02:46.761513 containerd[1502]: time="2025-05-14T00:02:46.761431110Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 00:02:46.763212 containerd[1502]: time="2025-05-14T00:02:46.763171519Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:02:46.765264 containerd[1502]: time="2025-05-14T00:02:46.765205587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:02:46.765727 containerd[1502]: time="2025-05-14T00:02:46.765697777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.65507354s" May 14 00:02:46.765727 containerd[1502]: time="2025-05-14T00:02:46.765728091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 14 00:02:46.766306 containerd[1502]: time="2025-05-14T00:02:46.766252359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 14 00:02:47.358926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3372695855.mount: Deactivated 
successfully. May 14 00:02:56.206647 containerd[1502]: time="2025-05-14T00:02:56.206555534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:56.221500 containerd[1502]: time="2025-05-14T00:02:56.220457331Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 14 00:02:56.234178 containerd[1502]: time="2025-05-14T00:02:56.234098745Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:56.257050 containerd[1502]: time="2025-05-14T00:02:56.256949364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:02:56.258402 containerd[1502]: time="2025-05-14T00:02:56.258340835Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 9.492034634s" May 14 00:02:56.258402 containerd[1502]: time="2025-05-14T00:02:56.258384477Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 14 00:02:56.650401 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 14 00:02:56.652342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:56.876989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 00:02:56.891714 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:02:56.939852 kubelet[2194]: E0514 00:02:56.939675 2194 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:02:56.944644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:02:56.944895 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:02:56.945409 systemd[1]: kubelet.service: Consumed 252ms CPU time, 104.5M memory peak. May 14 00:02:57.236811 update_engine[1495]: I20250514 00:02:57.236636 1495 update_attempter.cc:509] Updating boot flags... May 14 00:02:57.772314 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2210) May 14 00:02:57.832310 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2214) May 14 00:02:58.684400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:58.684642 systemd[1]: kubelet.service: Consumed 252ms CPU time, 104.5M memory peak. May 14 00:02:58.687851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:58.718375 systemd[1]: Reload requested from client PID 2224 ('systemctl') (unit session-7.scope)... May 14 00:02:58.718389 systemd[1]: Reloading... May 14 00:02:58.827319 zram_generator::config[2270]: No configuration found. May 14 00:02:59.702254 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 14 00:02:59.808896 systemd[1]: Reloading finished in 1090 ms. May 14 00:02:59.872635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:59.875605 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:02:59.877826 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:02:59.878091 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:02:59.878127 systemd[1]: kubelet.service: Consumed 165ms CPU time, 91.8M memory peak. May 14 00:02:59.879756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:03:00.080317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:03:00.092777 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:03:00.146178 kubelet[2317]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:03:00.146178 kubelet[2317]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 00:03:00.146178 kubelet[2317]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 00:03:00.146645 kubelet[2317]: I0514 00:03:00.146226 2317 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:03:00.561798 kubelet[2317]: I0514 00:03:00.561730 2317 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 00:03:00.561798 kubelet[2317]: I0514 00:03:00.561772 2317 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:03:00.562060 kubelet[2317]: I0514 00:03:00.562032 2317 server.go:954] "Client rotation is on, will bootstrap in background" May 14 00:03:00.693766 kubelet[2317]: E0514 00:03:00.693697 2317 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:00.696363 kubelet[2317]: I0514 00:03:00.696321 2317 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:03:00.711963 kubelet[2317]: I0514 00:03:00.711918 2317 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 00:03:00.718765 kubelet[2317]: I0514 00:03:00.718717 2317 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:03:00.747118 kubelet[2317]: I0514 00:03:00.747016 2317 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:03:00.747436 kubelet[2317]: I0514 00:03:00.747096 2317 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 00:03:00.747436 kubelet[2317]: I0514 00:03:00.747434 2317 topology_manager.go:138] "Creating topology manager with none policy" 
May 14 00:03:00.747610 kubelet[2317]: I0514 00:03:00.747449 2317 container_manager_linux.go:304] "Creating device plugin manager" May 14 00:03:00.747666 kubelet[2317]: I0514 00:03:00.747645 2317 state_mem.go:36] "Initialized new in-memory state store" May 14 00:03:00.763436 kubelet[2317]: I0514 00:03:00.763386 2317 kubelet.go:446] "Attempting to sync node with API server" May 14 00:03:00.763436 kubelet[2317]: I0514 00:03:00.763414 2317 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:03:00.763436 kubelet[2317]: I0514 00:03:00.763443 2317 kubelet.go:352] "Adding apiserver pod source" May 14 00:03:00.795243 kubelet[2317]: I0514 00:03:00.795161 2317 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:03:00.841060 kubelet[2317]: I0514 00:03:00.840932 2317 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 00:03:00.841626 kubelet[2317]: I0514 00:03:00.841588 2317 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:03:00.842593 kubelet[2317]: W0514 00:03:00.842571 2317 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 14 00:03:00.842625 kubelet[2317]: W0514 00:03:00.842576 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused May 14 00:03:00.842687 kubelet[2317]: E0514 00:03:00.842653 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:00.847074 kubelet[2317]: I0514 00:03:00.847010 2317 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 00:03:00.847136 kubelet[2317]: I0514 00:03:00.847106 2317 server.go:1287] "Started kubelet" May 14 00:03:00.847669 kubelet[2317]: I0514 00:03:00.847634 2317 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:03:00.848202 kubelet[2317]: I0514 00:03:00.848146 2317 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:03:00.849056 kubelet[2317]: I0514 00:03:00.849024 2317 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:03:00.850857 kubelet[2317]: I0514 00:03:00.850818 2317 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:03:00.851489 kubelet[2317]: I0514 00:03:00.851456 2317 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:03:00.855829 kubelet[2317]: I0514 00:03:00.855802 2317 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 00:03:00.856943 kubelet[2317]: E0514 00:03:00.856916 2317 kubelet_node_status.go:467] "Error getting the current node 
from lister" err="node \"localhost\" not found" May 14 00:03:00.861427 kubelet[2317]: E0514 00:03:00.861383 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="200ms" May 14 00:03:00.861742 kubelet[2317]: I0514 00:03:00.861693 2317 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:03:00.861861 kubelet[2317]: I0514 00:03:00.861771 2317 reconciler.go:26] "Reconciler: start to sync state" May 14 00:03:00.861861 kubelet[2317]: I0514 00:03:00.861787 2317 factory.go:221] Registration of the systemd container factory successfully May 14 00:03:00.861949 kubelet[2317]: I0514 00:03:00.861891 2317 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:03:00.863895 kubelet[2317]: W0514 00:03:00.862552 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused May 14 00:03:00.863895 kubelet[2317]: E0514 00:03:00.862608 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:00.863895 kubelet[2317]: W0514 00:03:00.862680 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
10.0.0.106:6443: connect: connection refused May 14 00:03:00.863895 kubelet[2317]: E0514 00:03:00.862706 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:00.864075 kubelet[2317]: E0514 00:03:00.861466 2317 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3bd859b524f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:03:00.847060214 +0000 UTC m=+0.749849836,LastTimestamp:2025-05-14 00:03:00.847060214 +0000 UTC m=+0.749849836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:03:00.864075 kubelet[2317]: I0514 00:03:00.863033 2317 server.go:490] "Adding debug handlers to kubelet server" May 14 00:03:00.864315 kubelet[2317]: I0514 00:03:00.864291 2317 factory.go:221] Registration of the containerd container factory successfully May 14 00:03:00.865004 kubelet[2317]: E0514 00:03:00.864981 2317 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:03:00.880548 kubelet[2317]: I0514 00:03:00.880518 2317 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 00:03:00.880548 kubelet[2317]: I0514 00:03:00.880538 2317 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 00:03:00.880656 kubelet[2317]: I0514 00:03:00.880557 2317 state_mem.go:36] "Initialized new in-memory state store" May 14 00:03:00.899530 kubelet[2317]: I0514 00:03:00.899489 2317 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:03:00.901120 kubelet[2317]: I0514 00:03:00.901046 2317 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:03:00.901120 kubelet[2317]: I0514 00:03:00.901077 2317 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 00:03:00.901120 kubelet[2317]: I0514 00:03:00.901094 2317 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 14 00:03:00.901120 kubelet[2317]: I0514 00:03:00.901100 2317 kubelet.go:2388] "Starting kubelet main sync loop" May 14 00:03:00.901271 kubelet[2317]: E0514 00:03:00.901143 2317 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:03:00.902383 kubelet[2317]: W0514 00:03:00.901810 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused May 14 00:03:00.902383 kubelet[2317]: E0514 00:03:00.901839 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:00.961429 kubelet[2317]: E0514 00:03:00.961345 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:01.001792 kubelet[2317]: E0514 00:03:01.001743 2317 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 00:03:01.062052 kubelet[2317]: E0514 00:03:01.061987 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:01.066342 kubelet[2317]: E0514 00:03:01.066260 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="400ms" May 14 00:03:01.162778 kubelet[2317]: E0514 00:03:01.162716 2317 kubelet_node_status.go:467] "Error getting 
the current node from lister" err="node \"localhost\" not found" May 14 00:03:01.202271 kubelet[2317]: E0514 00:03:01.202224 2317 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 00:03:01.263547 kubelet[2317]: E0514 00:03:01.263497 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:01.364351 kubelet[2317]: E0514 00:03:01.364294 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:01.465135 kubelet[2317]: E0514 00:03:01.464951 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:01.467690 kubelet[2317]: E0514 00:03:01.467635 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="800ms" May 14 00:03:01.566117 kubelet[2317]: E0514 00:03:01.566057 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:01.603439 kubelet[2317]: E0514 00:03:01.603348 2317 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 00:03:01.666830 kubelet[2317]: E0514 00:03:01.666768 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:01.756940 kubelet[2317]: W0514 00:03:01.756802 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused May 14 00:03:01.756940 kubelet[2317]: E0514 00:03:01.756851 2317 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:01.767504 kubelet[2317]: E0514 00:03:01.767466 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:01.867834 kubelet[2317]: E0514 00:03:01.867775 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:01.917718 kubelet[2317]: W0514 00:03:01.917657 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused May 14 00:03:01.917861 kubelet[2317]: E0514 00:03:01.917735 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:01.917861 kubelet[2317]: W0514 00:03:01.917665 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused May 14 00:03:01.917861 kubelet[2317]: E0514 00:03:01.917789 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": 
dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:01.968407 kubelet[2317]: E0514 00:03:01.968344 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:02.069134 kubelet[2317]: E0514 00:03:02.068950 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:02.169673 kubelet[2317]: E0514 00:03:02.169608 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:02.268966 kubelet[2317]: E0514 00:03:02.268916 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="1.6s" May 14 00:03:02.270056 kubelet[2317]: E0514 00:03:02.270000 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:02.281719 kubelet[2317]: W0514 00:03:02.281649 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused May 14 00:03:02.281719 kubelet[2317]: E0514 00:03:02.281710 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:02.370705 kubelet[2317]: E0514 00:03:02.370549 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:02.403814 
kubelet[2317]: E0514 00:03:02.403761 2317 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 00:03:02.471411 kubelet[2317]: E0514 00:03:02.471359 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:02.524032 kubelet[2317]: I0514 00:03:02.523975 2317 policy_none.go:49] "None policy: Start" May 14 00:03:02.524032 kubelet[2317]: I0514 00:03:02.524014 2317 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 00:03:02.524032 kubelet[2317]: I0514 00:03:02.524032 2317 state_mem.go:35] "Initializing new in-memory state store" May 14 00:03:02.547811 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 00:03:02.561682 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 00:03:02.565620 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 00:03:02.572106 kubelet[2317]: E0514 00:03:02.572058 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:02.579697 kubelet[2317]: I0514 00:03:02.579654 2317 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:03:02.579958 kubelet[2317]: I0514 00:03:02.579931 2317 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:03:02.579958 kubelet[2317]: I0514 00:03:02.579946 2317 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:03:02.580769 kubelet[2317]: I0514 00:03:02.580228 2317 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:03:02.581241 kubelet[2317]: E0514 00:03:02.581219 2317 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. 
Ignoring." err="no imagefs label for configured runtime" May 14 00:03:02.581319 kubelet[2317]: E0514 00:03:02.581270 2317 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 00:03:02.682475 kubelet[2317]: I0514 00:03:02.682433 2317 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 00:03:02.682885 kubelet[2317]: E0514 00:03:02.682829 2317 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" May 14 00:03:02.706453 kubelet[2317]: E0514 00:03:02.706372 2317 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:02.885015 kubelet[2317]: I0514 00:03:02.884971 2317 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 00:03:02.885369 kubelet[2317]: E0514 00:03:02.885343 2317 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" May 14 00:03:03.287576 kubelet[2317]: I0514 00:03:03.287539 2317 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 00:03:03.288107 kubelet[2317]: E0514 00:03:03.287989 2317 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" May 14 00:03:03.739419 kubelet[2317]: W0514 00:03:03.739342 2317 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused May 14 00:03:03.739419 kubelet[2317]: E0514 00:03:03.739416 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:03.870130 kubelet[2317]: E0514 00:03:03.870052 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="3.2s" May 14 00:03:04.015091 systemd[1]: Created slice kubepods-burstable-pod4dda6e8fc20d0ad07583285be8666834.slice - libcontainer container kubepods-burstable-pod4dda6e8fc20d0ad07583285be8666834.slice. 
May 14 00:03:04.017004 kubelet[2317]: W0514 00:03:04.016964 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused May 14 00:03:04.017070 kubelet[2317]: E0514 00:03:04.017015 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:04.038780 kubelet[2317]: E0514 00:03:04.038715 2317 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:03:04.041941 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 14 00:03:04.051336 kubelet[2317]: E0514 00:03:04.051286 2317 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:03:04.054658 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 14 00:03:04.056706 kubelet[2317]: E0514 00:03:04.056665 2317 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:03:04.074787 kubelet[2317]: W0514 00:03:04.074688 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused May 14 00:03:04.074787 kubelet[2317]: E0514 00:03:04.074778 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:04.082437 kubelet[2317]: I0514 00:03:04.082325 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4dda6e8fc20d0ad07583285be8666834-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4dda6e8fc20d0ad07583285be8666834\") " pod="kube-system/kube-apiserver-localhost" May 14 00:03:04.082437 kubelet[2317]: I0514 00:03:04.082383 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:04.082437 kubelet[2317]: I0514 00:03:04.082440 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:04.082744 kubelet[2317]: I0514 00:03:04.082475 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:04.082744 kubelet[2317]: I0514 00:03:04.082506 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4dda6e8fc20d0ad07583285be8666834-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4dda6e8fc20d0ad07583285be8666834\") " pod="kube-system/kube-apiserver-localhost" May 14 00:03:04.082744 kubelet[2317]: I0514 00:03:04.082533 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:04.082744 kubelet[2317]: I0514 00:03:04.082559 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:04.082744 kubelet[2317]: I0514 00:03:04.082621 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 14 00:03:04.082907 kubelet[2317]: I0514 00:03:04.082664 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4dda6e8fc20d0ad07583285be8666834-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4dda6e8fc20d0ad07583285be8666834\") " pod="kube-system/kube-apiserver-localhost" May 14 00:03:04.090455 kubelet[2317]: I0514 00:03:04.090409 2317 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 00:03:04.090962 kubelet[2317]: E0514 00:03:04.090884 2317 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" May 14 00:03:04.258111 kubelet[2317]: E0514 00:03:04.257958 2317 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3bd859b524f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:03:00.847060214 +0000 UTC m=+0.749849836,LastTimestamp:2025-05-14 00:03:00.847060214 +0000 UTC m=+0.749849836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:03:04.340308 kubelet[2317]: E0514 00:03:04.340102 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:04.341118 containerd[1502]: time="2025-05-14T00:03:04.341052569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4dda6e8fc20d0ad07583285be8666834,Namespace:kube-system,Attempt:0,}" May 14 00:03:04.352856 kubelet[2317]: E0514 00:03:04.352784 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:04.353653 containerd[1502]: time="2025-05-14T00:03:04.353550102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 14 00:03:04.357997 kubelet[2317]: E0514 00:03:04.357926 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:04.358581 containerd[1502]: time="2025-05-14T00:03:04.358527608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 14 00:03:04.956032 kubelet[2317]: W0514 00:03:04.955956 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused May 14 00:03:04.956032 kubelet[2317]: E0514 00:03:04.956034 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" May 14 00:03:05.061224 
containerd[1502]: time="2025-05-14T00:03:05.061152756Z" level=info msg="connecting to shim 7f09922dbfa84a40069ffe89b7be986c49d234ed3f4a29a9ccd21b2d89a1e064" address="unix:///run/containerd/s/2304741da87167b0fb2e1bc41bcab160bf4c61aa48d953f54f1acbf34c8908a1" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:05.115695 systemd[1]: Started cri-containerd-7f09922dbfa84a40069ffe89b7be986c49d234ed3f4a29a9ccd21b2d89a1e064.scope - libcontainer container 7f09922dbfa84a40069ffe89b7be986c49d234ed3f4a29a9ccd21b2d89a1e064. May 14 00:03:05.150192 containerd[1502]: time="2025-05-14T00:03:05.150126986Z" level=info msg="connecting to shim 404c1404a3def3f126101305074f28b8ad6190b73c40ccd043714200a46b620c" address="unix:///run/containerd/s/bf063535f8520ad66e0ebc3eaf3c4832297f0a5bc973e201783b77a3c067e4c4" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:05.196460 systemd[1]: Started cri-containerd-404c1404a3def3f126101305074f28b8ad6190b73c40ccd043714200a46b620c.scope - libcontainer container 404c1404a3def3f126101305074f28b8ad6190b73c40ccd043714200a46b620c. 
May 14 00:03:05.223354 containerd[1502]: time="2025-05-14T00:03:05.223064692Z" level=info msg="connecting to shim 6b3fbf5f7da2ad57e69ae57849866bc7239614193945cb2cef55d727f83475df" address="unix:///run/containerd/s/8a0e728983f5e2d8b65bd669c00c8bb1afd9c8d9db2bb7fc52a3267b6096ed30" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:05.243975 containerd[1502]: time="2025-05-14T00:03:05.243914594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4dda6e8fc20d0ad07583285be8666834,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f09922dbfa84a40069ffe89b7be986c49d234ed3f4a29a9ccd21b2d89a1e064\"" May 14 00:03:05.245626 kubelet[2317]: E0514 00:03:05.245527 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:05.252794 containerd[1502]: time="2025-05-14T00:03:05.251600913Z" level=info msg="CreateContainer within sandbox \"7f09922dbfa84a40069ffe89b7be986c49d234ed3f4a29a9ccd21b2d89a1e064\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:03:05.269551 systemd[1]: Started cri-containerd-6b3fbf5f7da2ad57e69ae57849866bc7239614193945cb2cef55d727f83475df.scope - libcontainer container 6b3fbf5f7da2ad57e69ae57849866bc7239614193945cb2cef55d727f83475df. 
May 14 00:03:05.302381 containerd[1502]: time="2025-05-14T00:03:05.302311221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"404c1404a3def3f126101305074f28b8ad6190b73c40ccd043714200a46b620c\"" May 14 00:03:05.303170 kubelet[2317]: E0514 00:03:05.302982 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:05.305018 containerd[1502]: time="2025-05-14T00:03:05.304985483Z" level=info msg="CreateContainer within sandbox \"404c1404a3def3f126101305074f28b8ad6190b73c40ccd043714200a46b620c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:03:05.355690 containerd[1502]: time="2025-05-14T00:03:05.355626569Z" level=info msg="Container de4f9366d4a3aa7f15cd468af406a3940b2f8e63a385505692377a662705915a: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:05.375057 containerd[1502]: time="2025-05-14T00:03:05.375001552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b3fbf5f7da2ad57e69ae57849866bc7239614193945cb2cef55d727f83475df\"" May 14 00:03:05.375706 kubelet[2317]: E0514 00:03:05.375681 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:05.376988 containerd[1502]: time="2025-05-14T00:03:05.376963225Z" level=info msg="CreateContainer within sandbox \"6b3fbf5f7da2ad57e69ae57849866bc7239614193945cb2cef55d727f83475df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:03:05.468773 containerd[1502]: time="2025-05-14T00:03:05.468716072Z" level=info msg="Container 
b6da1db385b752a947f8bb404bed9d6d6d37703092f0ef171768e3a67af4e383: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:05.532253 containerd[1502]: time="2025-05-14T00:03:05.532112553Z" level=info msg="CreateContainer within sandbox \"7f09922dbfa84a40069ffe89b7be986c49d234ed3f4a29a9ccd21b2d89a1e064\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de4f9366d4a3aa7f15cd468af406a3940b2f8e63a385505692377a662705915a\"" May 14 00:03:05.532876 containerd[1502]: time="2025-05-14T00:03:05.532844237Z" level=info msg="StartContainer for \"de4f9366d4a3aa7f15cd468af406a3940b2f8e63a385505692377a662705915a\"" May 14 00:03:05.534334 containerd[1502]: time="2025-05-14T00:03:05.534312224Z" level=info msg="connecting to shim de4f9366d4a3aa7f15cd468af406a3940b2f8e63a385505692377a662705915a" address="unix:///run/containerd/s/2304741da87167b0fb2e1bc41bcab160bf4c61aa48d953f54f1acbf34c8908a1" protocol=ttrpc version=3 May 14 00:03:05.559604 systemd[1]: Started cri-containerd-de4f9366d4a3aa7f15cd468af406a3940b2f8e63a385505692377a662705915a.scope - libcontainer container de4f9366d4a3aa7f15cd468af406a3940b2f8e63a385505692377a662705915a. 
May 14 00:03:05.651780 containerd[1502]: time="2025-05-14T00:03:05.651728380Z" level=info msg="StartContainer for \"de4f9366d4a3aa7f15cd468af406a3940b2f8e63a385505692377a662705915a\" returns successfully" May 14 00:03:05.652189 containerd[1502]: time="2025-05-14T00:03:05.651850333Z" level=info msg="CreateContainer within sandbox \"404c1404a3def3f126101305074f28b8ad6190b73c40ccd043714200a46b620c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b6da1db385b752a947f8bb404bed9d6d6d37703092f0ef171768e3a67af4e383\"" May 14 00:03:05.652885 containerd[1502]: time="2025-05-14T00:03:05.652840953Z" level=info msg="StartContainer for \"b6da1db385b752a947f8bb404bed9d6d6d37703092f0ef171768e3a67af4e383\"" May 14 00:03:05.654266 containerd[1502]: time="2025-05-14T00:03:05.654200558Z" level=info msg="connecting to shim b6da1db385b752a947f8bb404bed9d6d6d37703092f0ef171768e3a67af4e383" address="unix:///run/containerd/s/bf063535f8520ad66e0ebc3eaf3c4832297f0a5bc973e201783b77a3c067e4c4" protocol=ttrpc version=3 May 14 00:03:05.655255 containerd[1502]: time="2025-05-14T00:03:05.655154403Z" level=info msg="Container cef1730a9827a0a7f215d92517a282d39a4fd4aa0cd21cfa643bab2be0efe816: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:05.683309 containerd[1502]: time="2025-05-14T00:03:05.681683131Z" level=info msg="CreateContainer within sandbox \"6b3fbf5f7da2ad57e69ae57849866bc7239614193945cb2cef55d727f83475df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cef1730a9827a0a7f215d92517a282d39a4fd4aa0cd21cfa643bab2be0efe816\"" May 14 00:03:05.683309 containerd[1502]: time="2025-05-14T00:03:05.682436333Z" level=info msg="StartContainer for \"cef1730a9827a0a7f215d92517a282d39a4fd4aa0cd21cfa643bab2be0efe816\"" May 14 00:03:05.684176 containerd[1502]: time="2025-05-14T00:03:05.684140743Z" level=info msg="connecting to shim cef1730a9827a0a7f215d92517a282d39a4fd4aa0cd21cfa643bab2be0efe816" 
address="unix:///run/containerd/s/8a0e728983f5e2d8b65bd669c00c8bb1afd9c8d9db2bb7fc52a3267b6096ed30" protocol=ttrpc version=3 May 14 00:03:05.693470 kubelet[2317]: I0514 00:03:05.693437 2317 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 00:03:05.767591 systemd[1]: Started cri-containerd-cef1730a9827a0a7f215d92517a282d39a4fd4aa0cd21cfa643bab2be0efe816.scope - libcontainer container cef1730a9827a0a7f215d92517a282d39a4fd4aa0cd21cfa643bab2be0efe816. May 14 00:03:05.792590 systemd[1]: Started cri-containerd-b6da1db385b752a947f8bb404bed9d6d6d37703092f0ef171768e3a67af4e383.scope - libcontainer container b6da1db385b752a947f8bb404bed9d6d6d37703092f0ef171768e3a67af4e383. May 14 00:03:05.866168 containerd[1502]: time="2025-05-14T00:03:05.866120159Z" level=info msg="StartContainer for \"cef1730a9827a0a7f215d92517a282d39a4fd4aa0cd21cfa643bab2be0efe816\" returns successfully" May 14 00:03:05.875572 containerd[1502]: time="2025-05-14T00:03:05.875519304Z" level=info msg="StartContainer for \"b6da1db385b752a947f8bb404bed9d6d6d37703092f0ef171768e3a67af4e383\" returns successfully" May 14 00:03:05.915371 kubelet[2317]: E0514 00:03:05.915322 2317 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:03:05.916600 kubelet[2317]: E0514 00:03:05.916574 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:05.922326 kubelet[2317]: E0514 00:03:05.922295 2317 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:03:05.922471 kubelet[2317]: E0514 00:03:05.922450 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:05.925765 kubelet[2317]: E0514 00:03:05.925738 2317 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:03:05.925859 kubelet[2317]: E0514 00:03:05.925840 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:06.929302 kubelet[2317]: E0514 00:03:06.928386 2317 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:03:06.929302 kubelet[2317]: E0514 00:03:06.928465 2317 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:03:06.929302 kubelet[2317]: E0514 00:03:06.928512 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:06.929302 kubelet[2317]: E0514 00:03:06.928606 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:06.929302 kubelet[2317]: E0514 00:03:06.929201 2317 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:03:06.929302 kubelet[2317]: E0514 00:03:06.929317 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:07.355087 kubelet[2317]: E0514 00:03:07.354560 2317 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"localhost\" not found" node="localhost" May 14 00:03:07.521044 kubelet[2317]: I0514 00:03:07.520986 2317 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 14 00:03:07.521044 kubelet[2317]: E0514 00:03:07.521039 2317 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 14 00:03:07.576234 kubelet[2317]: E0514 00:03:07.576180 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:07.676978 kubelet[2317]: E0514 00:03:07.676927 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:07.777342 kubelet[2317]: E0514 00:03:07.777264 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:07.878369 kubelet[2317]: E0514 00:03:07.878305 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:07.930248 kubelet[2317]: E0514 00:03:07.930127 2317 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 00:03:07.930248 kubelet[2317]: E0514 00:03:07.930244 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:07.979400 kubelet[2317]: E0514 00:03:07.979333 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:08.080408 kubelet[2317]: E0514 00:03:08.080352 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:08.181196 kubelet[2317]: E0514 00:03:08.181021 2317 kubelet_node_status.go:467] "Error getting the current node 
from lister" err="node \"localhost\" not found" May 14 00:03:08.282102 kubelet[2317]: E0514 00:03:08.282024 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:08.382619 kubelet[2317]: E0514 00:03:08.382575 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:08.483509 kubelet[2317]: E0514 00:03:08.483356 2317 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:08.563160 kubelet[2317]: I0514 00:03:08.563064 2317 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 14 00:03:08.586778 kubelet[2317]: I0514 00:03:08.586743 2317 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 00:03:08.605307 kubelet[2317]: I0514 00:03:08.603296 2317 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 00:03:08.806729 kubelet[2317]: I0514 00:03:08.806428 2317 apiserver.go:52] "Watching apiserver" May 14 00:03:08.809378 kubelet[2317]: E0514 00:03:08.809215 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:08.809378 kubelet[2317]: E0514 00:03:08.809381 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:08.861999 kubelet[2317]: I0514 00:03:08.861925 2317 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:03:08.930761 kubelet[2317]: E0514 00:03:08.930713 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:10.656046 systemd[1]: Reload requested from client PID 2593 ('systemctl') (unit session-7.scope)... May 14 00:03:10.656073 systemd[1]: Reloading... May 14 00:03:10.778869 zram_generator::config[2637]: No configuration found. May 14 00:03:10.926761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:03:10.969007 kubelet[2317]: I0514 00:03:10.968927 2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.968904846 podStartE2EDuration="2.968904846s" podCreationTimestamp="2025-05-14 00:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:03:10.922760317 +0000 UTC m=+10.825549959" watchObservedRunningTime="2025-05-14 00:03:10.968904846 +0000 UTC m=+10.871694478" May 14 00:03:10.969631 kubelet[2317]: I0514 00:03:10.969048 2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.969043695 podStartE2EDuration="2.969043695s" podCreationTimestamp="2025-05-14 00:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:03:10.968730365 +0000 UTC m=+10.871519997" watchObservedRunningTime="2025-05-14 00:03:10.969043695 +0000 UTC m=+10.871833327" May 14 00:03:11.063303 kubelet[2317]: I0514 00:03:11.061485 2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.061461294 podStartE2EDuration="3.061461294s" podCreationTimestamp="2025-05-14 00:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:03:11.061194626 +0000 UTC m=+10.963984258" watchObservedRunningTime="2025-05-14 00:03:11.061461294 +0000 UTC m=+10.964250926" May 14 00:03:11.063191 systemd[1]: Reloading finished in 406 ms. May 14 00:03:11.088041 kubelet[2317]: I0514 00:03:11.087948 2317 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:03:11.088383 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:03:11.108928 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:03:11.109247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:03:11.109322 systemd[1]: kubelet.service: Consumed 1.245s CPU time, 129.8M memory peak. May 14 00:03:11.111635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:03:11.335925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:03:11.344783 (kubelet)[2682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:03:11.392740 kubelet[2682]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:03:11.392740 kubelet[2682]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 00:03:11.392740 kubelet[2682]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 00:03:11.393167 kubelet[2682]: I0514 00:03:11.392818 2682 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:03:11.401309 kubelet[2682]: I0514 00:03:11.400539 2682 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 00:03:11.401309 kubelet[2682]: I0514 00:03:11.400575 2682 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:03:11.401309 kubelet[2682]: I0514 00:03:11.400866 2682 server.go:954] "Client rotation is on, will bootstrap in background" May 14 00:03:11.402337 kubelet[2682]: I0514 00:03:11.402308 2682 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 00:03:11.404560 kubelet[2682]: I0514 00:03:11.404513 2682 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:03:11.408609 kubelet[2682]: I0514 00:03:11.408589 2682 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 00:03:11.415882 kubelet[2682]: I0514 00:03:11.415842 2682 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:03:11.416161 kubelet[2682]: I0514 00:03:11.416119 2682 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:03:11.416393 kubelet[2682]: I0514 00:03:11.416160 2682 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 00:03:11.416523 kubelet[2682]: I0514 00:03:11.416396 2682 topology_manager.go:138] "Creating topology manager with none policy" 
May 14 00:03:11.416523 kubelet[2682]: I0514 00:03:11.416408 2682 container_manager_linux.go:304] "Creating device plugin manager" May 14 00:03:11.416523 kubelet[2682]: I0514 00:03:11.416454 2682 state_mem.go:36] "Initialized new in-memory state store" May 14 00:03:11.416672 kubelet[2682]: I0514 00:03:11.416649 2682 kubelet.go:446] "Attempting to sync node with API server" May 14 00:03:11.416732 kubelet[2682]: I0514 00:03:11.416675 2682 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:03:11.416732 kubelet[2682]: I0514 00:03:11.416699 2682 kubelet.go:352] "Adding apiserver pod source" May 14 00:03:11.416732 kubelet[2682]: I0514 00:03:11.416711 2682 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:03:11.418014 kubelet[2682]: I0514 00:03:11.417422 2682 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 00:03:11.418014 kubelet[2682]: I0514 00:03:11.417945 2682 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:03:11.418559 kubelet[2682]: I0514 00:03:11.418521 2682 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 00:03:11.418559 kubelet[2682]: I0514 00:03:11.418559 2682 server.go:1287] "Started kubelet" May 14 00:03:11.419937 kubelet[2682]: I0514 00:03:11.419527 2682 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:03:11.420601 kubelet[2682]: I0514 00:03:11.420585 2682 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:03:11.420816 kubelet[2682]: I0514 00:03:11.420768 2682 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:03:11.421090 kubelet[2682]: I0514 00:03:11.421070 2682 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:03:11.421606 kubelet[2682]: I0514 00:03:11.421568 2682 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 00:03:11.422981 kubelet[2682]: I0514 00:03:11.422947 2682 server.go:490] "Adding debug handlers to kubelet server" May 14 00:03:11.424167 kubelet[2682]: E0514 00:03:11.424149 2682 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:03:11.424307 kubelet[2682]: I0514 00:03:11.424270 2682 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 00:03:11.424635 kubelet[2682]: I0514 00:03:11.424616 2682 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:03:11.424897 kubelet[2682]: I0514 00:03:11.424881 2682 reconciler.go:26] "Reconciler: start to sync state" May 14 00:03:11.426074 kubelet[2682]: I0514 00:03:11.426055 2682 factory.go:221] Registration of the systemd container factory successfully May 14 00:03:11.426269 kubelet[2682]: I0514 00:03:11.426246 2682 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:03:11.428207 kubelet[2682]: I0514 00:03:11.428188 2682 factory.go:221] Registration of the containerd container factory successfully May 14 00:03:11.429073 kubelet[2682]: E0514 00:03:11.429044 2682 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:03:11.451092 kubelet[2682]: I0514 00:03:11.451035 2682 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:03:11.453060 kubelet[2682]: I0514 00:03:11.453012 2682 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 00:03:11.453383 kubelet[2682]: I0514 00:03:11.453367 2682 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 00:03:11.453543 kubelet[2682]: I0514 00:03:11.453527 2682 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 14 00:03:11.453631 kubelet[2682]: I0514 00:03:11.453618 2682 kubelet.go:2388] "Starting kubelet main sync loop" May 14 00:03:11.453776 kubelet[2682]: E0514 00:03:11.453753 2682 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:03:11.484722 kubelet[2682]: I0514 00:03:11.484672 2682 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 00:03:11.484722 kubelet[2682]: I0514 00:03:11.484691 2682 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 00:03:11.484722 kubelet[2682]: I0514 00:03:11.484715 2682 state_mem.go:36] "Initialized new in-memory state store" May 14 00:03:11.484923 kubelet[2682]: I0514 00:03:11.484914 2682 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 00:03:11.484946 kubelet[2682]: I0514 00:03:11.484925 2682 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 00:03:11.484946 kubelet[2682]: I0514 00:03:11.484943 2682 policy_none.go:49] "None policy: Start" May 14 00:03:11.485001 kubelet[2682]: I0514 00:03:11.484952 2682 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 00:03:11.485001 kubelet[2682]: I0514 00:03:11.484962 2682 state_mem.go:35] "Initializing new in-memory state store" May 14 00:03:11.485074 kubelet[2682]: I0514 00:03:11.485053 2682 state_mem.go:75] "Updated machine memory state" May 14 00:03:11.491451 kubelet[2682]: I0514 00:03:11.491419 2682 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:03:11.492351 kubelet[2682]: I0514 
00:03:11.491992 2682 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 00:03:11.492351 kubelet[2682]: I0514 00:03:11.492013 2682 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:03:11.492351 kubelet[2682]: I0514 00:03:11.492252 2682 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:03:11.493747 kubelet[2682]: E0514 00:03:11.493286 2682 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 14 00:03:11.554978 kubelet[2682]: I0514 00:03:11.554670 2682 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 00:03:11.554978 kubelet[2682]: I0514 00:03:11.554763 2682 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 14 00:03:11.554978 kubelet[2682]: I0514 00:03:11.554805 2682 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 00:03:11.597398 kubelet[2682]: I0514 00:03:11.597043 2682 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 00:03:11.601778 kubelet[2682]: E0514 00:03:11.601716 2682 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 00:03:11.622862 kubelet[2682]: E0514 00:03:11.622811 2682 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 00:03:11.623097 kubelet[2682]: E0514 00:03:11.622920 2682 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 14 00:03:11.626188 kubelet[2682]: I0514 00:03:11.626144 2682 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4dda6e8fc20d0ad07583285be8666834-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4dda6e8fc20d0ad07583285be8666834\") " pod="kube-system/kube-apiserver-localhost" May 14 00:03:11.626261 kubelet[2682]: I0514 00:03:11.626190 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4dda6e8fc20d0ad07583285be8666834-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4dda6e8fc20d0ad07583285be8666834\") " pod="kube-system/kube-apiserver-localhost" May 14 00:03:11.626261 kubelet[2682]: I0514 00:03:11.626218 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:11.626261 kubelet[2682]: I0514 00:03:11.626238 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:11.626401 kubelet[2682]: I0514 00:03:11.626351 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:11.626401 kubelet[2682]: I0514 00:03:11.626387 2682 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4dda6e8fc20d0ad07583285be8666834-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4dda6e8fc20d0ad07583285be8666834\") " pod="kube-system/kube-apiserver-localhost" May 14 00:03:11.626470 kubelet[2682]: I0514 00:03:11.626417 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:11.626470 kubelet[2682]: I0514 00:03:11.626441 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:03:11.626470 kubelet[2682]: I0514 00:03:11.626463 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 14 00:03:11.639816 kubelet[2682]: I0514 00:03:11.639093 2682 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 14 00:03:11.639816 kubelet[2682]: I0514 00:03:11.639211 2682 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 14 00:03:11.687608 sudo[2717]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 00:03:11.688119 sudo[2717]: pam_unix(sudo:session): 
session opened for user root(uid=0) by core(uid=0) May 14 00:03:11.902216 kubelet[2682]: E0514 00:03:11.902176 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:11.923961 kubelet[2682]: E0514 00:03:11.923920 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:11.924124 kubelet[2682]: E0514 00:03:11.924011 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:12.260912 sudo[2717]: pam_unix(sudo:session): session closed for user root May 14 00:03:12.418169 kubelet[2682]: I0514 00:03:12.418096 2682 apiserver.go:52] "Watching apiserver" May 14 00:03:12.425787 kubelet[2682]: I0514 00:03:12.425743 2682 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:03:12.474667 kubelet[2682]: I0514 00:03:12.474511 2682 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 00:03:12.474667 kubelet[2682]: I0514 00:03:12.474556 2682 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 00:03:12.475374 kubelet[2682]: E0514 00:03:12.474882 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:12.481291 kubelet[2682]: E0514 00:03:12.481238 2682 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 00:03:12.481576 kubelet[2682]: E0514 00:03:12.481427 2682 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:12.485310 kubelet[2682]: E0514 00:03:12.482696 2682 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 00:03:12.485310 kubelet[2682]: E0514 00:03:12.482821 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:13.476303 kubelet[2682]: E0514 00:03:13.476250 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:13.476787 kubelet[2682]: E0514 00:03:13.476322 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:13.841802 sudo[1723]: pam_unix(sudo:session): session closed for user root May 14 00:03:13.843665 sshd[1722]: Connection closed by 10.0.0.1 port 33616 May 14 00:03:13.844180 sshd-session[1718]: pam_unix(sshd:session): session closed for user core May 14 00:03:13.848223 systemd[1]: sshd@7-10.0.0.106:22-10.0.0.1:33616.service: Deactivated successfully. May 14 00:03:13.850797 systemd[1]: session-7.scope: Deactivated successfully. May 14 00:03:13.851059 systemd[1]: session-7.scope: Consumed 5.515s CPU time, 257.3M memory peak. May 14 00:03:13.852441 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit. May 14 00:03:13.853607 systemd-logind[1494]: Removed session 7. 
May 14 00:03:14.477500 kubelet[2682]: E0514 00:03:14.477460 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:16.058262 kubelet[2682]: I0514 00:03:16.058181 2682 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 00:03:16.058910 kubelet[2682]: I0514 00:03:16.058829 2682 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 00:03:16.058957 containerd[1502]: time="2025-05-14T00:03:16.058592039Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 00:03:17.023566 systemd[1]: Created slice kubepods-besteffort-pod92b64838_ac9f_4106_80f4_8cb177f0d65a.slice - libcontainer container kubepods-besteffort-pod92b64838_ac9f_4106_80f4_8cb177f0d65a.slice. May 14 00:03:17.036834 systemd[1]: Created slice kubepods-burstable-pod7f79bf59_b109_4ba9_82c3_b1542f5f6a02.slice - libcontainer container kubepods-burstable-pod7f79bf59_b109_4ba9_82c3_b1542f5f6a02.slice. 
May 14 00:03:17.062411 kubelet[2682]: I0514 00:03:17.062345 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cni-path\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064308 kubelet[2682]: I0514 00:03:17.063393 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-hubble-tls\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064308 kubelet[2682]: I0514 00:03:17.063541 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-etc-cni-netd\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064308 kubelet[2682]: I0514 00:03:17.063570 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-host-proc-sys-net\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064308 kubelet[2682]: I0514 00:03:17.063633 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92b64838-ac9f-4106-80f4-8cb177f0d65a-kube-proxy\") pod \"kube-proxy-kfkcn\" (UID: \"92b64838-ac9f-4106-80f4-8cb177f0d65a\") " pod="kube-system/kube-proxy-kfkcn" May 14 00:03:17.064308 kubelet[2682]: I0514 00:03:17.063654 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92b64838-ac9f-4106-80f4-8cb177f0d65a-lib-modules\") pod \"kube-proxy-kfkcn\" (UID: \"92b64838-ac9f-4106-80f4-8cb177f0d65a\") " pod="kube-system/kube-proxy-kfkcn" May 14 00:03:17.064308 kubelet[2682]: I0514 00:03:17.063718 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-bpf-maps\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064537 kubelet[2682]: I0514 00:03:17.063826 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-hostproc\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064537 kubelet[2682]: I0514 00:03:17.063897 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-xtables-lock\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064537 kubelet[2682]: I0514 00:03:17.063928 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cilium-config-path\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064537 kubelet[2682]: I0514 00:03:17.063947 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-clustermesh-secrets\") pod 
\"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064537 kubelet[2682]: I0514 00:03:17.063965 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-host-proc-sys-kernel\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064537 kubelet[2682]: I0514 00:03:17.063986 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dt8p\" (UniqueName: \"kubernetes.io/projected/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-kube-api-access-9dt8p\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064752 kubelet[2682]: I0514 00:03:17.064007 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cilium-run\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064752 kubelet[2682]: I0514 00:03:17.064024 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-lib-modules\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064752 kubelet[2682]: I0514 00:03:17.064066 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92b64838-ac9f-4106-80f4-8cb177f0d65a-xtables-lock\") pod \"kube-proxy-kfkcn\" (UID: \"92b64838-ac9f-4106-80f4-8cb177f0d65a\") " pod="kube-system/kube-proxy-kfkcn" May 14 
00:03:17.064752 kubelet[2682]: I0514 00:03:17.064090 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cilium-cgroup\") pod \"cilium-m7hbp\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " pod="kube-system/cilium-m7hbp" May 14 00:03:17.064752 kubelet[2682]: I0514 00:03:17.064212 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtlv\" (UniqueName: \"kubernetes.io/projected/92b64838-ac9f-4106-80f4-8cb177f0d65a-kube-api-access-bhtlv\") pod \"kube-proxy-kfkcn\" (UID: \"92b64838-ac9f-4106-80f4-8cb177f0d65a\") " pod="kube-system/kube-proxy-kfkcn" May 14 00:03:17.123408 systemd[1]: Created slice kubepods-besteffort-podacd22804_58df_4cfb_a525_80c532017468.slice - libcontainer container kubepods-besteffort-podacd22804_58df_4cfb_a525_80c532017468.slice. May 14 00:03:17.165913 kubelet[2682]: I0514 00:03:17.165420 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhptz\" (UniqueName: \"kubernetes.io/projected/acd22804-58df-4cfb-a525-80c532017468-kube-api-access-xhptz\") pod \"cilium-operator-6c4d7847fc-st7gq\" (UID: \"acd22804-58df-4cfb-a525-80c532017468\") " pod="kube-system/cilium-operator-6c4d7847fc-st7gq" May 14 00:03:17.165913 kubelet[2682]: I0514 00:03:17.165660 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acd22804-58df-4cfb-a525-80c532017468-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-st7gq\" (UID: \"acd22804-58df-4cfb-a525-80c532017468\") " pod="kube-system/cilium-operator-6c4d7847fc-st7gq" May 14 00:03:17.334362 kubelet[2682]: E0514 00:03:17.333455 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:17.334608 containerd[1502]: time="2025-05-14T00:03:17.334541490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kfkcn,Uid:92b64838-ac9f-4106-80f4-8cb177f0d65a,Namespace:kube-system,Attempt:0,}" May 14 00:03:17.341478 kubelet[2682]: E0514 00:03:17.341445 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:17.341906 containerd[1502]: time="2025-05-14T00:03:17.341873274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m7hbp,Uid:7f79bf59-b109-4ba9-82c3-b1542f5f6a02,Namespace:kube-system,Attempt:0,}" May 14 00:03:17.365213 containerd[1502]: time="2025-05-14T00:03:17.365140662Z" level=info msg="connecting to shim c6105d70ae41fd9fa08188031f5091d42c07f932f2feb2e79b98ba72969a47d9" address="unix:///run/containerd/s/9d747c88983eaa4880cb2a0b2f595496facf0f6b9d8b628154c8d1b838c33a3f" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:17.373841 containerd[1502]: time="2025-05-14T00:03:17.373778374Z" level=info msg="connecting to shim a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5" address="unix:///run/containerd/s/c20ef13d6f63d241e33bc5e1226c73ecd8d51648e89f5805daac8e58f0706e46" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:17.395789 systemd[1]: Started cri-containerd-c6105d70ae41fd9fa08188031f5091d42c07f932f2feb2e79b98ba72969a47d9.scope - libcontainer container c6105d70ae41fd9fa08188031f5091d42c07f932f2feb2e79b98ba72969a47d9. May 14 00:03:17.414565 systemd[1]: Started cri-containerd-a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5.scope - libcontainer container a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5. 
May 14 00:03:17.428000 kubelet[2682]: E0514 00:03:17.427757 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:17.431105 containerd[1502]: time="2025-05-14T00:03:17.430808052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-st7gq,Uid:acd22804-58df-4cfb-a525-80c532017468,Namespace:kube-system,Attempt:0,}" May 14 00:03:17.445193 containerd[1502]: time="2025-05-14T00:03:17.445036689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kfkcn,Uid:92b64838-ac9f-4106-80f4-8cb177f0d65a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6105d70ae41fd9fa08188031f5091d42c07f932f2feb2e79b98ba72969a47d9\"" May 14 00:03:17.446264 kubelet[2682]: E0514 00:03:17.446208 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:17.450194 containerd[1502]: time="2025-05-14T00:03:17.450138405Z" level=info msg="CreateContainer within sandbox \"c6105d70ae41fd9fa08188031f5091d42c07f932f2feb2e79b98ba72969a47d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:03:17.467811 containerd[1502]: time="2025-05-14T00:03:17.467605516Z" level=info msg="Container 125e40016b89737d7040e6212d2803fc766cab99b8951b96ecec8aa60e97d6a9: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:17.470803 containerd[1502]: time="2025-05-14T00:03:17.470757102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m7hbp,Uid:7f79bf59-b109-4ba9-82c3-b1542f5f6a02,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\"" May 14 00:03:17.471618 kubelet[2682]: E0514 00:03:17.471588 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:17.477404 containerd[1502]: time="2025-05-14T00:03:17.477264106Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 00:03:17.480134 containerd[1502]: time="2025-05-14T00:03:17.479777841Z" level=info msg="connecting to shim e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818" address="unix:///run/containerd/s/e59436763d4d76b896c14c7bcbecdff0f551b445d1d913dd76111ef121974e1e" namespace=k8s.io protocol=ttrpc version=3 May 14 00:03:17.497808 containerd[1502]: time="2025-05-14T00:03:17.497756193Z" level=info msg="CreateContainer within sandbox \"c6105d70ae41fd9fa08188031f5091d42c07f932f2feb2e79b98ba72969a47d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"125e40016b89737d7040e6212d2803fc766cab99b8951b96ecec8aa60e97d6a9\"" May 14 00:03:17.501047 containerd[1502]: time="2025-05-14T00:03:17.500990530Z" level=info msg="StartContainer for \"125e40016b89737d7040e6212d2803fc766cab99b8951b96ecec8aa60e97d6a9\"" May 14 00:03:17.503388 containerd[1502]: time="2025-05-14T00:03:17.503353620Z" level=info msg="connecting to shim 125e40016b89737d7040e6212d2803fc766cab99b8951b96ecec8aa60e97d6a9" address="unix:///run/containerd/s/9d747c88983eaa4880cb2a0b2f595496facf0f6b9d8b628154c8d1b838c33a3f" protocol=ttrpc version=3 May 14 00:03:17.514487 systemd[1]: Started cri-containerd-e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818.scope - libcontainer container e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818. May 14 00:03:17.522842 systemd[1]: Started cri-containerd-125e40016b89737d7040e6212d2803fc766cab99b8951b96ecec8aa60e97d6a9.scope - libcontainer container 125e40016b89737d7040e6212d2803fc766cab99b8951b96ecec8aa60e97d6a9. 
May 14 00:03:17.563503 containerd[1502]: time="2025-05-14T00:03:17.563461943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-st7gq,Uid:acd22804-58df-4cfb-a525-80c532017468,Namespace:kube-system,Attempt:0,} returns sandbox id \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\"" May 14 00:03:17.564927 kubelet[2682]: E0514 00:03:17.564227 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:17.581176 containerd[1502]: time="2025-05-14T00:03:17.581131613Z" level=info msg="StartContainer for \"125e40016b89737d7040e6212d2803fc766cab99b8951b96ecec8aa60e97d6a9\" returns successfully" May 14 00:03:18.513011 kubelet[2682]: E0514 00:03:18.512208 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:19.517519 kubelet[2682]: E0514 00:03:19.517472 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:19.749367 kubelet[2682]: E0514 00:03:19.749158 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:19.767327 kubelet[2682]: I0514 00:03:19.767217 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kfkcn" podStartSLOduration=2.767191979 podStartE2EDuration="2.767191979s" podCreationTimestamp="2025-05-14 00:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:03:18.541894896 +0000 UTC m=+7.192451791" watchObservedRunningTime="2025-05-14 
00:03:19.767191979 +0000 UTC m=+8.417748874" May 14 00:03:20.519808 kubelet[2682]: E0514 00:03:20.519674 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:21.258541 kubelet[2682]: E0514 00:03:21.258503 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:21.522413 kubelet[2682]: E0514 00:03:21.520797 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:22.414837 kubelet[2682]: E0514 00:03:22.414765 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:22.521931 kubelet[2682]: E0514 00:03:22.521896 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:28.458735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609392786.mount: Deactivated successfully. 
May 14 00:03:40.460370 containerd[1502]: time="2025-05-14T00:03:40.460207423Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:03:40.588954 containerd[1502]: time="2025-05-14T00:03:40.588755863Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 14 00:03:40.691316 containerd[1502]: time="2025-05-14T00:03:40.691191271Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:03:40.693546 containerd[1502]: time="2025-05-14T00:03:40.693359280Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 23.215975325s" May 14 00:03:40.693546 containerd[1502]: time="2025-05-14T00:03:40.693429321Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 14 00:03:40.697565 containerd[1502]: time="2025-05-14T00:03:40.697523809Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 00:03:40.793110 containerd[1502]: time="2025-05-14T00:03:40.792947093Z" level=info msg="CreateContainer within sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:03:42.748929 containerd[1502]: time="2025-05-14T00:03:42.748853511Z" level=info msg="Container 1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:42.753750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3481556600.mount: Deactivated successfully. May 14 00:03:43.175114 containerd[1502]: time="2025-05-14T00:03:43.175061641Z" level=info msg="CreateContainer within sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\"" May 14 00:03:43.175694 containerd[1502]: time="2025-05-14T00:03:43.175651812Z" level=info msg="StartContainer for \"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\"" May 14 00:03:43.176772 containerd[1502]: time="2025-05-14T00:03:43.176734081Z" level=info msg="connecting to shim 1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452" address="unix:///run/containerd/s/c20ef13d6f63d241e33bc5e1226c73ecd8d51648e89f5805daac8e58f0706e46" protocol=ttrpc version=3 May 14 00:03:43.200490 systemd[1]: Started cri-containerd-1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452.scope - libcontainer container 1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452. May 14 00:03:43.288555 systemd[1]: cri-containerd-1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452.scope: Deactivated successfully. 
May 14 00:03:43.289921 containerd[1502]: time="2025-05-14T00:03:43.289879896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\" id:\"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\" pid:3100 exited_at:{seconds:1747181023 nanos:289356389}" May 14 00:03:43.559191 containerd[1502]: time="2025-05-14T00:03:43.558856975Z" level=info msg="received exit event container_id:\"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\" id:\"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\" pid:3100 exited_at:{seconds:1747181023 nanos:289356389}" May 14 00:03:43.560061 containerd[1502]: time="2025-05-14T00:03:43.560018511Z" level=info msg="StartContainer for \"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\" returns successfully" May 14 00:03:43.580624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452-rootfs.mount: Deactivated successfully. 
May 14 00:03:44.125700 kubelet[2682]: E0514 00:03:44.125646 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:45.038329 kubelet[2682]: E0514 00:03:45.038291 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:45.040351 containerd[1502]: time="2025-05-14T00:03:45.040295006Z" level=info msg="CreateContainer within sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:03:46.339003 containerd[1502]: time="2025-05-14T00:03:46.338940389Z" level=info msg="Container 451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:47.044233 containerd[1502]: time="2025-05-14T00:03:47.044177727Z" level=info msg="CreateContainer within sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\"" May 14 00:03:47.044671 containerd[1502]: time="2025-05-14T00:03:47.044533051Z" level=info msg="StartContainer for \"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\"" May 14 00:03:47.045669 containerd[1502]: time="2025-05-14T00:03:47.045616445Z" level=info msg="connecting to shim 451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4" address="unix:///run/containerd/s/c20ef13d6f63d241e33bc5e1226c73ecd8d51648e89f5805daac8e58f0706e46" protocol=ttrpc version=3 May 14 00:03:47.068555 systemd[1]: Started cri-containerd-451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4.scope - libcontainer container 
451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4. May 14 00:03:47.201265 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:03:47.201717 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 00:03:47.201919 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 00:03:47.204628 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 00:03:47.206416 containerd[1502]: time="2025-05-14T00:03:47.206366405Z" level=info msg="TaskExit event in podsandbox handler container_id:\"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\" id:\"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\" pid:3146 exited_at:{seconds:1747181027 nanos:205961810}" May 14 00:03:47.206822 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:03:47.207262 systemd[1]: cri-containerd-451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4.scope: Deactivated successfully. May 14 00:03:47.454624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 00:03:47.493090 containerd[1502]: time="2025-05-14T00:03:47.493036296Z" level=info msg="received exit event container_id:\"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\" id:\"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\" pid:3146 exited_at:{seconds:1747181027 nanos:205961810}" May 14 00:03:47.494494 containerd[1502]: time="2025-05-14T00:03:47.494182206Z" level=info msg="StartContainer for \"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\" returns successfully" May 14 00:03:47.512517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4-rootfs.mount: Deactivated successfully. 
May 14 00:03:48.046848 kubelet[2682]: E0514 00:03:48.046812 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:48.048934 containerd[1502]: time="2025-05-14T00:03:48.048881335Z" level=info msg="CreateContainer within sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:03:48.759259 containerd[1502]: time="2025-05-14T00:03:48.759176785Z" level=info msg="Container 4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:49.268444 containerd[1502]: time="2025-05-14T00:03:49.268375233Z" level=info msg="CreateContainer within sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\"" May 14 00:03:49.269027 containerd[1502]: time="2025-05-14T00:03:49.268949775Z" level=info msg="StartContainer for \"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\"" May 14 00:03:49.273069 containerd[1502]: time="2025-05-14T00:03:49.270790134Z" level=info msg="connecting to shim 4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32" address="unix:///run/containerd/s/c20ef13d6f63d241e33bc5e1226c73ecd8d51648e89f5805daac8e58f0706e46" protocol=ttrpc version=3 May 14 00:03:49.296425 systemd[1]: Started cri-containerd-4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32.scope - libcontainer container 4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32. May 14 00:03:49.343707 systemd[1]: cri-containerd-4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32.scope: Deactivated successfully. 
May 14 00:03:49.344636 containerd[1502]: time="2025-05-14T00:03:49.344608700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\" id:\"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\" pid:3197 exited_at:{seconds:1747181029 nanos:344294933}" May 14 00:03:49.573476 containerd[1502]: time="2025-05-14T00:03:49.573325393Z" level=info msg="received exit event container_id:\"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\" id:\"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\" pid:3197 exited_at:{seconds:1747181029 nanos:344294933}" May 14 00:03:49.582663 containerd[1502]: time="2025-05-14T00:03:49.582631363Z" level=info msg="StartContainer for \"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\" returns successfully" May 14 00:03:49.596649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32-rootfs.mount: Deactivated successfully. May 14 00:03:50.053496 kubelet[2682]: E0514 00:03:50.053451 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:51.058702 kubelet[2682]: E0514 00:03:51.058614 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:51.062422 containerd[1502]: time="2025-05-14T00:03:51.062365286Z" level=info msg="CreateContainer within sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:03:51.177795 systemd[1]: Started sshd@8-10.0.0.106:22-10.0.0.1:57626.service - OpenSSH per-connection server daemon (10.0.0.1:57626). 
May 14 00:03:51.317858 sshd[3226]: Accepted publickey for core from 10.0.0.1 port 57626 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:03:51.320077 sshd-session[3226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:51.329570 systemd-logind[1494]: New session 8 of user core. May 14 00:03:51.336427 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 00:03:51.697868 sshd[3228]: Connection closed by 10.0.0.1 port 57626 May 14 00:03:51.698200 sshd-session[3226]: pam_unix(sshd:session): session closed for user core May 14 00:03:51.702946 systemd[1]: sshd@8-10.0.0.106:22-10.0.0.1:57626.service: Deactivated successfully. May 14 00:03:51.705420 systemd[1]: session-8.scope: Deactivated successfully. May 14 00:03:51.706320 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit. May 14 00:03:51.707841 systemd-logind[1494]: Removed session 8. May 14 00:03:52.017945 containerd[1502]: time="2025-05-14T00:03:52.017834388Z" level=info msg="Container c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:52.049697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383187457.mount: Deactivated successfully. 
May 14 00:03:52.331483 containerd[1502]: time="2025-05-14T00:03:52.331351666Z" level=info msg="CreateContainer within sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\"" May 14 00:03:52.332129 containerd[1502]: time="2025-05-14T00:03:52.331960621Z" level=info msg="StartContainer for \"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\"" May 14 00:03:52.332784 containerd[1502]: time="2025-05-14T00:03:52.332755816Z" level=info msg="connecting to shim c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281" address="unix:///run/containerd/s/c20ef13d6f63d241e33bc5e1226c73ecd8d51648e89f5805daac8e58f0706e46" protocol=ttrpc version=3 May 14 00:03:52.360476 systemd[1]: Started cri-containerd-c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281.scope - libcontainer container c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281. May 14 00:03:52.424235 systemd[1]: cri-containerd-c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281.scope: Deactivated successfully. 
May 14 00:03:52.425186 containerd[1502]: time="2025-05-14T00:03:52.424831952Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\" id:\"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\" pid:3263 exited_at:{seconds:1747181032 nanos:424574816}" May 14 00:03:52.628644 containerd[1502]: time="2025-05-14T00:03:52.628475883Z" level=info msg="received exit event container_id:\"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\" id:\"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\" pid:3263 exited_at:{seconds:1747181032 nanos:424574816}" May 14 00:03:52.637639 containerd[1502]: time="2025-05-14T00:03:52.637582935Z" level=info msg="StartContainer for \"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\" returns successfully" May 14 00:03:53.019555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281-rootfs.mount: Deactivated successfully. May 14 00:03:53.066986 kubelet[2682]: E0514 00:03:53.065598 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:53.072176 containerd[1502]: time="2025-05-14T00:03:53.072110094Z" level=info msg="CreateContainer within sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:03:53.214313 containerd[1502]: time="2025-05-14T00:03:53.214237777Z" level=info msg="Container a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:53.218825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2737881479.mount: Deactivated successfully. 
May 14 00:03:53.438384 containerd[1502]: time="2025-05-14T00:03:53.438316838Z" level=info msg="CreateContainer within sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\"" May 14 00:03:53.439060 containerd[1502]: time="2025-05-14T00:03:53.438883611Z" level=info msg="StartContainer for \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\"" May 14 00:03:53.439938 containerd[1502]: time="2025-05-14T00:03:53.439891494Z" level=info msg="connecting to shim a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7" address="unix:///run/containerd/s/c20ef13d6f63d241e33bc5e1226c73ecd8d51648e89f5805daac8e58f0706e46" protocol=ttrpc version=3 May 14 00:03:53.465535 systemd[1]: Started cri-containerd-a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7.scope - libcontainer container a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7. 
May 14 00:03:53.559259 containerd[1502]: time="2025-05-14T00:03:53.559206452Z" level=info msg="StartContainer for \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" returns successfully" May 14 00:03:53.642747 containerd[1502]: time="2025-05-14T00:03:53.642702769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" id:\"e396b7964e789f4fe078fe39025f7ec1630b490fa27650934d70c303973951b3\" pid:3348 exited_at:{seconds:1747181033 nanos:642308018}" May 14 00:03:53.650687 containerd[1502]: time="2025-05-14T00:03:53.650613832Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:03:53.661477 containerd[1502]: time="2025-05-14T00:03:53.661388667Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 14 00:03:53.668975 kubelet[2682]: I0514 00:03:53.668326 2682 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 14 00:03:53.674831 containerd[1502]: time="2025-05-14T00:03:53.674756542Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:03:53.676150 containerd[1502]: time="2025-05-14T00:03:53.676112075Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 12.978551457s" May 14 
00:03:53.676221 containerd[1502]: time="2025-05-14T00:03:53.676154537Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 14 00:03:53.678097 containerd[1502]: time="2025-05-14T00:03:53.678055122Z" level=info msg="CreateContainer within sandbox \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 00:03:53.867663 kubelet[2682]: I0514 00:03:53.867512 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bfafc42-4320-41d6-8e77-d63584100440-config-volume\") pod \"coredns-668d6bf9bc-wcw6t\" (UID: \"4bfafc42-4320-41d6-8e77-d63584100440\") " pod="kube-system/coredns-668d6bf9bc-wcw6t" May 14 00:03:53.867663 kubelet[2682]: I0514 00:03:53.867570 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36c9a799-e0a7-4b65-9bde-a5fe99dd9bea-config-volume\") pod \"coredns-668d6bf9bc-nz7rl\" (UID: \"36c9a799-e0a7-4b65-9bde-a5fe99dd9bea\") " pod="kube-system/coredns-668d6bf9bc-nz7rl" May 14 00:03:53.867663 kubelet[2682]: I0514 00:03:53.867589 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hk8c\" (UniqueName: \"kubernetes.io/projected/4bfafc42-4320-41d6-8e77-d63584100440-kube-api-access-4hk8c\") pod \"coredns-668d6bf9bc-wcw6t\" (UID: \"4bfafc42-4320-41d6-8e77-d63584100440\") " pod="kube-system/coredns-668d6bf9bc-wcw6t" May 14 00:03:53.867663 kubelet[2682]: I0514 00:03:53.867606 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8bmt\" (UniqueName: 
\"kubernetes.io/projected/36c9a799-e0a7-4b65-9bde-a5fe99dd9bea-kube-api-access-v8bmt\") pod \"coredns-668d6bf9bc-nz7rl\" (UID: \"36c9a799-e0a7-4b65-9bde-a5fe99dd9bea\") " pod="kube-system/coredns-668d6bf9bc-nz7rl" May 14 00:03:53.920447 containerd[1502]: time="2025-05-14T00:03:53.920325366Z" level=info msg="Container 232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6: CDI devices from CRI Config.CDIDevices: []" May 14 00:03:53.923585 systemd[1]: Created slice kubepods-burstable-pod36c9a799_e0a7_4b65_9bde_a5fe99dd9bea.slice - libcontainer container kubepods-burstable-pod36c9a799_e0a7_4b65_9bde_a5fe99dd9bea.slice. May 14 00:03:53.930706 systemd[1]: Created slice kubepods-burstable-pod4bfafc42_4320_41d6_8e77_d63584100440.slice - libcontainer container kubepods-burstable-pod4bfafc42_4320_41d6_8e77_d63584100440.slice. May 14 00:03:53.992041 containerd[1502]: time="2025-05-14T00:03:53.991993197Z" level=info msg="CreateContainer within sandbox \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\"" May 14 00:03:53.993871 containerd[1502]: time="2025-05-14T00:03:53.993605245Z" level=info msg="StartContainer for \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\"" May 14 00:03:53.994971 containerd[1502]: time="2025-05-14T00:03:53.994813174Z" level=info msg="connecting to shim 232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6" address="unix:///run/containerd/s/e59436763d4d76b896c14c7bcbecdff0f551b445d1d913dd76111ef121974e1e" protocol=ttrpc version=3 May 14 00:03:54.032478 systemd[1]: Started cri-containerd-232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6.scope - libcontainer container 232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6. 
May 14 00:03:54.097566 containerd[1502]: time="2025-05-14T00:03:54.097482950Z" level=info msg="StartContainer for \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\" returns successfully" May 14 00:03:54.104292 kubelet[2682]: E0514 00:03:54.104061 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:54.204612 kubelet[2682]: I0514 00:03:54.203045 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m7hbp" podStartSLOduration=13.979098223 podStartE2EDuration="37.203024311s" podCreationTimestamp="2025-05-14 00:03:17 +0000 UTC" firstStartedPulling="2025-05-14 00:03:17.47339313 +0000 UTC m=+6.123950025" lastFinishedPulling="2025-05-14 00:03:40.697319218 +0000 UTC m=+29.347876113" observedRunningTime="2025-05-14 00:03:54.202565496 +0000 UTC m=+42.853122411" watchObservedRunningTime="2025-05-14 00:03:54.203024311 +0000 UTC m=+42.853581206" May 14 00:03:54.229591 kubelet[2682]: E0514 00:03:54.229534 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:54.231591 containerd[1502]: time="2025-05-14T00:03:54.231539720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nz7rl,Uid:36c9a799-e0a7-4b65-9bde-a5fe99dd9bea,Namespace:kube-system,Attempt:0,}" May 14 00:03:54.233676 kubelet[2682]: E0514 00:03:54.233653 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:54.234031 containerd[1502]: time="2025-05-14T00:03:54.234001422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wcw6t,Uid:4bfafc42-4320-41d6-8e77-d63584100440,Namespace:kube-system,Attempt:0,}" May 
14 00:03:55.106200 kubelet[2682]: E0514 00:03:55.106150 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:55.106831 kubelet[2682]: E0514 00:03:55.106421 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:55.247026 kubelet[2682]: I0514 00:03:55.246650 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-st7gq" podStartSLOduration=2.135602747 podStartE2EDuration="38.246628269s" podCreationTimestamp="2025-05-14 00:03:17 +0000 UTC" firstStartedPulling="2025-05-14 00:03:17.565798205 +0000 UTC m=+6.216355100" lastFinishedPulling="2025-05-14 00:03:53.676823727 +0000 UTC m=+42.327380622" observedRunningTime="2025-05-14 00:03:55.246492357 +0000 UTC m=+43.897049252" watchObservedRunningTime="2025-05-14 00:03:55.246628269 +0000 UTC m=+43.897185174" May 14 00:03:56.108052 kubelet[2682]: E0514 00:03:56.108015 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:56.108533 kubelet[2682]: E0514 00:03:56.108208 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:56.716077 systemd[1]: Started sshd@9-10.0.0.106:22-10.0.0.1:57630.service - OpenSSH per-connection server daemon (10.0.0.1:57630). 
May 14 00:03:56.782117 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 57630 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:03:56.783912 sshd-session[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:03:56.788764 systemd-logind[1494]: New session 9 of user core. May 14 00:03:56.797407 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 00:03:56.945145 sshd[3476]: Connection closed by 10.0.0.1 port 57630 May 14 00:03:56.945551 sshd-session[3474]: pam_unix(sshd:session): session closed for user core May 14 00:03:56.949855 systemd[1]: sshd@9-10.0.0.106:22-10.0.0.1:57630.service: Deactivated successfully. May 14 00:03:56.951904 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:03:56.952816 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit. May 14 00:03:56.953670 systemd-logind[1494]: Removed session 9. May 14 00:03:58.128564 systemd-networkd[1424]: cilium_host: Link UP May 14 00:03:58.128794 systemd-networkd[1424]: cilium_net: Link UP May 14 00:03:58.129035 systemd-networkd[1424]: cilium_net: Gained carrier May 14 00:03:58.129243 systemd-networkd[1424]: cilium_host: Gained carrier May 14 00:03:58.191418 systemd-networkd[1424]: cilium_net: Gained IPv6LL May 14 00:03:58.229482 kubelet[2682]: E0514 00:03:58.228804 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:03:58.275269 systemd-networkd[1424]: cilium_vxlan: Link UP May 14 00:03:58.275295 systemd-networkd[1424]: cilium_vxlan: Gained carrier May 14 00:03:58.466554 systemd-networkd[1424]: cilium_host: Gained IPv6LL May 14 00:03:58.554351 kernel: NET: Registered PF_ALG protocol family May 14 00:03:59.418560 systemd-networkd[1424]: cilium_vxlan: Gained IPv6LL May 14 00:03:59.589298 systemd-networkd[1424]: lxc_health: Link UP May 14 00:03:59.592714 
systemd-networkd[1424]: lxc_health: Gained carrier May 14 00:03:59.920310 kernel: eth0: renamed from tmp5606c May 14 00:03:59.926204 systemd-networkd[1424]: lxcee84ae133ec7: Link UP May 14 00:03:59.929323 systemd-networkd[1424]: lxcee84ae133ec7: Gained carrier May 14 00:03:59.946488 systemd-networkd[1424]: lxca8f207432d6d: Link UP May 14 00:03:59.950308 kernel: eth0: renamed from tmp5940f May 14 00:03:59.968495 systemd-networkd[1424]: lxca8f207432d6d: Gained carrier May 14 00:04:00.890612 systemd-networkd[1424]: lxc_health: Gained IPv6LL May 14 00:04:01.346049 kubelet[2682]: E0514 00:04:01.345932 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:01.402555 systemd-networkd[1424]: lxca8f207432d6d: Gained IPv6LL May 14 00:04:01.852522 systemd-networkd[1424]: lxcee84ae133ec7: Gained IPv6LL May 14 00:04:01.959790 systemd[1]: Started sshd@10-10.0.0.106:22-10.0.0.1:36806.service - OpenSSH per-connection server daemon (10.0.0.1:36806). May 14 00:04:02.026112 sshd[3872]: Accepted publickey for core from 10.0.0.1 port 36806 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:02.027952 sshd-session[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:02.033339 systemd-logind[1494]: New session 10 of user core. May 14 00:04:02.040713 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 14 00:04:02.148324 kubelet[2682]: E0514 00:04:02.148101 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:02.257149 sshd[3874]: Connection closed by 10.0.0.1 port 36806 May 14 00:04:02.257302 sshd-session[3872]: pam_unix(sshd:session): session closed for user core May 14 00:04:02.261869 systemd[1]: sshd@10-10.0.0.106:22-10.0.0.1:36806.service: Deactivated successfully. May 14 00:04:02.266351 systemd[1]: session-10.scope: Deactivated successfully. May 14 00:04:02.269396 systemd-logind[1494]: Session 10 logged out. Waiting for processes to exit. May 14 00:04:02.271215 systemd-logind[1494]: Removed session 10. May 14 00:04:06.239829 containerd[1502]: time="2025-05-14T00:04:06.239755342Z" level=info msg="connecting to shim 5940fd6992354ee86f632b4c82630413801608f0e5dcdb515f111529222070ea" address="unix:///run/containerd/s/5beaa74dbaf892c22dce3b225c57a3f34f000f6f4e0c8a49167adabe479f0318" namespace=k8s.io protocol=ttrpc version=3 May 14 00:04:06.266329 containerd[1502]: time="2025-05-14T00:04:06.265568702Z" level=info msg="connecting to shim 5606ca89011deda224bd61eb88211e018fa837bd7a3a11a610a8946b450fc32f" address="unix:///run/containerd/s/4f26a1e24777f6c337dfcda380ea5bbf9b3c6ac9c7ae45f7d94855ee2f42f75b" namespace=k8s.io protocol=ttrpc version=3 May 14 00:04:06.290905 systemd[1]: Started cri-containerd-5940fd6992354ee86f632b4c82630413801608f0e5dcdb515f111529222070ea.scope - libcontainer container 5940fd6992354ee86f632b4c82630413801608f0e5dcdb515f111529222070ea. May 14 00:04:06.309190 systemd[1]: Started cri-containerd-5606ca89011deda224bd61eb88211e018fa837bd7a3a11a610a8946b450fc32f.scope - libcontainer container 5606ca89011deda224bd61eb88211e018fa837bd7a3a11a610a8946b450fc32f. 
May 14 00:04:06.315581 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:04:06.332836 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:04:06.404471 containerd[1502]: time="2025-05-14T00:04:06.404385839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wcw6t,Uid:4bfafc42-4320-41d6-8e77-d63584100440,Namespace:kube-system,Attempt:0,} returns sandbox id \"5940fd6992354ee86f632b4c82630413801608f0e5dcdb515f111529222070ea\"" May 14 00:04:06.405517 kubelet[2682]: E0514 00:04:06.405481 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:06.410552 containerd[1502]: time="2025-05-14T00:04:06.410500183Z" level=info msg="CreateContainer within sandbox \"5940fd6992354ee86f632b4c82630413801608f0e5dcdb515f111529222070ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:04:06.438768 containerd[1502]: time="2025-05-14T00:04:06.438608933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nz7rl,Uid:36c9a799-e0a7-4b65-9bde-a5fe99dd9bea,Namespace:kube-system,Attempt:0,} returns sandbox id \"5606ca89011deda224bd61eb88211e018fa837bd7a3a11a610a8946b450fc32f\"" May 14 00:04:06.439604 kubelet[2682]: E0514 00:04:06.439550 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:06.441855 containerd[1502]: time="2025-05-14T00:04:06.441805817Z" level=info msg="CreateContainer within sandbox \"5606ca89011deda224bd61eb88211e018fa837bd7a3a11a610a8946b450fc32f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:04:06.878805 containerd[1502]: 
time="2025-05-14T00:04:06.877980210Z" level=info msg="Container 0b409350d2d0ad3bbf244198c8da75025e0d427cf5dbecb7de309156b0236830: CDI devices from CRI Config.CDIDevices: []" May 14 00:04:06.890682 containerd[1502]: time="2025-05-14T00:04:06.890603183Z" level=info msg="Container 8dc378c118b7258b5b54ba804106ac1b8933d275188b0928c1074fbb55366113: CDI devices from CRI Config.CDIDevices: []" May 14 00:04:07.089867 containerd[1502]: time="2025-05-14T00:04:07.089804573Z" level=info msg="CreateContainer within sandbox \"5606ca89011deda224bd61eb88211e018fa837bd7a3a11a610a8946b450fc32f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b409350d2d0ad3bbf244198c8da75025e0d427cf5dbecb7de309156b0236830\"" May 14 00:04:07.091946 containerd[1502]: time="2025-05-14T00:04:07.090475756Z" level=info msg="StartContainer for \"0b409350d2d0ad3bbf244198c8da75025e0d427cf5dbecb7de309156b0236830\"" May 14 00:04:07.091946 containerd[1502]: time="2025-05-14T00:04:07.091631105Z" level=info msg="connecting to shim 0b409350d2d0ad3bbf244198c8da75025e0d427cf5dbecb7de309156b0236830" address="unix:///run/containerd/s/4f26a1e24777f6c337dfcda380ea5bbf9b3c6ac9c7ae45f7d94855ee2f42f75b" protocol=ttrpc version=3 May 14 00:04:07.117547 systemd[1]: Started cri-containerd-0b409350d2d0ad3bbf244198c8da75025e0d427cf5dbecb7de309156b0236830.scope - libcontainer container 0b409350d2d0ad3bbf244198c8da75025e0d427cf5dbecb7de309156b0236830. 
May 14 00:04:07.179379 containerd[1502]: time="2025-05-14T00:04:07.179331765Z" level=info msg="StartContainer for \"0b409350d2d0ad3bbf244198c8da75025e0d427cf5dbecb7de309156b0236830\" returns successfully" May 14 00:04:07.179630 containerd[1502]: time="2025-05-14T00:04:07.179406678Z" level=info msg="CreateContainer within sandbox \"5940fd6992354ee86f632b4c82630413801608f0e5dcdb515f111529222070ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8dc378c118b7258b5b54ba804106ac1b8933d275188b0928c1074fbb55366113\"" May 14 00:04:07.179983 containerd[1502]: time="2025-05-14T00:04:07.179957271Z" level=info msg="StartContainer for \"8dc378c118b7258b5b54ba804106ac1b8933d275188b0928c1074fbb55366113\"" May 14 00:04:07.181670 containerd[1502]: time="2025-05-14T00:04:07.181638515Z" level=info msg="connecting to shim 8dc378c118b7258b5b54ba804106ac1b8933d275188b0928c1074fbb55366113" address="unix:///run/containerd/s/5beaa74dbaf892c22dce3b225c57a3f34f000f6f4e0c8a49167adabe479f0318" protocol=ttrpc version=3 May 14 00:04:07.212922 systemd[1]: Started cri-containerd-8dc378c118b7258b5b54ba804106ac1b8933d275188b0928c1074fbb55366113.scope - libcontainer container 8dc378c118b7258b5b54ba804106ac1b8933d275188b0928c1074fbb55366113. May 14 00:04:07.233516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount524738100.mount: Deactivated successfully. May 14 00:04:07.275769 systemd[1]: Started sshd@11-10.0.0.106:22-10.0.0.1:36818.service - OpenSSH per-connection server daemon (10.0.0.1:36818). 
May 14 00:04:07.315529 containerd[1502]: time="2025-05-14T00:04:07.315387532Z" level=info msg="StartContainer for \"8dc378c118b7258b5b54ba804106ac1b8933d275188b0928c1074fbb55366113\" returns successfully" May 14 00:04:07.373440 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 36818 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:07.375790 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:07.385011 systemd-logind[1494]: New session 11 of user core. May 14 00:04:07.392531 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 00:04:07.583432 sshd[4050]: Connection closed by 10.0.0.1 port 36818 May 14 00:04:07.583793 sshd-session[4040]: pam_unix(sshd:session): session closed for user core May 14 00:04:07.590449 systemd[1]: sshd@11-10.0.0.106:22-10.0.0.1:36818.service: Deactivated successfully. May 14 00:04:07.593870 systemd[1]: session-11.scope: Deactivated successfully. May 14 00:04:07.595410 systemd-logind[1494]: Session 11 logged out. Waiting for processes to exit. May 14 00:04:07.596655 systemd-logind[1494]: Removed session 11. 
May 14 00:04:08.205310 kubelet[2682]: E0514 00:04:08.205240 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:08.206014 kubelet[2682]: E0514 00:04:08.205997 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:08.433128 kubelet[2682]: I0514 00:04:08.432811 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nz7rl" podStartSLOduration=51.432791137 podStartE2EDuration="51.432791137s" podCreationTimestamp="2025-05-14 00:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:04:08.432165031 +0000 UTC m=+57.082721936" watchObservedRunningTime="2025-05-14 00:04:08.432791137 +0000 UTC m=+57.083348042" May 14 00:04:08.658127 kubelet[2682]: I0514 00:04:08.658043 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wcw6t" podStartSLOduration=51.658016395 podStartE2EDuration="51.658016395s" podCreationTimestamp="2025-05-14 00:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:04:08.530574562 +0000 UTC m=+57.181131477" watchObservedRunningTime="2025-05-14 00:04:08.658016395 +0000 UTC m=+57.308573290" May 14 00:04:09.209872 kubelet[2682]: E0514 00:04:09.209629 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:09.210503 kubelet[2682]: E0514 00:04:09.210367 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:10.210981 kubelet[2682]: E0514 00:04:10.210933 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:10.210981 kubelet[2682]: E0514 00:04:10.210968 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:12.604000 systemd[1]: Started sshd@12-10.0.0.106:22-10.0.0.1:45008.service - OpenSSH per-connection server daemon (10.0.0.1:45008). May 14 00:04:12.961637 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 45008 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:12.963857 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:12.971002 systemd-logind[1494]: New session 12 of user core. May 14 00:04:12.981502 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 00:04:13.154398 sshd[4087]: Connection closed by 10.0.0.1 port 45008 May 14 00:04:13.156643 sshd-session[4085]: pam_unix(sshd:session): session closed for user core May 14 00:04:13.168035 systemd[1]: sshd@12-10.0.0.106:22-10.0.0.1:45008.service: Deactivated successfully. May 14 00:04:13.172455 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:04:13.174736 systemd-logind[1494]: Session 12 logged out. Waiting for processes to exit. May 14 00:04:13.176124 systemd-logind[1494]: Removed session 12. May 14 00:04:18.169614 systemd[1]: Started sshd@13-10.0.0.106:22-10.0.0.1:50810.service - OpenSSH per-connection server daemon (10.0.0.1:50810). 
May 14 00:04:18.230020 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 50810 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:18.231653 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:18.236265 systemd-logind[1494]: New session 13 of user core. May 14 00:04:18.246424 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 00:04:18.373553 sshd[4112]: Connection closed by 10.0.0.1 port 50810 May 14 00:04:18.374090 sshd-session[4110]: pam_unix(sshd:session): session closed for user core May 14 00:04:18.381075 systemd[1]: sshd@13-10.0.0.106:22-10.0.0.1:50810.service: Deactivated successfully. May 14 00:04:18.383918 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:04:18.386264 systemd-logind[1494]: Session 13 logged out. Waiting for processes to exit. May 14 00:04:18.389134 systemd-logind[1494]: Removed session 13. May 14 00:04:23.387578 systemd[1]: Started sshd@14-10.0.0.106:22-10.0.0.1:50816.service - OpenSSH per-connection server daemon (10.0.0.1:50816). May 14 00:04:23.438424 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 50816 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:23.440732 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:23.448188 systemd-logind[1494]: New session 14 of user core. May 14 00:04:23.456691 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 00:04:23.596270 sshd[4128]: Connection closed by 10.0.0.1 port 50816 May 14 00:04:23.596789 sshd-session[4126]: pam_unix(sshd:session): session closed for user core May 14 00:04:23.602443 systemd[1]: sshd@14-10.0.0.106:22-10.0.0.1:50816.service: Deactivated successfully. May 14 00:04:23.605194 systemd[1]: session-14.scope: Deactivated successfully. May 14 00:04:23.607961 systemd-logind[1494]: Session 14 logged out. Waiting for processes to exit. 
May 14 00:04:23.609862 systemd-logind[1494]: Removed session 14. May 14 00:04:28.611559 systemd[1]: Started sshd@15-10.0.0.106:22-10.0.0.1:58002.service - OpenSSH per-connection server daemon (10.0.0.1:58002). May 14 00:04:28.675517 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 58002 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:28.677284 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:28.683231 systemd-logind[1494]: New session 15 of user core. May 14 00:04:28.690449 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 00:04:28.848261 sshd[4145]: Connection closed by 10.0.0.1 port 58002 May 14 00:04:28.849229 sshd-session[4143]: pam_unix(sshd:session): session closed for user core May 14 00:04:28.862553 systemd[1]: sshd@15-10.0.0.106:22-10.0.0.1:58002.service: Deactivated successfully. May 14 00:04:28.865124 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:04:28.865895 systemd-logind[1494]: Session 15 logged out. Waiting for processes to exit. May 14 00:04:28.868429 systemd[1]: Started sshd@16-10.0.0.106:22-10.0.0.1:58018.service - OpenSSH per-connection server daemon (10.0.0.1:58018). May 14 00:04:28.869610 systemd-logind[1494]: Removed session 15. May 14 00:04:28.920142 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 58018 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:28.922497 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:28.928319 systemd-logind[1494]: New session 16 of user core. May 14 00:04:28.939563 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 14 00:04:29.107184 sshd[4161]: Connection closed by 10.0.0.1 port 58018 May 14 00:04:29.109749 sshd-session[4158]: pam_unix(sshd:session): session closed for user core May 14 00:04:29.119368 systemd[1]: sshd@16-10.0.0.106:22-10.0.0.1:58018.service: Deactivated successfully. May 14 00:04:29.125354 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:04:29.126372 systemd-logind[1494]: Session 16 logged out. Waiting for processes to exit. May 14 00:04:29.130264 systemd[1]: Started sshd@17-10.0.0.106:22-10.0.0.1:58024.service - OpenSSH per-connection server daemon (10.0.0.1:58024). May 14 00:04:29.131126 systemd-logind[1494]: Removed session 16. May 14 00:04:29.191600 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 58024 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:29.193573 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:29.198994 systemd-logind[1494]: New session 17 of user core. May 14 00:04:29.209595 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 00:04:29.358324 sshd[4175]: Connection closed by 10.0.0.1 port 58024 May 14 00:04:29.358763 sshd-session[4172]: pam_unix(sshd:session): session closed for user core May 14 00:04:29.364425 systemd[1]: sshd@17-10.0.0.106:22-10.0.0.1:58024.service: Deactivated successfully. May 14 00:04:29.367482 systemd[1]: session-17.scope: Deactivated successfully. May 14 00:04:29.368542 systemd-logind[1494]: Session 17 logged out. Waiting for processes to exit. May 14 00:04:29.369917 systemd-logind[1494]: Removed session 17. May 14 00:04:34.375156 systemd[1]: Started sshd@18-10.0.0.106:22-10.0.0.1:58040.service - OpenSSH per-connection server daemon (10.0.0.1:58040). 
May 14 00:04:34.434253 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 58040 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:34.436926 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:34.443454 systemd-logind[1494]: New session 18 of user core. May 14 00:04:34.449716 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 00:04:34.585624 sshd[4191]: Connection closed by 10.0.0.1 port 58040 May 14 00:04:34.586041 sshd-session[4189]: pam_unix(sshd:session): session closed for user core May 14 00:04:34.590674 systemd[1]: sshd@18-10.0.0.106:22-10.0.0.1:58040.service: Deactivated successfully. May 14 00:04:34.593091 systemd[1]: session-18.scope: Deactivated successfully. May 14 00:04:34.594004 systemd-logind[1494]: Session 18 logged out. Waiting for processes to exit. May 14 00:04:34.595136 systemd-logind[1494]: Removed session 18. May 14 00:04:35.455486 kubelet[2682]: E0514 00:04:35.455371 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:39.602221 systemd[1]: Started sshd@19-10.0.0.106:22-10.0.0.1:38546.service - OpenSSH per-connection server daemon (10.0.0.1:38546). May 14 00:04:39.662552 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 38546 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:39.664643 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:39.670361 systemd-logind[1494]: New session 19 of user core. May 14 00:04:39.688601 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 14 00:04:39.814088 sshd[4208]: Connection closed by 10.0.0.1 port 38546 May 14 00:04:39.814476 sshd-session[4206]: pam_unix(sshd:session): session closed for user core May 14 00:04:39.818668 systemd[1]: sshd@19-10.0.0.106:22-10.0.0.1:38546.service: Deactivated successfully. May 14 00:04:39.821014 systemd[1]: session-19.scope: Deactivated successfully. May 14 00:04:39.821933 systemd-logind[1494]: Session 19 logged out. Waiting for processes to exit. May 14 00:04:39.823230 systemd-logind[1494]: Removed session 19. May 14 00:04:43.455115 kubelet[2682]: E0514 00:04:43.455017 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:44.829581 systemd[1]: Started sshd@20-10.0.0.106:22-10.0.0.1:38548.service - OpenSSH per-connection server daemon (10.0.0.1:38548). May 14 00:04:44.891429 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 38548 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:44.893763 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:44.900992 systemd-logind[1494]: New session 20 of user core. May 14 00:04:44.908696 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 00:04:45.053435 sshd[4223]: Connection closed by 10.0.0.1 port 38548 May 14 00:04:45.054854 sshd-session[4221]: pam_unix(sshd:session): session closed for user core May 14 00:04:45.066953 systemd[1]: sshd@20-10.0.0.106:22-10.0.0.1:38548.service: Deactivated successfully. May 14 00:04:45.069581 systemd[1]: session-20.scope: Deactivated successfully. May 14 00:04:45.073722 systemd-logind[1494]: Session 20 logged out. Waiting for processes to exit. May 14 00:04:45.079792 systemd[1]: Started sshd@21-10.0.0.106:22-10.0.0.1:38550.service - OpenSSH per-connection server daemon (10.0.0.1:38550). 
May 14 00:04:45.080815 systemd-logind[1494]: Removed session 20. May 14 00:04:45.140201 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 38550 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:45.142711 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:45.151732 systemd-logind[1494]: New session 21 of user core. May 14 00:04:45.161650 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 00:04:45.455245 kubelet[2682]: E0514 00:04:45.455199 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:46.122649 sshd[4238]: Connection closed by 10.0.0.1 port 38550 May 14 00:04:46.123381 sshd-session[4235]: pam_unix(sshd:session): session closed for user core May 14 00:04:46.135889 systemd[1]: sshd@21-10.0.0.106:22-10.0.0.1:38550.service: Deactivated successfully. May 14 00:04:46.138387 systemd[1]: session-21.scope: Deactivated successfully. May 14 00:04:46.141132 systemd-logind[1494]: Session 21 logged out. Waiting for processes to exit. May 14 00:04:46.143224 systemd[1]: Started sshd@22-10.0.0.106:22-10.0.0.1:38564.service - OpenSSH per-connection server daemon (10.0.0.1:38564). May 14 00:04:46.144661 systemd-logind[1494]: Removed session 21. May 14 00:04:46.199297 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 38564 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:46.201219 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:46.206361 systemd-logind[1494]: New session 22 of user core. May 14 00:04:46.213421 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 14 00:04:47.394963 sshd[4252]: Connection closed by 10.0.0.1 port 38564 May 14 00:04:47.401820 sshd-session[4249]: pam_unix(sshd:session): session closed for user core May 14 00:04:47.409657 systemd[1]: sshd@22-10.0.0.106:22-10.0.0.1:38564.service: Deactivated successfully. May 14 00:04:47.412793 systemd[1]: session-22.scope: Deactivated successfully. May 14 00:04:47.415046 systemd-logind[1494]: Session 22 logged out. Waiting for processes to exit. May 14 00:04:47.422115 systemd[1]: Started sshd@23-10.0.0.106:22-10.0.0.1:38574.service - OpenSSH per-connection server daemon (10.0.0.1:38574). May 14 00:04:47.423439 systemd-logind[1494]: Removed session 22. May 14 00:04:47.522837 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 38574 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:47.525098 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:47.531840 systemd-logind[1494]: New session 23 of user core. May 14 00:04:47.541648 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 00:04:48.798359 sshd[4289]: Connection closed by 10.0.0.1 port 38574 May 14 00:04:48.800036 sshd-session[4284]: pam_unix(sshd:session): session closed for user core May 14 00:04:48.850559 systemd[1]: sshd@23-10.0.0.106:22-10.0.0.1:38574.service: Deactivated successfully. May 14 00:04:48.857268 systemd[1]: session-23.scope: Deactivated successfully. May 14 00:04:48.867353 systemd-logind[1494]: Session 23 logged out. Waiting for processes to exit. May 14 00:04:48.888764 systemd[1]: Started sshd@24-10.0.0.106:22-10.0.0.1:51042.service - OpenSSH per-connection server daemon (10.0.0.1:51042). May 14 00:04:48.898329 systemd-logind[1494]: Removed session 23. 
May 14 00:04:49.037628 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 51042 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:49.040411 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:49.069339 systemd-logind[1494]: New session 24 of user core. May 14 00:04:49.086238 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 00:04:49.402663 sshd[4305]: Connection closed by 10.0.0.1 port 51042 May 14 00:04:49.403384 sshd-session[4302]: pam_unix(sshd:session): session closed for user core May 14 00:04:49.421599 systemd[1]: sshd@24-10.0.0.106:22-10.0.0.1:51042.service: Deactivated successfully. May 14 00:04:49.429702 systemd[1]: session-24.scope: Deactivated successfully. May 14 00:04:49.450783 systemd-logind[1494]: Session 24 logged out. Waiting for processes to exit. May 14 00:04:49.459118 kubelet[2682]: E0514 00:04:49.457997 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:04:49.459868 systemd-logind[1494]: Removed session 24. May 14 00:04:54.420625 systemd[1]: Started sshd@25-10.0.0.106:22-10.0.0.1:51048.service - OpenSSH per-connection server daemon (10.0.0.1:51048). May 14 00:04:54.471692 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 51048 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:54.473805 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:54.478533 systemd-logind[1494]: New session 25 of user core. May 14 00:04:54.489448 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 14 00:04:54.637097 sshd[4320]: Connection closed by 10.0.0.1 port 51048 May 14 00:04:54.637469 sshd-session[4318]: pam_unix(sshd:session): session closed for user core May 14 00:04:54.641704 systemd[1]: sshd@25-10.0.0.106:22-10.0.0.1:51048.service: Deactivated successfully. May 14 00:04:54.643741 systemd[1]: session-25.scope: Deactivated successfully. May 14 00:04:54.644491 systemd-logind[1494]: Session 25 logged out. Waiting for processes to exit. May 14 00:04:54.645469 systemd-logind[1494]: Removed session 25. May 14 00:04:55.235123 update_engine[1495]: I20250514 00:04:55.235025 1495 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 14 00:04:55.235123 update_engine[1495]: I20250514 00:04:55.235091 1495 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 14 00:04:55.235690 update_engine[1495]: I20250514 00:04:55.235419 1495 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 14 00:04:55.236006 update_engine[1495]: I20250514 00:04:55.235972 1495 omaha_request_params.cc:62] Current group set to alpha May 14 00:04:55.236542 update_engine[1495]: I20250514 00:04:55.236469 1495 update_attempter.cc:499] Already updated boot flags. Skipping. May 14 00:04:55.236542 update_engine[1495]: I20250514 00:04:55.236519 1495 update_attempter.cc:643] Scheduling an action processor start. 
May 14 00:04:55.236542 update_engine[1495]: I20250514 00:04:55.236543 1495 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 14 00:04:55.236712 update_engine[1495]: I20250514 00:04:55.236605 1495 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 14 00:04:55.236735 update_engine[1495]: I20250514 00:04:55.236713 1495 omaha_request_action.cc:271] Posting an Omaha request to disabled May 14 00:04:55.236735 update_engine[1495]: I20250514 00:04:55.236725 1495 omaha_request_action.cc:272] Request: May 14 00:04:55.236735 update_engine[1495]: May 14 00:04:55.236735 update_engine[1495]: May 14 00:04:55.236735 update_engine[1495]: May 14 00:04:55.236735 update_engine[1495]: May 14 00:04:55.236735 update_engine[1495]: May 14 00:04:55.236735 update_engine[1495]: May 14 00:04:55.236735 update_engine[1495]: May 14 00:04:55.236735 update_engine[1495]: May 14 00:04:55.236954 update_engine[1495]: I20250514 00:04:55.236735 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:04:55.236982 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 14 00:04:55.240112 update_engine[1495]: I20250514 00:04:55.240053 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:04:55.240494 update_engine[1495]: I20250514 00:04:55.240440 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 00:04:55.269726 update_engine[1495]: E20250514 00:04:55.269620 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:04:55.269726 update_engine[1495]: I20250514 00:04:55.269742 1495 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 14 00:04:59.658574 systemd[1]: Started sshd@26-10.0.0.106:22-10.0.0.1:53490.service - OpenSSH per-connection server daemon (10.0.0.1:53490). 
May 14 00:04:59.708194 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 53490 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:04:59.710178 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:04:59.715132 systemd-logind[1494]: New session 26 of user core. May 14 00:04:59.731431 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 00:04:59.850501 sshd[4335]: Connection closed by 10.0.0.1 port 53490 May 14 00:04:59.850900 sshd-session[4333]: pam_unix(sshd:session): session closed for user core May 14 00:04:59.856217 systemd[1]: sshd@26-10.0.0.106:22-10.0.0.1:53490.service: Deactivated successfully. May 14 00:04:59.858420 systemd[1]: session-26.scope: Deactivated successfully. May 14 00:04:59.859175 systemd-logind[1494]: Session 26 logged out. Waiting for processes to exit. May 14 00:04:59.860354 systemd-logind[1494]: Removed session 26. May 14 00:05:04.867964 systemd[1]: Started sshd@27-10.0.0.106:22-10.0.0.1:53500.service - OpenSSH per-connection server daemon (10.0.0.1:53500). May 14 00:05:04.960787 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 53500 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:05:04.961256 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:04.967122 systemd-logind[1494]: New session 27 of user core. May 14 00:05:04.972488 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 00:05:05.088006 sshd[4351]: Connection closed by 10.0.0.1 port 53500 May 14 00:05:05.088508 sshd-session[4348]: pam_unix(sshd:session): session closed for user core May 14 00:05:05.095835 systemd[1]: sshd@27-10.0.0.106:22-10.0.0.1:53500.service: Deactivated successfully. May 14 00:05:05.098781 systemd[1]: session-27.scope: Deactivated successfully. May 14 00:05:05.101374 systemd-logind[1494]: Session 27 logged out. Waiting for processes to exit. 
May 14 00:05:05.102497 systemd-logind[1494]: Removed session 27. May 14 00:05:05.234634 update_engine[1495]: I20250514 00:05:05.234474 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:05:05.235214 update_engine[1495]: I20250514 00:05:05.234778 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:05:05.235214 update_engine[1495]: I20250514 00:05:05.235070 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 00:05:05.243250 update_engine[1495]: E20250514 00:05:05.243144 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:05:05.243426 update_engine[1495]: I20250514 00:05:05.243268 1495 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 14 00:05:10.103214 systemd[1]: Started sshd@28-10.0.0.106:22-10.0.0.1:47264.service - OpenSSH per-connection server daemon (10.0.0.1:47264). May 14 00:05:10.158856 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 47264 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:05:10.165126 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:10.172913 systemd-logind[1494]: New session 28 of user core. May 14 00:05:10.186609 systemd[1]: Started session-28.scope - Session 28 of User core. May 14 00:05:10.572462 sshd[4366]: Connection closed by 10.0.0.1 port 47264 May 14 00:05:10.572830 sshd-session[4364]: pam_unix(sshd:session): session closed for user core May 14 00:05:10.576719 systemd[1]: sshd@28-10.0.0.106:22-10.0.0.1:47264.service: Deactivated successfully. May 14 00:05:10.578919 systemd[1]: session-28.scope: Deactivated successfully. May 14 00:05:10.579699 systemd-logind[1494]: Session 28 logged out. Waiting for processes to exit. May 14 00:05:10.580755 systemd-logind[1494]: Removed session 28. 
May 14 00:05:15.239087 update_engine[1495]: I20250514 00:05:15.238989 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:05:15.239682 update_engine[1495]: I20250514 00:05:15.239262 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:05:15.239682 update_engine[1495]: I20250514 00:05:15.239569 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 00:05:15.247767 update_engine[1495]: E20250514 00:05:15.247730 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:05:15.308684 update_engine[1495]: I20250514 00:05:15.247786 1495 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 14 00:05:15.375963 systemd[1]: Started sshd@29-10.0.0.106:22-10.0.0.1:47272.service - OpenSSH per-connection server daemon (10.0.0.1:47272). May 14 00:05:15.436853 sshd[4383]: Accepted publickey for core from 10.0.0.1 port 47272 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:05:15.438540 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:15.443309 systemd-logind[1494]: New session 29 of user core. May 14 00:05:15.456526 systemd[1]: Started session-29.scope - Session 29 of User core. May 14 00:05:15.566244 sshd[4385]: Connection closed by 10.0.0.1 port 47272 May 14 00:05:15.566596 sshd-session[4383]: pam_unix(sshd:session): session closed for user core May 14 00:05:15.571506 systemd[1]: sshd@29-10.0.0.106:22-10.0.0.1:47272.service: Deactivated successfully. May 14 00:05:15.573863 systemd[1]: session-29.scope: Deactivated successfully. May 14 00:05:15.574668 systemd-logind[1494]: Session 29 logged out. Waiting for processes to exit. May 14 00:05:15.575802 systemd-logind[1494]: Removed session 29. 
May 14 00:05:18.455162 kubelet[2682]: E0514 00:05:18.455096 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:20.586172 systemd[1]: Started sshd@30-10.0.0.106:22-10.0.0.1:51892.service - OpenSSH per-connection server daemon (10.0.0.1:51892). May 14 00:05:20.626610 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 51892 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:05:20.628409 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:20.632806 systemd-logind[1494]: New session 30 of user core. May 14 00:05:20.639354 systemd[1]: Started session-30.scope - Session 30 of User core. May 14 00:05:21.331227 sshd[4402]: Connection closed by 10.0.0.1 port 51892 May 14 00:05:21.331669 sshd-session[4400]: pam_unix(sshd:session): session closed for user core May 14 00:05:21.336421 systemd[1]: sshd@30-10.0.0.106:22-10.0.0.1:51892.service: Deactivated successfully. May 14 00:05:21.338854 systemd[1]: session-30.scope: Deactivated successfully. May 14 00:05:21.339568 systemd-logind[1494]: Session 30 logged out. Waiting for processes to exit. May 14 00:05:21.340636 systemd-logind[1494]: Removed session 30. May 14 00:05:25.238468 update_engine[1495]: I20250514 00:05:25.238337 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:05:25.238996 update_engine[1495]: I20250514 00:05:25.238691 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:05:25.239027 update_engine[1495]: I20250514 00:05:25.239007 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 14 00:05:25.257967 update_engine[1495]: E20250514 00:05:25.257879 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:05:25.257967 update_engine[1495]: I20250514 00:05:25.257969 1495 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 14 00:05:25.258176 update_engine[1495]: I20250514 00:05:25.257987 1495 omaha_request_action.cc:617] Omaha request response: May 14 00:05:25.258176 update_engine[1495]: E20250514 00:05:25.258101 1495 omaha_request_action.cc:636] Omaha request network transfer failed. May 14 00:05:25.258176 update_engine[1495]: I20250514 00:05:25.258122 1495 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 14 00:05:25.258176 update_engine[1495]: I20250514 00:05:25.258131 1495 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 14 00:05:25.258176 update_engine[1495]: I20250514 00:05:25.258137 1495 update_attempter.cc:306] Processing Done. May 14 00:05:25.258176 update_engine[1495]: E20250514 00:05:25.258152 1495 update_attempter.cc:619] Update failed. May 14 00:05:25.258176 update_engine[1495]: I20250514 00:05:25.258158 1495 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 14 00:05:25.258176 update_engine[1495]: I20250514 00:05:25.258165 1495 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 14 00:05:25.258176 update_engine[1495]: I20250514 00:05:25.258172 1495 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 14 00:05:25.258420 update_engine[1495]: I20250514 00:05:25.258239 1495 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 14 00:05:25.258420 update_engine[1495]: I20250514 00:05:25.258263 1495 omaha_request_action.cc:271] Posting an Omaha request to disabled May 14 00:05:25.258420 update_engine[1495]: I20250514 00:05:25.258270 1495 omaha_request_action.cc:272] Request: May 14 00:05:25.258420 update_engine[1495]: May 14 00:05:25.258420 update_engine[1495]: May 14 00:05:25.258420 update_engine[1495]: May 14 00:05:25.258420 update_engine[1495]: May 14 00:05:25.258420 update_engine[1495]: May 14 00:05:25.258420 update_engine[1495]: May 14 00:05:25.258420 update_engine[1495]: I20250514 00:05:25.258301 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 00:05:25.258621 update_engine[1495]: I20250514 00:05:25.258491 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 00:05:25.258649 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 14 00:05:25.259048 update_engine[1495]: I20250514 00:05:25.258689 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 14 00:05:25.267737 update_engine[1495]: E20250514 00:05:25.267679 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 00:05:25.267790 update_engine[1495]: I20250514 00:05:25.267761 1495 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 14 00:05:25.267790 update_engine[1495]: I20250514 00:05:25.267771 1495 omaha_request_action.cc:617] Omaha request response: May 14 00:05:25.267790 update_engine[1495]: I20250514 00:05:25.267779 1495 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 14 00:05:25.267790 update_engine[1495]: I20250514 00:05:25.267786 1495 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 14 00:05:25.267894 update_engine[1495]: I20250514 00:05:25.267793 1495 update_attempter.cc:306] Processing Done. May 14 00:05:25.267894 update_engine[1495]: I20250514 00:05:25.267802 1495 update_attempter.cc:310] Error event sent. May 14 00:05:25.267894 update_engine[1495]: I20250514 00:05:25.267812 1495 update_check_scheduler.cc:74] Next update check in 40m18s May 14 00:05:25.268252 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 14 00:05:25.454937 kubelet[2682]: E0514 00:05:25.454870 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:25.454937 kubelet[2682]: E0514 00:05:25.454953 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:26.138531 systemd[1]: Started sshd@31-10.0.0.106:22-10.0.0.1:51906.service - OpenSSH per-connection server daemon (10.0.0.1:51906). 
May 14 00:05:26.195731 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 51906 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:05:26.198040 sshd-session[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:26.205178 systemd-logind[1494]: New session 31 of user core. May 14 00:05:26.210450 systemd[1]: Started session-31.scope - Session 31 of User core. May 14 00:05:26.343515 sshd[4420]: Connection closed by 10.0.0.1 port 51906 May 14 00:05:26.344108 sshd-session[4418]: pam_unix(sshd:session): session closed for user core May 14 00:05:26.350148 systemd[1]: sshd@31-10.0.0.106:22-10.0.0.1:51906.service: Deactivated successfully. May 14 00:05:26.353318 systemd[1]: session-31.scope: Deactivated successfully. May 14 00:05:26.354516 systemd-logind[1494]: Session 31 logged out. Waiting for processes to exit. May 14 00:05:26.355779 systemd-logind[1494]: Removed session 31. May 14 00:05:30.455127 kubelet[2682]: E0514 00:05:30.455062 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:05:31.368847 systemd[1]: Started sshd@32-10.0.0.106:22-10.0.0.1:39206.service - OpenSSH per-connection server daemon (10.0.0.1:39206). May 14 00:05:31.424302 sshd[4434]: Accepted publickey for core from 10.0.0.1 port 39206 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:05:31.426320 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:31.432323 systemd-logind[1494]: New session 32 of user core. May 14 00:05:31.441647 systemd[1]: Started session-32.scope - Session 32 of User core. 
May 14 00:05:31.566143 sshd[4436]: Connection closed by 10.0.0.1 port 39206 May 14 00:05:31.566553 sshd-session[4434]: pam_unix(sshd:session): session closed for user core May 14 00:05:31.571554 systemd[1]: sshd@32-10.0.0.106:22-10.0.0.1:39206.service: Deactivated successfully. May 14 00:05:31.573591 systemd[1]: session-32.scope: Deactivated successfully. May 14 00:05:31.574562 systemd-logind[1494]: Session 32 logged out. Waiting for processes to exit. May 14 00:05:31.575588 systemd-logind[1494]: Removed session 32. May 14 00:05:36.585543 systemd[1]: Started sshd@33-10.0.0.106:22-10.0.0.1:39220.service - OpenSSH per-connection server daemon (10.0.0.1:39220). May 14 00:05:36.652797 sshd[4449]: Accepted publickey for core from 10.0.0.1 port 39220 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:05:36.654587 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:36.659582 systemd-logind[1494]: New session 33 of user core. May 14 00:05:36.675531 systemd[1]: Started session-33.scope - Session 33 of User core. May 14 00:05:36.855264 sshd[4451]: Connection closed by 10.0.0.1 port 39220 May 14 00:05:36.855563 sshd-session[4449]: pam_unix(sshd:session): session closed for user core May 14 00:05:36.860471 systemd[1]: sshd@33-10.0.0.106:22-10.0.0.1:39220.service: Deactivated successfully. May 14 00:05:36.863330 systemd[1]: session-33.scope: Deactivated successfully. May 14 00:05:36.864146 systemd-logind[1494]: Session 33 logged out. Waiting for processes to exit. May 14 00:05:36.865214 systemd-logind[1494]: Removed session 33. May 14 00:05:41.877443 systemd[1]: Started sshd@34-10.0.0.106:22-10.0.0.1:49090.service - OpenSSH per-connection server daemon (10.0.0.1:49090). 
May 14 00:05:41.924815 sshd[4464]: Accepted publickey for core from 10.0.0.1 port 49090 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:05:41.926704 sshd-session[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:41.931686 systemd-logind[1494]: New session 34 of user core. May 14 00:05:41.942501 systemd[1]: Started session-34.scope - Session 34 of User core. May 14 00:05:42.054686 sshd[4466]: Connection closed by 10.0.0.1 port 49090 May 14 00:05:42.055014 sshd-session[4464]: pam_unix(sshd:session): session closed for user core May 14 00:05:42.067762 systemd[1]: sshd@34-10.0.0.106:22-10.0.0.1:49090.service: Deactivated successfully. May 14 00:05:42.069808 systemd[1]: session-34.scope: Deactivated successfully. May 14 00:05:42.071371 systemd-logind[1494]: Session 34 logged out. Waiting for processes to exit. May 14 00:05:42.072810 systemd[1]: Started sshd@35-10.0.0.106:22-10.0.0.1:49100.service - OpenSSH per-connection server daemon (10.0.0.1:49100). May 14 00:05:42.073634 systemd-logind[1494]: Removed session 34. May 14 00:05:42.123992 sshd[4479]: Accepted publickey for core from 10.0.0.1 port 49100 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:05:42.125760 sshd-session[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:42.130674 systemd-logind[1494]: New session 35 of user core. May 14 00:05:42.138430 systemd[1]: Started session-35.scope - Session 35 of User core. 
May 14 00:05:45.328307 containerd[1502]: time="2025-05-14T00:05:45.328236147Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" id:\"effe1acd8ca338a359f20bc9fa36603ed1455cf4dd759f76734de01ee18d77c9\" pid:4501 exited_at:{seconds:1747181145 nanos:327790367}" May 14 00:05:45.330364 containerd[1502]: time="2025-05-14T00:05:45.330270920Z" level=info msg="StopContainer for \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" with timeout 2 (s)" May 14 00:05:45.330643 containerd[1502]: time="2025-05-14T00:05:45.330624596Z" level=info msg="Stop container \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" with signal terminated" May 14 00:05:45.339698 systemd-networkd[1424]: lxc_health: Link DOWN May 14 00:05:45.339709 systemd-networkd[1424]: lxc_health: Lost carrier May 14 00:05:45.340804 containerd[1502]: time="2025-05-14T00:05:45.340730551Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:05:45.359903 systemd[1]: cri-containerd-a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7.scope: Deactivated successfully. May 14 00:05:45.360302 systemd[1]: cri-containerd-a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7.scope: Consumed 8.631s CPU time, 125.9M memory peak, 220K read from disk, 13.3M written to disk. 
May 14 00:05:45.362053 containerd[1502]: time="2025-05-14T00:05:45.361936683Z" level=info msg="received exit event container_id:\"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" id:\"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" pid:3309 exited_at:{seconds:1747181145 nanos:361720536}" May 14 00:05:45.362053 containerd[1502]: time="2025-05-14T00:05:45.362025430Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" id:\"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" pid:3309 exited_at:{seconds:1747181145 nanos:361720536}" May 14 00:05:45.385990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7-rootfs.mount: Deactivated successfully. May 14 00:05:45.557093 containerd[1502]: time="2025-05-14T00:05:45.557016583Z" level=info msg="StopContainer for \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\" with timeout 30 (s)" May 14 00:05:45.557636 containerd[1502]: time="2025-05-14T00:05:45.557591815Z" level=info msg="Stop container \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\" with signal terminated" May 14 00:05:45.574423 systemd[1]: cri-containerd-232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6.scope: Deactivated successfully. 
May 14 00:05:45.575965 containerd[1502]: time="2025-05-14T00:05:45.575893918Z" level=info msg="received exit event container_id:\"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\" id:\"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\" pid:3401 exited_at:{seconds:1747181145 nanos:575564378}" May 14 00:05:45.576156 containerd[1502]: time="2025-05-14T00:05:45.576025727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\" id:\"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\" pid:3401 exited_at:{seconds:1747181145 nanos:575564378}" May 14 00:05:45.604110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6-rootfs.mount: Deactivated successfully. May 14 00:05:45.739789 sshd[4482]: Connection closed by 10.0.0.1 port 49100 May 14 00:05:45.740271 sshd-session[4479]: pam_unix(sshd:session): session closed for user core May 14 00:05:45.754539 systemd[1]: sshd@35-10.0.0.106:22-10.0.0.1:49100.service: Deactivated successfully. May 14 00:05:45.756733 systemd[1]: session-35.scope: Deactivated successfully. May 14 00:05:45.757658 systemd-logind[1494]: Session 35 logged out. Waiting for processes to exit. May 14 00:05:45.761465 systemd[1]: Started sshd@36-10.0.0.106:22-10.0.0.1:49110.service - OpenSSH per-connection server daemon (10.0.0.1:49110). May 14 00:05:45.762078 systemd-logind[1494]: Removed session 35. 
May 14 00:05:45.871013 containerd[1502]: time="2025-05-14T00:05:45.870866388Z" level=info msg="StopContainer for \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" returns successfully" May 14 00:05:45.871539 containerd[1502]: time="2025-05-14T00:05:45.871496535Z" level=info msg="StopPodSandbox for \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\"" May 14 00:05:45.871697 containerd[1502]: time="2025-05-14T00:05:45.871557089Z" level=info msg="Container to stop \"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:05:45.871697 containerd[1502]: time="2025-05-14T00:05:45.871570053Z" level=info msg="Container to stop \"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:05:45.871697 containerd[1502]: time="2025-05-14T00:05:45.871580653Z" level=info msg="Container to stop \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:05:45.871697 containerd[1502]: time="2025-05-14T00:05:45.871591083Z" level=info msg="Container to stop \"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:05:45.871697 containerd[1502]: time="2025-05-14T00:05:45.871599609Z" level=info msg="Container to stop \"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:05:45.880262 systemd[1]: cri-containerd-a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5.scope: Deactivated successfully. 
May 14 00:05:45.881591 containerd[1502]: time="2025-05-14T00:05:45.881250978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" id:\"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" pid:2836 exit_status:137 exited_at:{seconds:1747181145 nanos:880656498}" May 14 00:05:45.904904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5-rootfs.mount: Deactivated successfully. May 14 00:05:45.957238 containerd[1502]: time="2025-05-14T00:05:45.957186234Z" level=info msg="StopContainer for \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\" returns successfully" May 14 00:05:45.958710 containerd[1502]: time="2025-05-14T00:05:45.958673585Z" level=info msg="StopPodSandbox for \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\"" May 14 00:05:45.959076 containerd[1502]: time="2025-05-14T00:05:45.958990282Z" level=info msg="Container to stop \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:05:45.968875 systemd[1]: cri-containerd-e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818.scope: Deactivated successfully. May 14 00:05:45.985832 containerd[1502]: time="2025-05-14T00:05:45.985575697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" id:\"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" pid:2896 exit_status:137 exited_at:{seconds:1747181145 nanos:970510270}" May 14 00:05:45.995195 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5-shm.mount: Deactivated successfully. 
May 14 00:05:46.009638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818-rootfs.mount: Deactivated successfully. May 14 00:05:46.034784 containerd[1502]: time="2025-05-14T00:05:46.034714680Z" level=info msg="TearDown network for sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" successfully" May 14 00:05:46.034784 containerd[1502]: time="2025-05-14T00:05:46.034766087Z" level=info msg="StopPodSandbox for \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" returns successfully" May 14 00:05:46.052063 containerd[1502]: time="2025-05-14T00:05:46.039505654Z" level=info msg="received exit event sandbox_id:\"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" exit_status:137 exited_at:{seconds:1747181145 nanos:880656498}" May 14 00:05:46.052063 containerd[1502]: time="2025-05-14T00:05:46.048856997Z" level=info msg="shim disconnected" id=a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5 namespace=k8s.io May 14 00:05:46.052063 containerd[1502]: time="2025-05-14T00:05:46.048915858Z" level=warning msg="cleaning up after shim disconnected" id=a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5 namespace=k8s.io May 14 00:05:46.052187 sshd[4553]: Accepted publickey for core from 10.0.0.1 port 49110 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:05:46.053052 sshd-session[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:46.054099 containerd[1502]: time="2025-05-14T00:05:46.048925706Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:05:46.108319 kubelet[2682]: I0514 00:05:46.108184 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-bpf-maps\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: 
\"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.108319 kubelet[2682]: I0514 00:05:46.108253 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-host-proc-sys-net\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.108319 kubelet[2682]: I0514 00:05:46.108292 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-lib-modules\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.108319 kubelet[2682]: I0514 00:05:46.108318 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-host-proc-sys-kernel\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.109141 kubelet[2682]: I0514 00:05:46.108347 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dt8p\" (UniqueName: \"kubernetes.io/projected/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-kube-api-access-9dt8p\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.109141 kubelet[2682]: I0514 00:05:46.108366 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-xtables-lock\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.109141 kubelet[2682]: I0514 00:05:46.108387 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cilium-config-path\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.109141 kubelet[2682]: I0514 00:05:46.108406 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-etc-cni-netd\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.109141 kubelet[2682]: I0514 00:05:46.108427 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-clustermesh-secrets\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.109141 kubelet[2682]: I0514 00:05:46.108461 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cni-path\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.109362 kubelet[2682]: I0514 00:05:46.108480 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-hostproc\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.109362 kubelet[2682]: I0514 00:05:46.108500 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cilium-run\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.109362 kubelet[2682]: I0514 00:05:46.108518 2682 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cilium-cgroup\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.109362 kubelet[2682]: I0514 00:05:46.108546 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-hubble-tls\") pod \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\" (UID: \"7f79bf59-b109-4ba9-82c3-b1542f5f6a02\") " May 14 00:05:46.112573 kubelet[2682]: I0514 00:05:46.109534 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cni-path" (OuterVolumeSpecName: "cni-path") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:05:46.112573 kubelet[2682]: I0514 00:05:46.109555 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:05:46.112573 kubelet[2682]: I0514 00:05:46.109588 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:05:46.112573 kubelet[2682]: I0514 00:05:46.109610 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:05:46.112573 kubelet[2682]: I0514 00:05:46.109630 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:05:46.112765 kubelet[2682]: I0514 00:05:46.109648 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:05:46.112765 kubelet[2682]: I0514 00:05:46.109666 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-hostproc" (OuterVolumeSpecName: "hostproc") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:05:46.112765 kubelet[2682]: I0514 00:05:46.112511 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:05:46.112765 kubelet[2682]: I0514 00:05:46.112572 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:05:46.112765 kubelet[2682]: I0514 00:05:46.112590 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 00:05:46.113308 kubelet[2682]: I0514 00:05:46.113172 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 00:05:46.113479 kubelet[2682]: I0514 00:05:46.113454 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-kube-api-access-9dt8p" (OuterVolumeSpecName: "kube-api-access-9dt8p") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "kube-api-access-9dt8p". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 00:05:46.114514 kubelet[2682]: I0514 00:05:46.114398 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 14 00:05:46.115456 kubelet[2682]: I0514 00:05:46.115333 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7f79bf59-b109-4ba9-82c3-b1542f5f6a02" (UID: "7f79bf59-b109-4ba9-82c3-b1542f5f6a02"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 00:05:46.117801 systemd-logind[1494]: New session 36 of user core. May 14 00:05:46.128525 systemd[1]: Started session-36.scope - Session 36 of User core. 
May 14 00:05:46.208815 kubelet[2682]: I0514 00:05:46.208769 2682 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.208815 kubelet[2682]: I0514 00:05:46.208801 2682 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9dt8p\" (UniqueName: \"kubernetes.io/projected/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-kube-api-access-9dt8p\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.208815 kubelet[2682]: I0514 00:05:46.208812 2682 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.208815 kubelet[2682]: I0514 00:05:46.208823 2682 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.208815 kubelet[2682]: I0514 00:05:46.208835 2682 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.209137 kubelet[2682]: I0514 00:05:46.208844 2682 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.209137 kubelet[2682]: I0514 00:05:46.208854 2682 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.209137 kubelet[2682]: I0514 00:05:46.208865 2682 reconciler_common.go:299] "Volume 
detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.209137 kubelet[2682]: I0514 00:05:46.208875 2682 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.209137 kubelet[2682]: I0514 00:05:46.208884 2682 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.209137 kubelet[2682]: I0514 00:05:46.208893 2682 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.209137 kubelet[2682]: I0514 00:05:46.208903 2682 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.209137 kubelet[2682]: I0514 00:05:46.208912 2682 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.209418 kubelet[2682]: I0514 00:05:46.208923 2682 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f79bf59-b109-4ba9-82c3-b1542f5f6a02-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.250978 containerd[1502]: time="2025-05-14T00:05:46.250864295Z" level=info msg="shim disconnected" id=e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818 namespace=k8s.io May 14 00:05:46.250978 
containerd[1502]: time="2025-05-14T00:05:46.250925581Z" level=error msg="Failed to handle event container_id:\"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" id:\"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" pid:2896 exit_status:137 exited_at:{seconds:1747181145 nanos:970510270} for e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818" error="failed to handle container TaskExit event: failed to stop sandbox: ttrpc: closed" May 14 00:05:46.250978 containerd[1502]: time="2025-05-14T00:05:46.250928908Z" level=warning msg="cleaning up after shim disconnected" id=e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818 namespace=k8s.io May 14 00:05:46.251218 containerd[1502]: time="2025-05-14T00:05:46.250988149Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:05:46.274324 containerd[1502]: time="2025-05-14T00:05:46.273848106Z" level=info msg="received exit event sandbox_id:\"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" exit_status:137 exited_at:{seconds:1747181145 nanos:970510270}" May 14 00:05:46.274324 containerd[1502]: time="2025-05-14T00:05:46.274208694Z" level=info msg="TearDown network for sandbox \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" successfully" May 14 00:05:46.274324 containerd[1502]: time="2025-05-14T00:05:46.274249592Z" level=info msg="StopPodSandbox for \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" returns successfully" May 14 00:05:46.385960 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818-shm.mount: Deactivated successfully. May 14 00:05:46.386103 systemd[1]: var-lib-kubelet-pods-7f79bf59\x2db109\x2d4ba9\x2d82c3\x2db1542f5f6a02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9dt8p.mount: Deactivated successfully. 
May 14 00:05:46.386206 systemd[1]: var-lib-kubelet-pods-7f79bf59\x2db109\x2d4ba9\x2d82c3\x2db1542f5f6a02-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 00:05:46.386327 systemd[1]: var-lib-kubelet-pods-7f79bf59\x2db109\x2d4ba9\x2d82c3\x2db1542f5f6a02-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:05:46.410109 kubelet[2682]: I0514 00:05:46.410026 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhptz\" (UniqueName: \"kubernetes.io/projected/acd22804-58df-4cfb-a525-80c532017468-kube-api-access-xhptz\") pod \"acd22804-58df-4cfb-a525-80c532017468\" (UID: \"acd22804-58df-4cfb-a525-80c532017468\") " May 14 00:05:46.410109 kubelet[2682]: I0514 00:05:46.410087 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acd22804-58df-4cfb-a525-80c532017468-cilium-config-path\") pod \"acd22804-58df-4cfb-a525-80c532017468\" (UID: \"acd22804-58df-4cfb-a525-80c532017468\") " May 14 00:05:46.414317 kubelet[2682]: I0514 00:05:46.414238 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd22804-58df-4cfb-a525-80c532017468-kube-api-access-xhptz" (OuterVolumeSpecName: "kube-api-access-xhptz") pod "acd22804-58df-4cfb-a525-80c532017468" (UID: "acd22804-58df-4cfb-a525-80c532017468"). InnerVolumeSpecName "kube-api-access-xhptz". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 00:05:46.415752 systemd[1]: var-lib-kubelet-pods-acd22804\x2d58df\x2d4cfb\x2da525\x2d80c532017468-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxhptz.mount: Deactivated successfully. 
May 14 00:05:46.417186 kubelet[2682]: I0514 00:05:46.417130 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd22804-58df-4cfb-a525-80c532017468-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "acd22804-58df-4cfb-a525-80c532017468" (UID: "acd22804-58df-4cfb-a525-80c532017468"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 00:05:46.479926 kubelet[2682]: I0514 00:05:46.479881 2682 scope.go:117] "RemoveContainer" containerID="a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7" May 14 00:05:46.485168 containerd[1502]: time="2025-05-14T00:05:46.484937080Z" level=info msg="RemoveContainer for \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\"" May 14 00:05:46.487703 systemd[1]: Removed slice kubepods-burstable-pod7f79bf59_b109_4ba9_82c3_b1542f5f6a02.slice - libcontainer container kubepods-burstable-pod7f79bf59_b109_4ba9_82c3_b1542f5f6a02.slice. May 14 00:05:46.487927 systemd[1]: kubepods-burstable-pod7f79bf59_b109_4ba9_82c3_b1542f5f6a02.slice: Consumed 8.777s CPU time, 127.5M memory peak, 1M read from disk, 13.3M written to disk. May 14 00:05:46.491794 systemd[1]: Removed slice kubepods-besteffort-podacd22804_58df_4cfb_a525_80c532017468.slice - libcontainer container kubepods-besteffort-podacd22804_58df_4cfb_a525_80c532017468.slice. 
May 14 00:05:46.511584 kubelet[2682]: I0514 00:05:46.511302 2682 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xhptz\" (UniqueName: \"kubernetes.io/projected/acd22804-58df-4cfb-a525-80c532017468-kube-api-access-xhptz\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.511584 kubelet[2682]: I0514 00:05:46.511351 2682 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acd22804-58df-4cfb-a525-80c532017468-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:05:46.544536 kubelet[2682]: E0514 00:05:46.544452 2682 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:05:46.702547 containerd[1502]: time="2025-05-14T00:05:46.702370443Z" level=info msg="RemoveContainer for \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" returns successfully" May 14 00:05:46.703098 kubelet[2682]: I0514 00:05:46.702909 2682 scope.go:117] "RemoveContainer" containerID="c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281" May 14 00:05:46.705088 containerd[1502]: time="2025-05-14T00:05:46.705052234Z" level=info msg="RemoveContainer for \"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\"" May 14 00:05:47.055445 containerd[1502]: time="2025-05-14T00:05:47.055107669Z" level=info msg="RemoveContainer for \"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\" returns successfully" May 14 00:05:47.055596 kubelet[2682]: I0514 00:05:47.055524 2682 scope.go:117] "RemoveContainer" containerID="4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32" May 14 00:05:47.059091 containerd[1502]: time="2025-05-14T00:05:47.059023184Z" level=info msg="RemoveContainer for \"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\"" May 14 00:05:47.389550 containerd[1502]: 
time="2025-05-14T00:05:47.389377529Z" level=info msg="RemoveContainer for \"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\" returns successfully" May 14 00:05:47.389886 kubelet[2682]: I0514 00:05:47.389841 2682 scope.go:117] "RemoveContainer" containerID="451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4" May 14 00:05:47.392268 containerd[1502]: time="2025-05-14T00:05:47.392216406Z" level=info msg="RemoveContainer for \"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\"" May 14 00:05:47.579345 containerd[1502]: time="2025-05-14T00:05:47.579244224Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" id:\"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" pid:2896 exit_status:137 exited_at:{seconds:1747181145 nanos:970510270}" May 14 00:05:47.625417 containerd[1502]: time="2025-05-14T00:05:47.625332118Z" level=info msg="RemoveContainer for \"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\" returns successfully" May 14 00:05:47.625687 kubelet[2682]: I0514 00:05:47.625658 2682 scope.go:117] "RemoveContainer" containerID="1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452" May 14 00:05:47.627144 containerd[1502]: time="2025-05-14T00:05:47.627119354Z" level=info msg="RemoveContainer for \"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\"" May 14 00:05:47.870995 containerd[1502]: time="2025-05-14T00:05:47.870922766Z" level=info msg="RemoveContainer for \"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\" returns successfully" May 14 00:05:47.871411 kubelet[2682]: I0514 00:05:47.871341 2682 scope.go:117] "RemoveContainer" containerID="a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7" May 14 00:05:47.871739 containerd[1502]: time="2025-05-14T00:05:47.871688008Z" level=error msg="ContainerStatus for 
\"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\": not found" May 14 00:05:47.872002 kubelet[2682]: E0514 00:05:47.871954 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\": not found" containerID="a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7" May 14 00:05:47.872089 kubelet[2682]: I0514 00:05:47.871999 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7"} err="failed to get container status \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"a450cde6bf21691b5eb86c3ba27e1a554e8bfd525e3274f251329ef5e00c7cb7\": not found" May 14 00:05:47.872133 kubelet[2682]: I0514 00:05:47.872090 2682 scope.go:117] "RemoveContainer" containerID="c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281" May 14 00:05:47.872523 containerd[1502]: time="2025-05-14T00:05:47.872464039Z" level=error msg="ContainerStatus for \"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\": not found" May 14 00:05:47.872720 kubelet[2682]: E0514 00:05:47.872657 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\": not found" 
containerID="c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281" May 14 00:05:47.872720 kubelet[2682]: I0514 00:05:47.872684 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281"} err="failed to get container status \"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\": rpc error: code = NotFound desc = an error occurred when try to find container \"c406c1dc7c01d102e45727f86842aec60542c9e71699c088acc0b53397a20281\": not found" May 14 00:05:47.872720 kubelet[2682]: I0514 00:05:47.872708 2682 scope.go:117] "RemoveContainer" containerID="4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32" May 14 00:05:47.872922 containerd[1502]: time="2025-05-14T00:05:47.872880754Z" level=error msg="ContainerStatus for \"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\": not found" May 14 00:05:47.873100 kubelet[2682]: E0514 00:05:47.873066 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\": not found" containerID="4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32" May 14 00:05:47.873158 kubelet[2682]: I0514 00:05:47.873114 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32"} err="failed to get container status \"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\": rpc error: code = NotFound desc = an error occurred when try to find container \"4032d9c05c3311311aa060f33f7034efd4a684fe010dd49f5af3378e4a285c32\": not found" May 14 
00:05:47.873158 kubelet[2682]: I0514 00:05:47.873149 2682 scope.go:117] "RemoveContainer" containerID="451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4" May 14 00:05:47.873476 containerd[1502]: time="2025-05-14T00:05:47.873446370Z" level=error msg="ContainerStatus for \"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\": not found" May 14 00:05:47.873628 kubelet[2682]: E0514 00:05:47.873600 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\": not found" containerID="451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4" May 14 00:05:47.873705 kubelet[2682]: I0514 00:05:47.873630 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4"} err="failed to get container status \"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\": rpc error: code = NotFound desc = an error occurred when try to find container \"451043fbc3653757a11e96c5ef0506b2076883dba7c5c07678ccf4eff2cf3ef4\": not found" May 14 00:05:47.873705 kubelet[2682]: I0514 00:05:47.873646 2682 scope.go:117] "RemoveContainer" containerID="1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452" May 14 00:05:47.874142 containerd[1502]: time="2025-05-14T00:05:47.874096254Z" level=error msg="ContainerStatus for \"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\": not found" May 14 00:05:47.874306 kubelet[2682]: E0514 00:05:47.874252 2682 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\": not found" containerID="1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452" May 14 00:05:47.874347 kubelet[2682]: I0514 00:05:47.874303 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452"} err="failed to get container status \"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f35c4298f0dc8128df2cc987ec5a74249a707595960ff146fff41639bc6b452\": not found" May 14 00:05:47.874347 kubelet[2682]: I0514 00:05:47.874323 2682 scope.go:117] "RemoveContainer" containerID="232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6" May 14 00:05:47.878109 containerd[1502]: time="2025-05-14T00:05:47.878072413Z" level=info msg="RemoveContainer for \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\"" May 14 00:05:48.126720 containerd[1502]: time="2025-05-14T00:05:48.126525757Z" level=info msg="RemoveContainer for \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\" returns successfully" May 14 00:05:48.126873 kubelet[2682]: I0514 00:05:48.126811 2682 scope.go:117] "RemoveContainer" containerID="232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6" May 14 00:05:48.127309 containerd[1502]: time="2025-05-14T00:05:48.127207331Z" level=error msg="ContainerStatus for \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\": not found" May 14 00:05:48.127492 kubelet[2682]: E0514 00:05:48.127462 2682 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\": not found" containerID="232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6" May 14 00:05:48.127560 kubelet[2682]: I0514 00:05:48.127503 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6"} err="failed to get container status \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\": rpc error: code = NotFound desc = an error occurred when try to find container \"232e20b182a7d353d6da76d0d54a650dd6533ff0a84f8dd1f8e607ab33bd9fe6\": not found" May 14 00:05:49.456991 kubelet[2682]: I0514 00:05:49.456942 2682 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f79bf59-b109-4ba9-82c3-b1542f5f6a02" path="/var/lib/kubelet/pods/7f79bf59-b109-4ba9-82c3-b1542f5f6a02/volumes" May 14 00:05:49.457864 kubelet[2682]: I0514 00:05:49.457831 2682 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd22804-58df-4cfb-a525-80c532017468" path="/var/lib/kubelet/pods/acd22804-58df-4cfb-a525-80c532017468/volumes" May 14 00:05:49.980500 sshd[4610]: Connection closed by 10.0.0.1 port 49110 May 14 00:05:49.980864 sshd-session[4553]: pam_unix(sshd:session): session closed for user core May 14 00:05:50.000453 systemd[1]: sshd@36-10.0.0.106:22-10.0.0.1:49110.service: Deactivated successfully. May 14 00:05:50.002884 systemd[1]: session-36.scope: Deactivated successfully. May 14 00:05:50.004715 systemd-logind[1494]: Session 36 logged out. Waiting for processes to exit. May 14 00:05:50.006514 systemd[1]: Started sshd@37-10.0.0.106:22-10.0.0.1:44314.service - OpenSSH per-connection server daemon (10.0.0.1:44314). May 14 00:05:50.007687 systemd-logind[1494]: Removed session 36. 
May 14 00:05:50.063259 sshd[4638]: Accepted publickey for core from 10.0.0.1 port 44314 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:05:50.065609 sshd-session[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:50.071762 systemd-logind[1494]: New session 37 of user core. May 14 00:05:50.081613 systemd[1]: Started session-37.scope - Session 37 of User core. May 14 00:05:50.135260 sshd[4641]: Connection closed by 10.0.0.1 port 44314 May 14 00:05:50.135757 sshd-session[4638]: pam_unix(sshd:session): session closed for user core May 14 00:05:50.145035 systemd[1]: sshd@37-10.0.0.106:22-10.0.0.1:44314.service: Deactivated successfully. May 14 00:05:50.147228 systemd[1]: session-37.scope: Deactivated successfully. May 14 00:05:50.149378 systemd-logind[1494]: Session 37 logged out. Waiting for processes to exit. May 14 00:05:50.151210 systemd[1]: Started sshd@38-10.0.0.106:22-10.0.0.1:44324.service - OpenSSH per-connection server daemon (10.0.0.1:44324). May 14 00:05:50.152411 systemd-logind[1494]: Removed session 37. May 14 00:05:50.205376 sshd[4647]: Accepted publickey for core from 10.0.0.1 port 44324 ssh2: RSA SHA256:UIO2GBLwcS3ioFABoZ1D2izRTMNLszi+/rE/G21mFYQ May 14 00:05:50.207050 sshd-session[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:50.212142 systemd-logind[1494]: New session 38 of user core. May 14 00:05:50.225576 systemd[1]: Started session-38.scope - Session 38 of User core. 
May 14 00:05:50.455183 kubelet[2682]: E0514 00:05:50.455072 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:05:50.618127 kubelet[2682]: I0514 00:05:50.614967 2682 memory_manager.go:355] "RemoveStaleState removing state" podUID="7f79bf59-b109-4ba9-82c3-b1542f5f6a02" containerName="cilium-agent"
May 14 00:05:50.618127 kubelet[2682]: I0514 00:05:50.614996 2682 memory_manager.go:355] "RemoveStaleState removing state" podUID="acd22804-58df-4cfb-a525-80c532017468" containerName="cilium-operator"
May 14 00:05:50.627131 systemd[1]: Created slice kubepods-burstable-pod5ef6cc6e_85d3_4450_ab68_6fa5a33b2a4e.slice - libcontainer container kubepods-burstable-pod5ef6cc6e_85d3_4450_ab68_6fa5a33b2a4e.slice.
May 14 00:05:50.738841 kubelet[2682]: I0514 00:05:50.738471 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-lib-modules\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.738841 kubelet[2682]: I0514 00:05:50.738535 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-xtables-lock\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.738841 kubelet[2682]: I0514 00:05:50.738593 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-cilium-ipsec-secrets\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.738841 kubelet[2682]: I0514 00:05:50.738615 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-etc-cni-netd\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.738841 kubelet[2682]: I0514 00:05:50.738672 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-hostproc\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.738841 kubelet[2682]: I0514 00:05:50.738692 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-host-proc-sys-net\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.739191 kubelet[2682]: I0514 00:05:50.738730 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-clustermesh-secrets\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.739191 kubelet[2682]: I0514 00:05:50.738749 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-cilium-config-path\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.739191 kubelet[2682]: I0514 00:05:50.738771 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-bpf-maps\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.739191 kubelet[2682]: I0514 00:05:50.738796 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-cilium-run\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.739191 kubelet[2682]: I0514 00:05:50.738814 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-cni-path\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.739191 kubelet[2682]: I0514 00:05:50.738837 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-hubble-tls\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.739474 kubelet[2682]: I0514 00:05:50.738858 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-cilium-cgroup\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.739474 kubelet[2682]: I0514 00:05:50.738895 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-host-proc-sys-kernel\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.739474 kubelet[2682]: I0514 00:05:50.738957 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfkpq\" (UniqueName: \"kubernetes.io/projected/5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e-kube-api-access-gfkpq\") pod \"cilium-9zx49\" (UID: \"5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e\") " pod="kube-system/cilium-9zx49"
May 14 00:05:50.930419 kubelet[2682]: E0514 00:05:50.930225 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:05:50.930968 containerd[1502]: time="2025-05-14T00:05:50.930897560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9zx49,Uid:5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e,Namespace:kube-system,Attempt:0,}"
May 14 00:05:51.517725 containerd[1502]: time="2025-05-14T00:05:51.517671226Z" level=info msg="connecting to shim 10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df" address="unix:///run/containerd/s/bddeb231ef9e5392aa0e60b3049ee07b2be3f54984198c3259841c970031e2b6" namespace=k8s.io protocol=ttrpc version=3
May 14 00:05:51.545569 kubelet[2682]: E0514 00:05:51.545532 2682 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 00:05:51.668491 systemd[1]: Started cri-containerd-10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df.scope - libcontainer container 10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df.
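[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" entries above reflect the libc resolver's conventional limit of three nameservers per resolv.conf: the kubelet drops the extras and logs the nameserver line it actually applied ("1.1.1.1 1.0.0.1 8.8.8.8"). A minimal illustrative sketch of that truncation (hypothetical helper, not kubelet's actual code; MAXNS=3 is the traditional glibc value):

```python
# Illustrative: truncate a resolv.conf nameserver list to the first
# MAXNS entries, the way the log's "applied nameserver line" is formed.
MAXNS = 3  # traditional glibc limit (assumption labeled in the lead-in)

def applied_nameservers(resolv_conf: str) -> list[str]:
    servers = [
        parts[1]
        for line in resolv_conf.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAXNS]

conf = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
print(" ".join(applied_nameservers(conf)))  # 1.1.1.1 1.0.0.1 8.8.8.8
```

With four configured nameservers, the fourth (8.8.4.4 here) is omitted, matching the applied line reported in the log.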
May 14 00:05:52.122488 containerd[1502]: time="2025-05-14T00:05:52.122430457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9zx49,Uid:5ef6cc6e-85d3-4450-ab68-6fa5a33b2a4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df\""
May 14 00:05:52.123374 kubelet[2682]: E0514 00:05:52.123327 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:05:52.124872 containerd[1502]: time="2025-05-14T00:05:52.124834004Z" level=info msg="CreateContainer within sandbox \"10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 00:05:53.363893 containerd[1502]: time="2025-05-14T00:05:53.363810159Z" level=info msg="Container 1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81: CDI devices from CRI Config.CDIDevices: []"
May 14 00:05:54.337166 containerd[1502]: time="2025-05-14T00:05:54.337094175Z" level=info msg="CreateContainer within sandbox \"10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81\""
May 14 00:05:54.337897 containerd[1502]: time="2025-05-14T00:05:54.337837695Z" level=info msg="StartContainer for \"1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81\""
May 14 00:05:54.339123 containerd[1502]: time="2025-05-14T00:05:54.339078653Z" level=info msg="connecting to shim 1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81" address="unix:///run/containerd/s/bddeb231ef9e5392aa0e60b3049ee07b2be3f54984198c3259841c970031e2b6" protocol=ttrpc version=3
May 14 00:05:54.361482 systemd[1]: Started cri-containerd-1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81.scope - libcontainer container 1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81.
May 14 00:05:54.405299 systemd[1]: cri-containerd-1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81.scope: Deactivated successfully.
May 14 00:05:54.407369 containerd[1502]: time="2025-05-14T00:05:54.407301530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81\" id:\"1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81\" pid:4720 exited_at:{seconds:1747181154 nanos:406800386}"
May 14 00:05:55.091091 containerd[1502]: time="2025-05-14T00:05:55.089860977Z" level=info msg="received exit event container_id:\"1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81\" id:\"1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81\" pid:4720 exited_at:{seconds:1747181154 nanos:406800386}"
May 14 00:05:55.091982 containerd[1502]: time="2025-05-14T00:05:55.091569735Z" level=info msg="StartContainer for \"1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81\" returns successfully"
May 14 00:05:55.116644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a9b661640154068f7e80f46344b04976585ee58c362229e5a73f2c478636a81-rootfs.mount: Deactivated successfully.
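[Editor's note] The TaskExit events above report exit times as protobuf-style {seconds, nanos} pairs since the Unix epoch. Converting one back to a wall-clock timestamp confirms it lines up with the surrounding log times; a small sketch in plain Python (illustrative, not containerd code):

```python
from datetime import datetime, timezone

def exited_at_to_iso(seconds: int, nanos: int) -> str:
    # containerd's exited_at is {seconds, nanos} since the Unix epoch;
    # fold the nanoseconds into microseconds for datetime's resolution.
    ts = datetime.fromtimestamp(seconds, tz=timezone.utc)
    ts = ts.replace(microsecond=nanos // 1000)
    return ts.isoformat().replace("+00:00", "Z")

# The mount-cgroup container's exit event from the log:
print(exited_at_to_iso(1747181154, 406800386))  # 2025-05-14T00:05:54.406800Z
```

The result (00:05:54.4068Z) matches the TaskExit event's own journal timestamp of 00:05:54.407369 to within a millisecond.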
May 14 00:05:56.097857 kubelet[2682]: E0514 00:05:56.097818 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:05:56.214186 kubelet[2682]: I0514 00:05:56.214126 2682 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T00:05:56Z","lastTransitionTime":"2025-05-14T00:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 14 00:05:56.552331 kubelet[2682]: E0514 00:05:56.551056 2682 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 00:05:57.102942 kubelet[2682]: E0514 00:05:57.102888 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:05:57.106614 containerd[1502]: time="2025-05-14T00:05:57.106538514Z" level=info msg="CreateContainer within sandbox \"10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 00:05:58.449116 containerd[1502]: time="2025-05-14T00:05:58.449059578Z" level=info msg="Container 285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5: CDI devices from CRI Config.CDIDevices: []"
May 14 00:05:59.008214 containerd[1502]: time="2025-05-14T00:05:59.008139750Z" level=info msg="CreateContainer within sandbox \"10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5\""
May 14 00:05:59.008987 containerd[1502]: time="2025-05-14T00:05:59.008949655Z" level=info msg="StartContainer for \"285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5\""
May 14 00:05:59.012839 containerd[1502]: time="2025-05-14T00:05:59.009887287Z" level=info msg="connecting to shim 285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5" address="unix:///run/containerd/s/bddeb231ef9e5392aa0e60b3049ee07b2be3f54984198c3259841c970031e2b6" protocol=ttrpc version=3
May 14 00:05:59.035424 systemd[1]: Started cri-containerd-285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5.scope - libcontainer container 285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5.
May 14 00:05:59.104867 systemd[1]: cri-containerd-285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5.scope: Deactivated successfully.
May 14 00:05:59.105879 containerd[1502]: time="2025-05-14T00:05:59.105817513Z" level=info msg="TaskExit event in podsandbox handler container_id:\"285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5\" id:\"285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5\" pid:4764 exited_at:{seconds:1747181159 nanos:105336975}"
May 14 00:05:59.259969 containerd[1502]: time="2025-05-14T00:05:59.259777776Z" level=info msg="received exit event container_id:\"285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5\" id:\"285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5\" pid:4764 exited_at:{seconds:1747181159 nanos:105336975}"
May 14 00:05:59.261474 containerd[1502]: time="2025-05-14T00:05:59.261417073Z" level=info msg="StartContainer for \"285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5\" returns successfully"
May 14 00:05:59.281569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-285022fb99e166222cf13acf2c64a36092ef2b7ed60884e3a1b6cd51b9cf86b5-rootfs.mount: Deactivated successfully.
May 14 00:05:59.455481 kubelet[2682]: E0514 00:05:59.455424 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:06:00.283935 kubelet[2682]: E0514 00:06:00.283893 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:06:00.286822 containerd[1502]: time="2025-05-14T00:06:00.286748056Z" level=info msg="CreateContainer within sandbox \"10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 00:06:01.129971 containerd[1502]: time="2025-05-14T00:06:01.129776822Z" level=info msg="Container 28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763: CDI devices from CRI Config.CDIDevices: []"
May 14 00:06:01.517793 containerd[1502]: time="2025-05-14T00:06:01.517583120Z" level=info msg="CreateContainer within sandbox \"10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763\""
May 14 00:06:01.521171 containerd[1502]: time="2025-05-14T00:06:01.519724113Z" level=info msg="StartContainer for \"28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763\""
May 14 00:06:01.522148 containerd[1502]: time="2025-05-14T00:06:01.522121071Z" level=info msg="connecting to shim 28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763" address="unix:///run/containerd/s/bddeb231ef9e5392aa0e60b3049ee07b2be3f54984198c3259841c970031e2b6" protocol=ttrpc version=3
May 14 00:06:01.551567 systemd[1]: Started cri-containerd-28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763.scope - libcontainer container 28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763.
May 14 00:06:01.552789 kubelet[2682]: E0514 00:06:01.552504 2682 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 00:06:01.614773 systemd[1]: cri-containerd-28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763.scope: Deactivated successfully.
May 14 00:06:01.616854 containerd[1502]: time="2025-05-14T00:06:01.616809779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763\" id:\"28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763\" pid:4807 exited_at:{seconds:1747181161 nanos:616432290}"
May 14 00:06:01.778211 containerd[1502]: time="2025-05-14T00:06:01.778048533Z" level=info msg="received exit event container_id:\"28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763\" id:\"28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763\" pid:4807 exited_at:{seconds:1747181161 nanos:616432290}"
May 14 00:06:01.788837 containerd[1502]: time="2025-05-14T00:06:01.788786613Z" level=info msg="StartContainer for \"28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763\" returns successfully"
May 14 00:06:01.802909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28d113877f0cdca887123609bc67cf90988feb2098df338ab443eeeeb738b763-rootfs.mount: Deactivated successfully.
May 14 00:06:02.374070 kubelet[2682]: E0514 00:06:02.374041 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:06:02.375442 containerd[1502]: time="2025-05-14T00:06:02.375403469Z" level=info msg="CreateContainer within sandbox \"10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 00:06:02.898442 containerd[1502]: time="2025-05-14T00:06:02.898324380Z" level=info msg="Container 797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f: CDI devices from CRI Config.CDIDevices: []"
May 14 00:06:03.305590 containerd[1502]: time="2025-05-14T00:06:03.305524026Z" level=info msg="CreateContainer within sandbox \"10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f\""
May 14 00:06:03.306089 containerd[1502]: time="2025-05-14T00:06:03.306052535Z" level=info msg="StartContainer for \"797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f\""
May 14 00:06:03.307177 containerd[1502]: time="2025-05-14T00:06:03.307143992Z" level=info msg="connecting to shim 797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f" address="unix:///run/containerd/s/bddeb231ef9e5392aa0e60b3049ee07b2be3f54984198c3259841c970031e2b6" protocol=ttrpc version=3
May 14 00:06:03.335451 systemd[1]: Started cri-containerd-797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f.scope - libcontainer container 797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f.
May 14 00:06:03.367024 systemd[1]: cri-containerd-797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f.scope: Deactivated successfully.
May 14 00:06:03.367576 containerd[1502]: time="2025-05-14T00:06:03.367530479Z" level=info msg="TaskExit event in podsandbox handler container_id:\"797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f\" id:\"797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f\" pid:4845 exited_at:{seconds:1747181163 nanos:367187727}"
May 14 00:06:03.634508 containerd[1502]: time="2025-05-14T00:06:03.634247719Z" level=info msg="received exit event container_id:\"797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f\" id:\"797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f\" pid:4845 exited_at:{seconds:1747181163 nanos:367187727}"
May 14 00:06:03.636578 containerd[1502]: time="2025-05-14T00:06:03.636249230Z" level=info msg="StartContainer for \"797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f\" returns successfully"
May 14 00:06:03.656653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-797af6466bce7ac8c2a4001f586b03cc90f0d362345676a3bcd1697d557c9a7f-rootfs.mount: Deactivated successfully.
May 14 00:06:04.646951 kubelet[2682]: E0514 00:06:04.646897 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:06:04.652999 containerd[1502]: time="2025-05-14T00:06:04.651662015Z" level=info msg="CreateContainer within sandbox \"10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 00:06:04.700576 containerd[1502]: time="2025-05-14T00:06:04.700517456Z" level=info msg="Container 857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0: CDI devices from CRI Config.CDIDevices: []"
May 14 00:06:04.831299 containerd[1502]: time="2025-05-14T00:06:04.831213259Z" level=info msg="CreateContainer within sandbox \"10682389629939676283f661f538e2dc473c0e873eedc52e0cc79fd11efea4df\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0\""
May 14 00:06:04.831840 containerd[1502]: time="2025-05-14T00:06:04.831801895Z" level=info msg="StartContainer for \"857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0\""
May 14 00:06:04.832909 containerd[1502]: time="2025-05-14T00:06:04.832873390Z" level=info msg="connecting to shim 857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0" address="unix:///run/containerd/s/bddeb231ef9e5392aa0e60b3049ee07b2be3f54984198c3259841c970031e2b6" protocol=ttrpc version=3
May 14 00:06:04.867558 systemd[1]: Started cri-containerd-857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0.scope - libcontainer container 857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0.
May 14 00:06:05.049194 containerd[1502]: time="2025-05-14T00:06:05.049144461Z" level=info msg="StartContainer for \"857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0\" returns successfully"
May 14 00:06:05.159623 containerd[1502]: time="2025-05-14T00:06:05.159574159Z" level=info msg="TaskExit event in podsandbox handler container_id:\"857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0\" id:\"e823348ed884121a09e4e0f727244935b5507f3f69e312ce334bac52d5f0bff1\" pid:4918 exited_at:{seconds:1747181165 nanos:158866205}"
May 14 00:06:05.415326 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 14 00:06:05.653227 kubelet[2682]: E0514 00:06:05.653181 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:06:06.220881 kubelet[2682]: I0514 00:06:06.220754 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9zx49" podStartSLOduration=16.220732793 podStartE2EDuration="16.220732793s" podCreationTimestamp="2025-05-14 00:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:06:06.220526556 +0000 UTC m=+174.871083451" watchObservedRunningTime="2025-05-14 00:06:06.220732793 +0000 UTC m=+174.871289688"
May 14 00:06:06.455172 kubelet[2682]: E0514 00:06:06.455116 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:06:06.655388 kubelet[2682]: E0514 00:06:06.655342 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:06:06.823606 containerd[1502]: time="2025-05-14T00:06:06.823549257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0\" id:\"b0dcfc57edc82e7013cfad7cec646e298b230ad3f5e1199a87a1dec6f66f7c2a\" pid:5019 exit_status:1 exited_at:{seconds:1747181166 nanos:822930985}"
May 14 00:06:07.657645 kubelet[2682]: E0514 00:06:07.657601 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:06:08.938940 containerd[1502]: time="2025-05-14T00:06:08.938851631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0\" id:\"0f2825da8f66d2479b838cc545f6be75c80f2dbb4acdd0ac90c68e6d840e28e6\" pid:5416 exit_status:1 exited_at:{seconds:1747181168 nanos:938204995}"
May 14 00:06:09.008038 systemd-networkd[1424]: lxc_health: Link UP
May 14 00:06:09.012015 systemd-networkd[1424]: lxc_health: Gained carrier
May 14 00:06:10.932253 kubelet[2682]: E0514 00:06:10.931936 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:06:11.069386 systemd-networkd[1424]: lxc_health: Gained IPv6LL
May 14 00:06:11.349605 containerd[1502]: time="2025-05-14T00:06:11.349473935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0\" id:\"92857255b46396b8caa8aeb7685090f5b53cf17fda7022b215f0bceb19bf71d9\" pid:5507 exited_at:{seconds:1747181171 nanos:349072443}"
May 14 00:06:11.450885 containerd[1502]: time="2025-05-14T00:06:11.450694404Z" level=info msg="StopPodSandbox for \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\""
May 14 00:06:11.450885 containerd[1502]: time="2025-05-14T00:06:11.450827420Z" level=info msg="TearDown network for sandbox \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" successfully"
May 14 00:06:11.450885 containerd[1502]: time="2025-05-14T00:06:11.450837610Z" level=info msg="StopPodSandbox for \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" returns successfully"
May 14 00:06:11.451216 containerd[1502]: time="2025-05-14T00:06:11.451197723Z" level=info msg="RemovePodSandbox for \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\""
May 14 00:06:11.451248 containerd[1502]: time="2025-05-14T00:06:11.451219124Z" level=info msg="Forcibly stopping sandbox \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\""
May 14 00:06:11.451326 containerd[1502]: time="2025-05-14T00:06:11.451312373Z" level=info msg="TearDown network for sandbox \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" successfully"
May 14 00:06:11.452793 containerd[1502]: time="2025-05-14T00:06:11.452775790Z" level=info msg="Ensure that sandbox e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818 in task-service has been cleanup successfully"
May 14 00:06:11.454731 kubelet[2682]: E0514 00:06:11.454707 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:06:11.664377 kubelet[2682]: E0514 00:06:11.664352 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:06:12.209182 containerd[1502]: time="2025-05-14T00:06:12.209120061Z" level=info msg="RemovePodSandbox \"e548508e67c7fc73ee439972729eab8ce0b547db8b1402ec139003fd2a9fd818\" returns successfully"
May 14 00:06:12.209853 containerd[1502]: time="2025-05-14T00:06:12.209825808Z" level=info msg="StopPodSandbox for \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\""
May 14 00:06:12.210094 containerd[1502]: time="2025-05-14T00:06:12.209988241Z" level=info msg="TearDown network for sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" successfully"
May 14 00:06:12.210094 containerd[1502]: time="2025-05-14T00:06:12.210014121Z" level=info msg="StopPodSandbox for \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" returns successfully"
May 14 00:06:12.210476 containerd[1502]: time="2025-05-14T00:06:12.210381207Z" level=info msg="RemovePodSandbox for \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\""
May 14 00:06:12.210476 containerd[1502]: time="2025-05-14T00:06:12.210408099Z" level=info msg="Forcibly stopping sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\""
May 14 00:06:12.210476 containerd[1502]: time="2025-05-14T00:06:12.210472763Z" level=info msg="TearDown network for sandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" successfully"
May 14 00:06:12.212463 containerd[1502]: time="2025-05-14T00:06:12.212434847Z" level=info msg="Ensure that sandbox a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5 in task-service has been cleanup successfully"
May 14 00:06:12.640640 containerd[1502]: time="2025-05-14T00:06:12.640459856Z" level=info msg="RemovePodSandbox \"a9245e471a978d2cc9fd38d115fa53d13bb4aceef52fabb290a4020d1ea75ea5\" returns successfully"
May 14 00:06:12.669305 kubelet[2682]: E0514 00:06:12.666879 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:06:14.130064 containerd[1502]: time="2025-05-14T00:06:14.130009168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0\" id:\"d0a19862db79249aa437d63610f197b5ef5b8e9053f841698ff3bfdc577d83ea\" pid:5544 exited_at:{seconds:1747181174 nanos:129365589}"
May 14 00:06:16.924644 containerd[1502]: time="2025-05-14T00:06:16.924591272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0\" id:\"fb42ea2bebe2478436f7535699c82521bdf7e58634e1b7632d8c1868e8610251\" pid:5568 exited_at:{seconds:1747181176 nanos:924124395}"
May 14 00:06:19.390221 containerd[1502]: time="2025-05-14T00:06:19.388722337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0\" id:\"c21e55f11a1098d361736cf22993a101903d49b0acc408c2287aa95abdca5126\" pid:5594 exited_at:{seconds:1747181179 nanos:388339552}"
May 14 00:06:21.519703 containerd[1502]: time="2025-05-14T00:06:21.519647341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"857adbdce3b6b6ac3e504d6bade024207ddd858e236407e690c90f1908f040e0\" id:\"71a8e0ee6696ff5ed2d8c4026deb3d29efaa02d52536f8c53b0cf5dff8a8ff50\" pid:5618 exited_at:{seconds:1747181181 nanos:519222897}"
May 14 00:06:21.551565 sshd[4650]: Connection closed by 10.0.0.1 port 44324
May 14 00:06:21.556874 sshd-session[4647]: pam_unix(sshd:session): session closed for user core
May 14 00:06:21.565016 systemd[1]: sshd@38-10.0.0.106:22-10.0.0.1:44324.service: Deactivated successfully.
May 14 00:06:21.568472 systemd[1]: session-38.scope: Deactivated successfully.
May 14 00:06:21.570607 systemd-logind[1494]: Session 38 logged out. Waiting for processes to exit.
May 14 00:06:21.572170 systemd-logind[1494]: Removed session 38.
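[Editor's note] The pod_startup_latency_tracker entry earlier in the log reports podStartSLOduration=16.220732793 for cilium-9zx49. That figure is simply watchObservedRunningTime (2025-05-14 00:06:06.220732793 UTC) minus podCreationTimestamp (2025-05-14 00:05:50 UTC). A quick arithmetic check (Python's datetime only carries microseconds, so the nanosecond tail is truncated):

```python
from datetime import datetime, timezone

# Timestamps taken from the pod_startup_latency_tracker log entry.
created = datetime(2025, 5, 14, 0, 5, 50, tzinfo=timezone.utc)
# 00:06:06.220732793 truncated to microsecond precision:
observed = datetime(2025, 5, 14, 0, 6, 6, 220732, tzinfo=timezone.utc)

slo = (observed - created).total_seconds()
print(slo)  # 16.220732
```

This matches the logged podStartSLOduration to microsecond precision; the zero-valued firstStartedPulling/lastFinishedPulling fields indicate no image pull contributed to the duration.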