May 15 23:58:12.916427 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:08:20 -00 2025
May 15 23:58:12.916451 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733
May 15 23:58:12.916463 kernel: BIOS-provided physical RAM map:
May 15 23:58:12.916469 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 15 23:58:12.916476 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 15 23:58:12.916483 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 15 23:58:12.916490 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 15 23:58:12.916497 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 15 23:58:12.916504 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 15 23:58:12.916510 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 15 23:58:12.916517 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 15 23:58:12.916526 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 15 23:58:12.916533 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 15 23:58:12.916540 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 15 23:58:12.916548 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 15 23:58:12.916555 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 15 23:58:12.916565 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 15 23:58:12.916572 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 15 23:58:12.916579 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 15 23:58:12.916586 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 15 23:58:12.916593 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 15 23:58:12.916601 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 15 23:58:12.916608 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 15 23:58:12.916615 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 23:58:12.916622 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 15 23:58:12.916629 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 23:58:12.916636 kernel: NX (Execute Disable) protection: active
May 15 23:58:12.916646 kernel: APIC: Static calls initialized
May 15 23:58:12.916653 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 15 23:58:12.916661 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 15 23:58:12.916668 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 15 23:58:12.916675 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 15 23:58:12.916682 kernel: extended physical RAM map:
May 15 23:58:12.916689 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 15 23:58:12.916696 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 15 23:58:12.916703 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 15 23:58:12.916710 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 15 23:58:12.916718 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 15 23:58:12.916725 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 15 23:58:12.916736 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 15 23:58:12.916758 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
May 15 23:58:12.916769 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
May 15 23:58:12.916779 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
May 15 23:58:12.916789 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
May 15 23:58:12.916804 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
May 15 23:58:12.916815 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 15 23:58:12.916824 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 15 23:58:12.916831 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 15 23:58:12.916839 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 15 23:58:12.916846 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 15 23:58:12.916853 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 15 23:58:12.916861 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 15 23:58:12.916868 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 15 23:58:12.916876 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 15 23:58:12.916883 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 15 23:58:12.916893 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 15 23:58:12.916903 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 15 23:58:12.916910 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 23:58:12.916917 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 15 23:58:12.916925 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 23:58:12.916932 kernel: efi: EFI v2.7 by EDK II
May 15 23:58:12.916940 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
May 15 23:58:12.916947 kernel: random: crng init done
May 15 23:58:12.916955 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 15 23:58:12.916962 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 15 23:58:12.916970 kernel: secureboot: Secure boot disabled
May 15 23:58:12.916981 kernel: SMBIOS 2.8 present.
May 15 23:58:12.916991 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 15 23:58:12.917001 kernel: Hypervisor detected: KVM
May 15 23:58:12.917011 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 23:58:12.917019 kernel: kvm-clock: using sched offset of 3355780776 cycles
May 15 23:58:12.917027 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 23:58:12.917035 kernel: tsc: Detected 2794.746 MHz processor
May 15 23:58:12.917043 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 23:58:12.917051 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 23:58:12.917058 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 15 23:58:12.917070 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 15 23:58:12.917080 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 23:58:12.917090 kernel: Using GB pages for direct mapping
May 15 23:58:12.917101 kernel: ACPI: Early table checksum verification disabled
May 15 23:58:12.917111 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 15 23:58:12.917119 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 15 23:58:12.917127 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:58:12.917135 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:58:12.917142 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 15 23:58:12.917153 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:58:12.917161 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:58:12.917169 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:58:12.917186 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:58:12.917201 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 15 23:58:12.917217 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 15 23:58:12.917238 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 15 23:58:12.917246 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 15 23:58:12.917253 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 15 23:58:12.917264 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 15 23:58:12.917275 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 15 23:58:12.917285 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 15 23:58:12.917295 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 15 23:58:12.917340 kernel: No NUMA configuration found
May 15 23:58:12.917349 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 15 23:58:12.917359 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
May 15 23:58:12.917369 kernel: Zone ranges:
May 15 23:58:12.917380 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 23:58:12.917394 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 15 23:58:12.917404 kernel: Normal empty
May 15 23:58:12.917415 kernel: Movable zone start for each node
May 15 23:58:12.917425 kernel: Early memory node ranges
May 15 23:58:12.917433 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 15 23:58:12.917440 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 15 23:58:12.917448 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 15 23:58:12.917456 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 15 23:58:12.917463 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 15 23:58:12.917473 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 15 23:58:12.917481 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
May 15 23:58:12.917489 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
May 15 23:58:12.917496 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 15 23:58:12.917504 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 23:58:12.917511 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 15 23:58:12.917527 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 15 23:58:12.917538 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 23:58:12.917546 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 15 23:58:12.917554 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 15 23:58:12.917562 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 15 23:58:12.917570 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 15 23:58:12.917580 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 15 23:58:12.917588 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 23:58:12.917596 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 23:58:12.917604 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 23:58:12.917612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 23:58:12.917622 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 23:58:12.917630 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 23:58:12.917638 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 23:58:12.917646 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 23:58:12.917654 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 23:58:12.917662 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 23:58:12.917670 kernel: TSC deadline timer available
May 15 23:58:12.917678 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 15 23:58:12.917686 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 23:58:12.917696 kernel: kvm-guest: KVM setup pv remote TLB flush
May 15 23:58:12.917704 kernel: kvm-guest: setup PV sched yield
May 15 23:58:12.917712 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 15 23:58:12.917720 kernel: Booting paravirtualized kernel on KVM
May 15 23:58:12.917728 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 23:58:12.917736 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 15 23:58:12.917744 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 15 23:58:12.917752 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 15 23:58:12.917760 kernel: pcpu-alloc: [0] 0 1 2 3
May 15 23:58:12.917770 kernel: kvm-guest: PV spinlocks enabled
May 15 23:58:12.917778 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 15 23:58:12.917787 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733
May 15 23:58:12.917803 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 23:58:12.917811 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 23:58:12.917819 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 23:58:12.917827 kernel: Fallback order for Node 0: 0
May 15 23:58:12.917835 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
May 15 23:58:12.917843 kernel: Policy zone: DMA32
May 15 23:58:12.917854 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 23:58:12.917862 kernel: Memory: 2385672K/2565800K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43600K init, 1472K bss, 179872K reserved, 0K cma-reserved)
May 15 23:58:12.917870 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 23:58:12.917878 kernel: ftrace: allocating 37997 entries in 149 pages
May 15 23:58:12.917886 kernel: ftrace: allocated 149 pages with 4 groups
May 15 23:58:12.917896 kernel: Dynamic Preempt: voluntary
May 15 23:58:12.917912 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 23:58:12.917924 kernel: rcu: RCU event tracing is enabled.
May 15 23:58:12.917934 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 23:58:12.917950 kernel: Trampoline variant of Tasks RCU enabled.
May 15 23:58:12.917960 kernel: Rude variant of Tasks RCU enabled.
May 15 23:58:12.917970 kernel: Tracing variant of Tasks RCU enabled.
May 15 23:58:12.917980 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 23:58:12.917991 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 23:58:12.918002 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 15 23:58:12.918011 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 23:58:12.918019 kernel: Console: colour dummy device 80x25
May 15 23:58:12.918027 kernel: printk: console [ttyS0] enabled
May 15 23:58:12.918038 kernel: ACPI: Core revision 20230628
May 15 23:58:12.918046 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 23:58:12.918054 kernel: APIC: Switch to symmetric I/O mode setup
May 15 23:58:12.918062 kernel: x2apic enabled
May 15 23:58:12.918070 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 23:58:12.918087 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 15 23:58:12.918104 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 15 23:58:12.918119 kernel: kvm-guest: setup PV IPIs
May 15 23:58:12.918128 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 23:58:12.918139 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 15 23:58:12.918147 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 15 23:58:12.918155 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 15 23:58:12.918163 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 15 23:58:12.918170 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 15 23:58:12.918182 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 23:58:12.918190 kernel: Spectre V2 : Mitigation: Retpolines
May 15 23:58:12.918198 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 15 23:58:12.918206 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 15 23:58:12.918216 kernel: RETBleed: Mitigation: untrained return thunk
May 15 23:58:12.918224 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 23:58:12.918232 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 23:58:12.918240 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 15 23:58:12.918249 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 15 23:58:12.918257 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 15 23:58:12.918265 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 23:58:12.918273 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 23:58:12.918284 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 23:58:12.918291 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 23:58:12.918299 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 15 23:58:12.918321 kernel: Freeing SMP alternatives memory: 32K
May 15 23:58:12.918330 kernel: pid_max: default: 32768 minimum: 301
May 15 23:58:12.918337 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 23:58:12.918345 kernel: landlock: Up and running.
May 15 23:58:12.918353 kernel: SELinux: Initializing.
May 15 23:58:12.918361 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 23:58:12.918372 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 23:58:12.918380 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 15 23:58:12.918388 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:58:12.918396 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:58:12.918404 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:58:12.918412 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 15 23:58:12.918420 kernel: ... version: 0
May 15 23:58:12.918428 kernel: ... bit width: 48
May 15 23:58:12.918436 kernel: ... generic registers: 6
May 15 23:58:12.918446 kernel: ... value mask: 0000ffffffffffff
May 15 23:58:12.918454 kernel: ... max period: 00007fffffffffff
May 15 23:58:12.918462 kernel: ... fixed-purpose events: 0
May 15 23:58:12.918470 kernel: ... event mask: 000000000000003f
May 15 23:58:12.918479 kernel: signal: max sigframe size: 1776
May 15 23:58:12.918494 kernel: rcu: Hierarchical SRCU implementation.
May 15 23:58:12.918508 kernel: rcu: Max phase no-delay instances is 400.
May 15 23:58:12.918519 kernel: smp: Bringing up secondary CPUs ...
May 15 23:58:12.918527 kernel: smpboot: x86: Booting SMP configuration:
May 15 23:58:12.918539 kernel: .... node #0, CPUs: #1 #2 #3
May 15 23:58:12.918547 kernel: smp: Brought up 1 node, 4 CPUs
May 15 23:58:12.918555 kernel: smpboot: Max logical packages: 1
May 15 23:58:12.918563 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 15 23:58:12.918571 kernel: devtmpfs: initialized
May 15 23:58:12.918581 kernel: x86/mm: Memory block size: 128MB
May 15 23:58:12.918592 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 15 23:58:12.918603 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 15 23:58:12.918614 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 15 23:58:12.918626 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 15 23:58:12.918635 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
May 15 23:58:12.918643 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 15 23:58:12.918651 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 23:58:12.918659 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 23:58:12.918667 kernel: pinctrl core: initialized pinctrl subsystem
May 15 23:58:12.918674 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 23:58:12.918682 kernel: audit: initializing netlink subsys (disabled)
May 15 23:58:12.918690 kernel: audit: type=2000 audit(1747353491.577:1): state=initialized audit_enabled=0 res=1
May 15 23:58:12.918701 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 23:58:12.918709 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 23:58:12.918717 kernel: cpuidle: using governor menu
May 15 23:58:12.918725 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 23:58:12.918733 kernel: dca service started, version 1.12.1
May 15 23:58:12.918741 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 15 23:58:12.918749 kernel: PCI: Using configuration type 1 for base access
May 15 23:58:12.918757 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 23:58:12.918765 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 23:58:12.918775 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 15 23:58:12.918783 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 23:58:12.918799 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 23:58:12.918807 kernel: ACPI: Added _OSI(Module Device)
May 15 23:58:12.918815 kernel: ACPI: Added _OSI(Processor Device)
May 15 23:58:12.918824 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 23:58:12.918832 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 23:58:12.918839 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 23:58:12.918847 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 15 23:58:12.918858 kernel: ACPI: Interpreter enabled
May 15 23:58:12.918866 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 23:58:12.918873 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 23:58:12.918881 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 23:58:12.918889 kernel: PCI: Using E820 reservations for host bridge windows
May 15 23:58:12.918897 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 15 23:58:12.918905 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 23:58:12.919110 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 23:58:12.919251 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 15 23:58:12.919393 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 15 23:58:12.919404 kernel: PCI host bridge to bus 0000:00
May 15 23:58:12.919537 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 23:58:12.919664 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 23:58:12.919804 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 23:58:12.919927 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 15 23:58:12.920070 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 15 23:58:12.920188 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 15 23:58:12.920303 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 23:58:12.920461 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 15 23:58:12.920598 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 15 23:58:12.920723 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 15 23:58:12.920883 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 15 23:58:12.921021 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 15 23:58:12.921166 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 15 23:58:12.921305 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 23:58:12.921465 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 15 23:58:12.921591 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 15 23:58:12.921715 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 15 23:58:12.921867 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 15 23:58:12.922027 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 15 23:58:12.922178 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 15 23:58:12.922344 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 15 23:58:12.922490 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 15 23:58:12.922625 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 15 23:58:12.922752 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 15 23:58:12.922897 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 15 23:58:12.923023 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 15 23:58:12.923160 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 15 23:58:12.923326 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 15 23:58:12.923471 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 15 23:58:12.923625 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 15 23:58:12.923758 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 15 23:58:12.923893 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 15 23:58:12.924033 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 15 23:58:12.924158 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 15 23:58:12.924169 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 23:58:12.924177 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 23:58:12.924185 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 23:58:12.924193 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 23:58:12.924205 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 15 23:58:12.924213 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 15 23:58:12.924221 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 15 23:58:12.924229 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 15 23:58:12.924237 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 15 23:58:12.924245 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 15 23:58:12.924253 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 15 23:58:12.924261 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 23:58:12.924269 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 23:58:12.924280 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 23:58:12.924288 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 23:58:12.924297 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 23:58:12.924304 kernel: iommu: Default domain type: Translated
May 15 23:58:12.924325 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 23:58:12.924333 kernel: efivars: Registered efivars operations
May 15 23:58:12.924341 kernel: PCI: Using ACPI for IRQ routing
May 15 23:58:12.924349 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 23:58:12.924358 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 15 23:58:12.924369 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 15 23:58:12.924377 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 15 23:58:12.924384 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 15 23:58:12.924393 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 15 23:58:12.924400 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 15 23:58:12.924408 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 15 23:58:12.924416 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 15 23:58:12.924564 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 23:58:12.924694 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 23:58:12.924833 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 23:58:12.924844 kernel: vgaarb: loaded
May 15 23:58:12.924852 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 23:58:12.924861 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 23:58:12.924869 kernel: clocksource: Switched to clocksource kvm-clock
May 15 23:58:12.924879 kernel: VFS: Disk quotas dquot_6.6.0
May 15 23:58:12.924890 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 23:58:12.924901 kernel: pnp: PnP ACPI init
May 15 23:58:12.925087 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 15 23:58:12.925108 kernel: pnp: PnP ACPI: found 6 devices
May 15 23:58:12.925119 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 23:58:12.925129 kernel: NET: Registered PF_INET protocol family
May 15 23:58:12.925139 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 23:58:12.925174 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 23:58:12.925188 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 23:58:12.925198 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 23:58:12.925208 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 23:58:12.925217 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 23:58:12.925225 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 23:58:12.925234 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 23:58:12.925243 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 23:58:12.925251 kernel: NET: Registered PF_XDP protocol family
May 15 23:58:12.925399 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 15 23:58:12.925529 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 15 23:58:12.925647 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 23:58:12.925768 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 23:58:12.925917 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 23:58:12.926067 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 15 23:58:12.926182 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 15 23:58:12.926296 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 15 23:58:12.926329 kernel: PCI: CLS 0 bytes, default 64
May 15 23:58:12.926338 kernel: Initialise system trusted keyrings
May 15 23:58:12.926346 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 23:58:12.926360 kernel: Key type asymmetric registered
May 15 23:58:12.926368 kernel: Asymmetric key parser 'x509' registered
May 15 23:58:12.926377 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 15 23:58:12.926385 kernel: io scheduler mq-deadline registered
May 15 23:58:12.926393 kernel: io scheduler kyber registered
May 15 23:58:12.926401 kernel: io scheduler bfq registered
May 15 23:58:12.926410 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 23:58:12.926419 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 23:58:12.926427 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 23:58:12.926439 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 15 23:58:12.926450 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 23:58:12.926458 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 23:58:12.926467 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 23:58:12.926475 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 23:58:12.926484 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 23:58:12.926624 kernel: rtc_cmos 00:04: RTC can wake from S4
May 15 23:58:12.926636 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 23:58:12.926755 kernel: rtc_cmos 00:04: registered as rtc0
May 15 23:58:12.926883 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T23:58:12 UTC (1747353492)
May 15 23:58:12.927012 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 15 23:58:12.927026 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 15 23:58:12.927037 kernel: efifb: probing for efifb
May 15 23:58:12.927047 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 15 23:58:12.927063 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 15 23:58:12.927073 kernel: efifb: scrolling: redraw
May 15 23:58:12.927084 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 15 23:58:12.927095 kernel: Console: switching to colour frame buffer device 160x50
May 15 23:58:12.927106 kernel: fb0: EFI VGA frame buffer device
May 15 23:58:12.927116 kernel: pstore: Using crash dump compression: deflate
May 15 23:58:12.927130 kernel: pstore: Registered efi_pstore as persistent store backend
May 15 23:58:12.927141 kernel: NET: Registered PF_INET6 protocol family
May 15 23:58:12.927154 kernel: Segment Routing with IPv6
May 15 23:58:12.927172 kernel: In-situ OAM (IOAM) with IPv6
May 15 23:58:12.927185 kernel: NET: Registered PF_PACKET protocol family
May 15 23:58:12.927199 kernel: Key type dns_resolver registered
May 15 23:58:12.927209 kernel: IPI shorthand broadcast: enabled
May 15 23:58:12.927220 kernel: sched_clock: Marking stable (713003193, 238991639)->(991593604, -39598772)
May 15 23:58:12.927230 kernel: registered taskstats version 1
May 15 23:58:12.927241 kernel: Loading compiled-in X.509 certificates
May 15 23:58:12.927251 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 36d9e3bf63b9b28466bcfa7a508d814673a33a26'
May 15 23:58:12.927259 kernel: Key type .fscrypt registered
May 15 23:58:12.927271 kernel: Key type fscrypt-provisioning registered
May 15 23:58:12.927280 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 23:58:12.927288 kernel: ima: Allocated hash algorithm: sha1
May 15 23:58:12.927297 kernel: ima: No architecture policies found
May 15 23:58:12.927305 kernel: clk: Disabling unused clocks
May 15 23:58:12.927327 kernel: Freeing unused kernel image (initmem) memory: 43600K
May 15 23:58:12.927335 kernel: Write protecting the kernel read-only data: 40960k
May 15 23:58:12.927344 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 15 23:58:12.927353 kernel: Run /init as init process
May 15 23:58:12.927364 kernel: with arguments:
May 15 23:58:12.927373 kernel: /init
May 15 23:58:12.927384 kernel: with environment:
May 15 23:58:12.927396 kernel: HOME=/
May 15 23:58:12.927407 kernel: TERM=linux
May 15 23:58:12.927418 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 23:58:12.927430 systemd[1]: Successfully made /usr/ read-only.
May 15 23:58:12.927447 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 23:58:12.927462 systemd[1]: Detected virtualization kvm.
May 15 23:58:12.927471 systemd[1]: Detected architecture x86-64.
May 15 23:58:12.927482 systemd[1]: Running in initrd.
May 15 23:58:12.927493 systemd[1]: No hostname configured, using default hostname.
May 15 23:58:12.927506 systemd[1]: Hostname set to .
May 15 23:58:12.927518 systemd[1]: Initializing machine ID from VM UUID.
May 15 23:58:12.927530 systemd[1]: Queued start job for default target initrd.target.
May 15 23:58:12.927540 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 23:58:12.927553 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 23:58:12.927562 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 23:58:12.927572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 23:58:12.927581 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 23:58:12.927591 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 23:58:12.927602 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 23:58:12.927614 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 23:58:12.927623 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 23:58:12.927632 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 23:58:12.927641 systemd[1]: Reached target paths.target - Path Units.
May 15 23:58:12.927650 systemd[1]: Reached target slices.target - Slice Units.
May 15 23:58:12.927659 systemd[1]: Reached target swap.target - Swaps.
May 15 23:58:12.927668 systemd[1]: Reached target timers.target - Timer Units.
May 15 23:58:12.927677 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 23:58:12.927687 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 23:58:12.927699 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 23:58:12.927708 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 23:58:12.927717 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 23:58:12.927726 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 23:58:12.927735 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 23:58:12.927744 systemd[1]: Reached target sockets.target - Socket Units.
May 15 23:58:12.927753 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 23:58:12.927762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 23:58:12.927773 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 23:58:12.927783 systemd[1]: Starting systemd-fsck-usr.service...
May 15 23:58:12.927801 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 23:58:12.927811 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 23:58:12.927820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:58:12.927829 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 23:58:12.927838 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 23:58:12.927851 systemd[1]: Finished systemd-fsck-usr.service.
May 15 23:58:12.927860 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 23:58:12.927905 systemd-journald[193]: Collecting audit messages is disabled.
May 15 23:58:12.927930 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:58:12.927940 systemd-journald[193]: Journal started
May 15 23:58:12.927960 systemd-journald[193]: Runtime Journal (/run/log/journal/d34d56ec6af24a0a95c70970b55c2798) is 6M, max 48.2M, 42.2M free.
May 15 23:58:12.916045 systemd-modules-load[194]: Inserted module 'overlay'
May 15 23:58:12.930456 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 23:58:12.931957 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 23:58:12.937963 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 23:58:12.942250 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 23:58:12.946205 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 23:58:12.947383 systemd-modules-load[194]: Inserted module 'br_netfilter'
May 15 23:58:12.948379 kernel: Bridge firewalling registered
May 15 23:58:12.954520 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 23:58:12.957556 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 23:58:12.961862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 23:58:12.964595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 23:58:12.969586 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 23:58:12.972429 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 23:58:12.975201 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 23:58:12.978265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 23:58:12.993027 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 23:58:13.004072 dracut-cmdline[228]: dracut-dracut-053
May 15 23:58:13.006892 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733
May 15 23:58:13.042393 systemd-resolved[231]: Positive Trust Anchors:
May 15 23:58:13.042406 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 23:58:13.042437 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 23:58:13.053868 systemd-resolved[231]: Defaulting to hostname 'linux'.
May 15 23:58:13.055979 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 23:58:13.058299 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 23:58:13.085341 kernel: SCSI subsystem initialized
May 15 23:58:13.094328 kernel: Loading iSCSI transport class v2.0-870.
May 15 23:58:13.105345 kernel: iscsi: registered transport (tcp)
May 15 23:58:13.127349 kernel: iscsi: registered transport (qla4xxx)
May 15 23:58:13.127390 kernel: QLogic iSCSI HBA Driver
May 15 23:58:13.262910 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 23:58:13.266900 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 23:58:13.316360 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 23:58:13.316452 kernel: device-mapper: uevent: version 1.0.3
May 15 23:58:13.316470 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 15 23:58:13.372383 kernel: raid6: avx2x4 gen() 27918 MB/s
May 15 23:58:13.389355 kernel: raid6: avx2x2 gen() 20165 MB/s
May 15 23:58:13.406918 kernel: raid6: avx2x1 gen() 16545 MB/s
May 15 23:58:13.408236 kernel: raid6: using algorithm avx2x4 gen() 27918 MB/s
May 15 23:58:13.424745 kernel: raid6: .... xor() 6142 MB/s, rmw enabled
May 15 23:58:13.424866 kernel: raid6: using avx2x2 recovery algorithm
May 15 23:58:13.450363 kernel: xor: automatically using best checksumming function avx
May 15 23:58:13.619349 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 23:58:13.633089 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 23:58:13.635302 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 23:58:13.664519 systemd-udevd[414]: Using default interface naming scheme 'v255'.
May 15 23:58:13.670070 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 23:58:13.676426 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 23:58:13.710000 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
May 15 23:58:13.747688 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 23:58:13.751439 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 23:58:13.840985 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 23:58:13.846760 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 23:58:13.869839 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 23:58:13.870687 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 23:58:13.877160 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 15 23:58:13.873533 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 23:58:13.877473 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 23:58:13.881488 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 23:58:13.886416 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 15 23:58:13.890631 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 23:58:13.890681 kernel: GPT:9289727 != 19775487
May 15 23:58:13.890693 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 23:58:13.890704 kernel: GPT:9289727 != 19775487
May 15 23:58:13.890719 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 23:58:13.890730 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 23:58:13.906054 kernel: cryptd: max_cpu_qlen set to 1000
May 15 23:58:13.914631 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 23:58:13.925444 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 23:58:13.928003 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 23:58:13.929898 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 23:58:13.941199 kernel: BTRFS: device fsid a728581e-9e7f-4655-895a-4f66e17e3645 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (473)
May 15 23:58:13.941224 kernel: AVX2 version of gcm_enc/dec engaged.
May 15 23:58:13.941242 kernel: AES CTR mode by8 optimization enabled
May 15 23:58:13.931857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 23:58:13.932032 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:58:13.947970 kernel: libata version 3.00 loaded.
May 15 23:58:13.935351 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:58:13.952405 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (467)
May 15 23:58:13.944998 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:58:13.947074 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 23:58:13.967468 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 15 23:58:13.969108 kernel: ahci 0000:00:1f.2: version 3.0
May 15 23:58:13.969355 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 15 23:58:13.973437 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 15 23:58:13.973664 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 15 23:58:13.987818 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 15 23:58:14.002333 kernel: scsi host0: ahci
May 15 23:58:14.002593 kernel: scsi host1: ahci
May 15 23:58:14.003610 kernel: scsi host2: ahci
May 15 23:58:14.003886 kernel: scsi host3: ahci
May 15 23:58:14.005332 kernel: scsi host4: ahci
May 15 23:58:14.007092 kernel: scsi host5: ahci
May 15 23:58:14.007343 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 15 23:58:14.007360 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 15 23:58:14.007996 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 15 23:58:14.009796 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 15 23:58:14.009810 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 15 23:58:14.010751 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 15 23:58:14.033219 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 15 23:58:14.037020 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 15 23:58:14.049687 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 23:58:14.054235 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 23:58:14.056797 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 23:58:14.057970 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:58:14.060342 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:58:14.065899 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:58:14.068562 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 23:58:14.082430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:58:14.106591 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 23:58:14.135064 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 23:58:14.153262 disk-uuid[570]: Primary Header is updated.
May 15 23:58:14.153262 disk-uuid[570]: Secondary Entries is updated.
May 15 23:58:14.153262 disk-uuid[570]: Secondary Header is updated.
May 15 23:58:14.157193 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 23:58:14.161331 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 23:58:14.317418 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 15 23:58:14.317495 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 15 23:58:14.318335 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 15 23:58:14.318369 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 15 23:58:14.319731 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 15 23:58:14.319748 kernel: ata3.00: applying bridge limits
May 15 23:58:14.321348 kernel: ata3.00: configured for UDMA/100
May 15 23:58:14.321376 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 15 23:58:14.326347 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 15 23:58:14.326369 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 15 23:58:14.365359 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 15 23:58:14.365657 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 15 23:58:14.379362 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 15 23:58:15.181351 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 23:58:15.182011 disk-uuid[584]: The operation has completed successfully.
May 15 23:58:15.214978 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 23:58:15.215115 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 23:58:15.252182 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 23:58:15.268836 sh[600]: Success
May 15 23:58:15.309338 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 15 23:58:15.347992 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 23:58:15.359082 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 23:58:15.385908 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 23:58:15.413703 kernel: BTRFS info (device dm-0): first mount of filesystem a728581e-9e7f-4655-895a-4f66e17e3645
May 15 23:58:15.413761 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 15 23:58:15.413773 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 15 23:58:15.415998 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 15 23:58:15.416014 kernel: BTRFS info (device dm-0): using free space tree
May 15 23:58:15.422007 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 23:58:15.422933 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 23:58:15.424054 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 23:58:15.428257 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 23:58:15.461747 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22
May 15 23:58:15.461835 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 23:58:15.461852 kernel: BTRFS info (device vda6): using free space tree
May 15 23:58:15.465359 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 23:58:15.472601 kernel: BTRFS info (device vda6): last unmount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22
May 15 23:58:15.482219 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 23:58:15.486084 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 23:58:15.582156 ignition[691]: Ignition 2.20.0
May 15 23:58:15.582168 ignition[691]: Stage: fetch-offline
May 15 23:58:15.582203 ignition[691]: no configs at "/usr/lib/ignition/base.d"
May 15 23:58:15.582215 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:58:15.582352 ignition[691]: parsed url from cmdline: ""
May 15 23:58:15.582358 ignition[691]: no config URL provided
May 15 23:58:15.582365 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
May 15 23:58:15.582378 ignition[691]: no config at "/usr/lib/ignition/user.ign"
May 15 23:58:15.582412 ignition[691]: op(1): [started] loading QEMU firmware config module
May 15 23:58:15.582419 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 15 23:58:15.593348 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 23:58:15.597681 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 23:58:15.598114 ignition[691]: op(1): [finished] loading QEMU firmware config module
May 15 23:58:15.639603 systemd-networkd[787]: lo: Link UP
May 15 23:58:15.639616 systemd-networkd[787]: lo: Gained carrier
May 15 23:58:15.639803 ignition[691]: parsing config with SHA512: da6a710f0fed08357123028a32443434ef72ad759a4faae74322418a110ba3b14bcbcb1276c8424b6c037963f78a313ce36b01df276ee00ac38a91ab9ea7139d
May 15 23:58:15.641404 systemd-networkd[787]: Enumeration completed
May 15 23:58:15.641504 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 23:58:15.642397 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:58:15.642402 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 23:58:15.647904 ignition[691]: fetch-offline: fetch-offline passed
May 15 23:58:15.643551 systemd-networkd[787]: eth0: Link UP
May 15 23:58:15.647998 ignition[691]: Ignition finished successfully
May 15 23:58:15.643556 systemd-networkd[787]: eth0: Gained carrier
May 15 23:58:15.643563 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:58:15.644160 systemd[1]: Reached target network.target - Network.
May 15 23:58:15.647497 unknown[691]: fetched base config from "system"
May 15 23:58:15.647507 unknown[691]: fetched user config from "qemu"
May 15 23:58:15.650545 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 23:58:15.652971 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 15 23:58:15.653901 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 23:58:15.661453 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 23:58:15.688667 ignition[791]: Ignition 2.20.0
May 15 23:58:15.688683 ignition[791]: Stage: kargs
May 15 23:58:15.688906 ignition[791]: no configs at "/usr/lib/ignition/base.d"
May 15 23:58:15.688928 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:58:15.690077 ignition[791]: kargs: kargs passed
May 15 23:58:15.690142 ignition[791]: Ignition finished successfully
May 15 23:58:15.695750 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 23:58:15.700926 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 23:58:15.739510 ignition[801]: Ignition 2.20.0
May 15 23:58:15.739530 ignition[801]: Stage: disks
May 15 23:58:15.739770 ignition[801]: no configs at "/usr/lib/ignition/base.d"
May 15 23:58:15.739786 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:58:15.741037 ignition[801]: disks: disks passed
May 15 23:58:15.743808 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 23:58:15.741099 ignition[801]: Ignition finished successfully
May 15 23:58:15.745489 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 23:58:15.747423 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 23:58:15.749646 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 23:58:15.751842 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 23:58:15.754089 systemd[1]: Reached target basic.target - Basic System.
May 15 23:58:15.757462 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 23:58:15.788492 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 15 23:58:16.083187 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 23:58:16.084456 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 23:58:16.187345 kernel: EXT4-fs (vda9): mounted filesystem f27adc75-a467-4bfb-9c02-79a2879452a3 r/w with ordered data mode. Quota mode: none.
May 15 23:58:16.187580 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 23:58:16.189979 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 23:58:16.193763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 23:58:16.196733 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 23:58:16.198832 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 23:58:16.198880 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 23:58:16.198907 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 23:58:16.215058 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 23:58:16.220071 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (820)
May 15 23:58:16.221859 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 23:58:16.226515 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22
May 15 23:58:16.226553 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 23:58:16.226565 kernel: BTRFS info (device vda6): using free space tree
May 15 23:58:16.228330 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 23:58:16.244267 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 23:58:16.280058 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
May 15 23:58:16.287029 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
May 15 23:58:16.293386 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
May 15 23:58:16.298225 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 23:58:16.429622 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 23:58:16.433623 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 23:58:16.436951 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 23:58:16.458682 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 23:58:16.460215 kernel: BTRFS info (device vda6): last unmount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22
May 15 23:58:16.479594 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 23:58:16.511291 ignition[934]: INFO : Ignition 2.20.0
May 15 23:58:16.511291 ignition[934]: INFO : Stage: mount
May 15 23:58:16.513694 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 23:58:16.513694 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 23:58:16.513694 ignition[934]: INFO : mount: mount passed
May 15 23:58:16.513694 ignition[934]: INFO : Ignition finished successfully
May 15 23:58:16.519006 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 23:58:16.523967 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 23:58:16.552978 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 23:58:16.575666 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (947)
May 15 23:58:16.575762 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22
May 15 23:58:16.575779 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 23:58:16.577786 kernel: BTRFS info (device vda6): using free space tree
May 15 23:58:16.581403 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 23:58:16.583217 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 23:58:16.620329 ignition[964]: INFO : Ignition 2.20.0 May 15 23:58:16.620329 ignition[964]: INFO : Stage: files May 15 23:58:16.622582 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:58:16.622582 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:58:16.622582 ignition[964]: DEBUG : files: compiled without relabeling support, skipping May 15 23:58:16.626565 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 23:58:16.626565 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 23:58:16.629530 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 23:58:16.631069 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 23:58:16.632808 unknown[964]: wrote ssh authorized keys file for user: core May 15 23:58:16.634011 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 23:58:16.635615 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 23:58:16.635615 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 15 23:58:16.677127 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 23:58:16.838933 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 23:58:16.838933 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 23:58:16.843512 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 15 23:58:17.134648 systemd-networkd[787]: eth0: Gained IPv6LL May 15 23:58:17.330059 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 23:58:17.435152 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 23:58:17.435152 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 23:58:17.439182 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 23:58:17.439182 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 23:58:17.442784 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 23:58:17.442784 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 23:58:17.446572 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 23:58:17.446572 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 23:58:17.450626 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 23:58:17.453157 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 23:58:17.455640 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 23:58:17.457920 ignition[964]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 15 23:58:17.457920 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 15 23:58:17.457920 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 15 23:58:17.457920 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 15 23:58:18.294984 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 23:58:18.667431 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 15 23:58:18.667431 ignition[964]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 15 23:58:18.671384 ignition[964]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 23:58:18.673361 ignition[964]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 23:58:18.673361 ignition[964]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 15 23:58:18.673361 ignition[964]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 15 23:58:18.673361 ignition[964]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 23:58:18.673361 ignition[964]: INFO : files: op(e): op(f): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 23:58:18.673361 ignition[964]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 15 23:58:18.673361 ignition[964]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 15 23:58:18.704759 ignition[964]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 23:58:18.712067 ignition[964]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 23:58:18.714415 ignition[964]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 15 23:58:18.714415 ignition[964]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 15 23:58:18.714415 ignition[964]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 15 23:58:18.714415 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 23:58:18.714415 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 23:58:18.714415 ignition[964]: INFO : files: files passed May 15 23:58:18.714415 ignition[964]: INFO : Ignition finished successfully May 15 23:58:18.727572 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 23:58:18.730229 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 23:58:18.732888 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 23:58:18.749060 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 23:58:18.750288 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 15 23:58:18.753157 initrd-setup-root-after-ignition[994]: grep: /sysroot/oem/oem-release: No such file or directory May 15 23:58:18.754746 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 23:58:18.754746 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 23:58:18.758362 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 23:58:18.757568 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 23:58:18.761185 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 23:58:18.764802 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 23:58:18.827823 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 23:58:18.827977 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 23:58:18.830596 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 23:58:18.832700 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 23:58:18.834793 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 23:58:18.835826 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 23:58:18.867125 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 23:58:18.870342 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 23:58:18.902633 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 23:58:18.904473 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:58:18.907347 systemd[1]: Stopped target timers.target - Timer Units. 
May 15 23:58:18.909575 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 23:58:18.909800 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 23:58:18.912505 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 23:58:18.914080 systemd[1]: Stopped target basic.target - Basic System. May 15 23:58:18.916154 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 23:58:18.918379 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 23:58:18.920476 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 23:58:18.922733 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 23:58:18.924975 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 23:58:18.927329 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 23:58:18.929399 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 23:58:18.931623 systemd[1]: Stopped target swap.target - Swaps. May 15 23:58:18.933464 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 23:58:18.933600 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 23:58:18.936067 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 23:58:18.937593 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:58:18.939853 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 23:58:18.940036 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:58:18.942084 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 23:58:18.942217 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 23:58:18.944627 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 15 23:58:18.944750 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 23:58:18.946706 systemd[1]: Stopped target paths.target - Path Units. May 15 23:58:18.948505 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 23:58:18.952400 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:58:18.954632 systemd[1]: Stopped target slices.target - Slice Units. May 15 23:58:18.957081 systemd[1]: Stopped target sockets.target - Socket Units. May 15 23:58:18.959160 systemd[1]: iscsid.socket: Deactivated successfully. May 15 23:58:18.959302 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 23:58:18.961513 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 23:58:18.961637 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 23:58:18.964388 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 23:58:18.964560 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 23:58:18.966913 systemd[1]: ignition-files.service: Deactivated successfully. May 15 23:58:18.967065 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 23:58:18.970237 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 23:58:18.972971 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 23:58:18.974133 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 23:58:18.974294 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:58:18.976736 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 23:58:18.976895 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 23:58:18.985708 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 15 23:58:18.995215 ignition[1020]: INFO : Ignition 2.20.0 May 15 23:58:18.995215 ignition[1020]: INFO : Stage: umount May 15 23:58:18.995215 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:58:18.995215 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:58:18.995215 ignition[1020]: INFO : umount: umount passed May 15 23:58:18.995215 ignition[1020]: INFO : Ignition finished successfully May 15 23:58:18.985821 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 23:58:18.997304 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 23:58:18.997490 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 23:58:18.999394 systemd[1]: Stopped target network.target - Network. May 15 23:58:19.001270 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 23:58:19.001349 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 23:58:19.003414 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 23:58:19.003468 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 23:58:19.005494 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 23:58:19.005565 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 23:58:19.008181 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 23:58:19.008233 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 23:58:19.010749 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 23:58:19.013244 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 23:58:19.015855 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 23:58:19.024371 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 23:58:19.024514 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
May 15 23:58:19.028369 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 23:58:19.028577 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 23:58:19.028709 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 23:58:19.031740 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 23:58:19.032595 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 23:58:19.032677 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 23:58:19.036411 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 23:58:19.038194 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 23:58:19.038257 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 23:58:19.041015 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 23:58:19.041071 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 23:58:19.043756 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 23:58:19.043823 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 23:58:19.046998 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 23:58:19.047051 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:58:19.049619 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:58:19.053468 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 23:58:19.053557 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 23:58:19.068588 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 23:58:19.068758 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
May 15 23:58:19.078236 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 23:58:19.078440 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:58:19.081031 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 23:58:19.081094 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 23:58:19.082248 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 23:58:19.082287 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 23:58:19.084289 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 23:58:19.084356 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 23:58:19.088388 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 23:58:19.088443 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 23:58:19.091708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 23:58:19.091760 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:58:19.098436 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 23:58:19.099595 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 23:58:19.099666 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:58:19.103255 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 15 23:58:19.103305 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 23:58:19.105925 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 23:58:19.105987 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:58:19.107219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 15 23:58:19.107276 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:58:19.113776 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 23:58:19.113857 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 23:58:19.142986 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 23:58:19.143144 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 23:58:19.244286 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 23:58:19.244460 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 23:58:19.246933 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 23:58:19.248813 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 23:58:19.248875 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 23:58:19.252325 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 23:58:19.275658 systemd[1]: Switching root. May 15 23:58:19.312867 systemd-journald[193]: Journal stopped May 15 23:58:21.478223 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
May 15 23:58:21.478494 kernel: SELinux: policy capability network_peer_controls=1 May 15 23:58:21.478516 kernel: SELinux: policy capability open_perms=1 May 15 23:58:21.478532 kernel: SELinux: policy capability extended_socket_class=1 May 15 23:58:21.478547 kernel: SELinux: policy capability always_check_network=0 May 15 23:58:21.478562 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 23:58:21.478590 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 23:58:21.478606 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 23:58:21.478637 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 23:58:21.478659 kernel: audit: type=1403 audit(1747353500.330:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 23:58:21.478676 systemd[1]: Successfully loaded SELinux policy in 45.460ms. May 15 23:58:21.478702 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.141ms. May 15 23:58:21.478719 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 23:58:21.478735 systemd[1]: Detected virtualization kvm. May 15 23:58:21.478751 systemd[1]: Detected architecture x86-64. May 15 23:58:21.478767 systemd[1]: Detected first boot. May 15 23:58:21.478783 systemd[1]: Initializing machine ID from VM UUID. May 15 23:58:21.478806 zram_generator::config[1067]: No configuration found. 
May 15 23:58:21.478824 kernel: Guest personality initialized and is inactive May 15 23:58:21.478839 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 23:58:21.478855 kernel: Initialized host personality May 15 23:58:21.478870 kernel: NET: Registered PF_VSOCK protocol family May 15 23:58:21.478886 systemd[1]: Populated /etc with preset unit settings. May 15 23:58:21.478904 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 23:58:21.478921 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 23:58:21.478937 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 23:58:21.478957 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 23:58:21.478974 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 23:58:21.478997 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 23:58:21.479014 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 23:58:21.479030 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 23:58:21.479046 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 23:58:21.479063 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 23:58:21.479085 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 23:58:21.479116 systemd[1]: Created slice user.slice - User and Session Slice. May 15 23:58:21.479138 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:58:21.479157 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:58:21.479183 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
May 15 23:58:21.479220 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 23:58:21.479254 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 23:58:21.479286 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 23:58:21.479335 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 15 23:58:21.479360 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:58:21.479376 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 23:58:21.479392 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 23:58:21.479407 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 23:58:21.479422 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 23:58:21.479438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:58:21.479453 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 23:58:21.479468 systemd[1]: Reached target slices.target - Slice Units. May 15 23:58:21.479484 systemd[1]: Reached target swap.target - Swaps. May 15 23:58:21.479504 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 23:58:21.479520 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 23:58:21.479538 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 23:58:21.479554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 23:58:21.479571 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 23:58:21.479600 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 15 23:58:21.479618 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 23:58:21.479642 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 23:58:21.479663 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 23:58:21.479683 systemd[1]: Mounting media.mount - External Media Directory... May 15 23:58:21.479700 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 23:58:21.479717 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 23:58:21.479733 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 23:58:21.479750 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 23:58:21.479768 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 23:58:21.479785 systemd[1]: Reached target machines.target - Containers. May 15 23:58:21.479801 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 23:58:21.479822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:58:21.479839 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 23:58:21.479855 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 23:58:21.479871 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:58:21.479887 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 23:58:21.479903 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:58:21.479919 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 15 23:58:21.479935 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:58:21.479951 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 23:58:21.479971 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 23:58:21.479987 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 23:58:21.480002 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 23:58:21.480019 systemd[1]: Stopped systemd-fsck-usr.service. May 15 23:58:21.480036 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:58:21.480052 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 23:58:21.480067 kernel: fuse: init (API version 7.39) May 15 23:58:21.480082 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 23:58:21.480102 kernel: loop: module loaded May 15 23:58:21.480119 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 23:58:21.480134 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 23:58:21.480149 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 23:58:21.480221 systemd-journald[1142]: Collecting audit messages is disabled. May 15 23:58:21.480251 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 23:58:21.480267 systemd[1]: verity-setup.service: Deactivated successfully. May 15 23:58:21.480283 systemd[1]: Stopped verity-setup.service. 
May 15 23:58:21.480338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 23:58:21.480359 systemd-journald[1142]: Journal started May 15 23:58:21.480389 systemd-journald[1142]: Runtime Journal (/run/log/journal/d34d56ec6af24a0a95c70970b55c2798) is 6M, max 48.2M, 42.2M free. May 15 23:58:21.189138 systemd[1]: Queued start job for default target multi-user.target. May 15 23:58:21.207169 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 23:58:21.207896 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 23:58:21.483749 kernel: ACPI: bus type drm_connector registered May 15 23:58:21.486979 systemd[1]: Started systemd-journald.service - Journal Service. May 15 23:58:21.487914 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 23:58:21.496523 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 23:58:21.498043 systemd[1]: Mounted media.mount - External Media Directory. May 15 23:58:21.499288 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 23:58:21.500687 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 23:58:21.502066 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 23:58:21.503518 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 23:58:21.505193 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:58:21.506945 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 23:58:21.507223 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 23:58:21.508984 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:58:21.509223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 15 23:58:21.510889 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 23:58:21.511125 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 23:58:21.513058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:58:21.513295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:58:21.515123 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 23:58:21.515374 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 23:58:21.516941 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:58:21.517175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:58:21.518795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 23:58:21.520438 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 23:58:21.522202 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 23:58:21.524177 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 23:58:21.542517 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 23:58:21.545769 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 23:58:21.548401 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 23:58:21.549692 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 23:58:21.549725 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 23:58:21.552151 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 23:58:21.564482 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
May 15 23:58:21.567714 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 23:58:21.569498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:58:21.571505 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 23:58:21.574339 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 23:58:21.575942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 23:58:21.577691 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 23:58:21.579121 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 23:58:21.581630 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:58:21.587180 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 23:58:21.591770 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 23:58:21.599556 systemd-journald[1142]: Time spent on flushing to /var/log/journal/d34d56ec6af24a0a95c70970b55c2798 is 21.271ms for 1063 entries. May 15 23:58:21.599556 systemd-journald[1142]: System Journal (/var/log/journal/d34d56ec6af24a0a95c70970b55c2798) is 8M, max 195.6M, 187.6M free. May 15 23:58:21.636816 systemd-journald[1142]: Received client request to flush runtime journal. May 15 23:58:21.636869 kernel: loop0: detected capacity change from 0 to 109808 May 15 23:58:21.601041 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 23:58:21.603503 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
May 15 23:58:21.606175 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 23:58:21.613454 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 23:58:21.619841 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 23:58:21.630902 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 23:58:21.634352 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:58:21.636627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:58:21.639252 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 23:58:21.657981 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 23:58:21.667186 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. May 15 23:58:21.676764 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 23:58:21.667427 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. May 15 23:58:21.677358 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 23:58:21.680896 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 23:58:21.689484 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 23:58:21.724348 kernel: loop1: detected capacity change from 0 to 151640 May 15 23:58:21.733121 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 23:58:21.743705 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 23:58:21.854636 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 23:58:21.873303 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. 
May 15 23:58:21.873803 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. May 15 23:58:21.875329 kernel: loop2: detected capacity change from 0 to 224512 May 15 23:58:21.882879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:58:21.980350 kernel: loop3: detected capacity change from 0 to 109808 May 15 23:58:21.991346 kernel: loop4: detected capacity change from 0 to 151640 May 15 23:58:22.006380 kernel: loop5: detected capacity change from 0 to 224512 May 15 23:58:22.016285 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 15 23:58:22.017049 (sd-merge)[1215]: Merged extensions into '/usr'. May 15 23:58:22.025498 systemd[1]: Reload requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)... May 15 23:58:22.025520 systemd[1]: Reloading... May 15 23:58:22.124371 zram_generator::config[1244]: No configuration found. May 15 23:58:22.222384 ldconfig[1182]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 23:58:22.282897 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:58:22.356665 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 23:58:22.357332 systemd[1]: Reloading finished in 331 ms. May 15 23:58:22.377573 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 23:58:22.379172 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 23:58:22.394481 systemd[1]: Starting ensure-sysext.service... May 15 23:58:22.397090 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 23:58:22.422718 systemd[1]: Reload requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)... 
May 15 23:58:22.422736 systemd[1]: Reloading... May 15 23:58:22.430988 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 23:58:22.431262 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 23:58:22.432469 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 23:58:22.432836 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. May 15 23:58:22.432930 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. May 15 23:58:22.437819 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot. May 15 23:58:22.437905 systemd-tmpfiles[1281]: Skipping /boot May 15 23:58:22.456970 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot. May 15 23:58:22.457134 systemd-tmpfiles[1281]: Skipping /boot May 15 23:58:22.489356 zram_generator::config[1313]: No configuration found. May 15 23:58:22.629944 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:58:22.706855 systemd[1]: Reloading finished in 283 ms. May 15 23:58:22.722053 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 23:58:22.739634 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:58:22.751179 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 23:58:22.754098 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 23:58:22.757234 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 23:58:22.779456 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 15 23:58:22.787804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:58:22.793447 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 23:58:22.798907 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 23:58:22.799121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:58:22.804615 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:58:22.808954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:58:22.846073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:58:22.847476 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:58:22.847782 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:58:22.853195 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 23:58:22.868734 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 23:58:22.874477 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 23:58:22.874959 systemd-udevd[1355]: Using default interface naming scheme 'v255'. May 15 23:58:22.877181 augenrules[1379]: No rules May 15 23:58:22.877167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:58:22.877511 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 15 23:58:22.880537 systemd[1]: audit-rules.service: Deactivated successfully. May 15 23:58:22.887769 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 23:58:22.894902 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:58:22.895274 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:58:22.897647 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:58:22.897945 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:58:22.910532 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:58:22.916423 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 23:58:22.916702 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:58:22.918869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:58:22.947620 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:58:22.957736 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:58:22.960471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:58:22.960668 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:58:22.964862 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 23:58:22.968931 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 15 23:58:22.970013 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 23:58:22.972010 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 23:58:22.973909 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 23:58:22.982751 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:58:22.983018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:58:22.994925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:58:22.995263 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:58:22.999612 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 23:58:23.002515 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:58:23.002875 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:58:23.014835 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 23:58:23.038909 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 23:58:23.106175 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 23:58:23.107408 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 15 23:58:23.107807 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:58:23.112455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 15 23:58:23.140461 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 15 23:58:23.140785 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 23:58:23.140958 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 15 23:58:23.141142 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 23:58:23.140905 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 23:58:23.149335 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 15 23:58:23.156742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:58:23.160733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:58:23.163561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:58:23.163725 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:58:23.163905 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 23:58:23.164033 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 23:58:23.171353 kernel: ACPI: button: Power Button [PWRF] May 15 23:58:23.177558 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:58:23.177886 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:58:23.180150 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 15 23:58:23.180479 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 23:58:23.182505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:58:23.182824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:58:23.189053 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1404) May 15 23:58:23.194244 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:58:23.195676 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:58:23.202532 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 15 23:58:23.213640 systemd[1]: Finished ensure-sysext.service. May 15 23:58:23.226888 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 23:58:23.226961 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 23:58:23.229687 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 23:58:23.242480 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:58:23.263574 augenrules[1430]: /sbin/augenrules: No change May 15 23:58:23.273675 systemd-resolved[1353]: Positive Trust Anchors: May 15 23:58:23.273702 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 23:58:23.278868 augenrules[1465]: No rules May 15 23:58:23.273745 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 23:58:23.278488 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 23:58:23.278870 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:58:23.281850 systemd-resolved[1353]: Defaulting to hostname 'linux'. May 15 23:58:23.285846 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:58:23.288109 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 23:58:23.290476 systemd[1]: audit-rules.service: Deactivated successfully. May 15 23:58:23.290858 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 23:58:23.298735 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 23:58:23.393972 systemd-networkd[1411]: lo: Link UP May 15 23:58:23.393988 systemd-networkd[1411]: lo: Gained carrier May 15 23:58:23.397861 systemd-networkd[1411]: Enumeration completed May 15 23:58:23.398456 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 23:58:23.400710 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 23:58:23.400905 systemd-networkd[1411]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:58:23.403658 systemd-networkd[1411]: eth0: Link UP May 15 23:58:23.403863 systemd-networkd[1411]: eth0: Gained carrier May 15 23:58:23.403993 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:58:23.409062 systemd[1]: Reached target network.target - Network. May 15 23:58:23.413471 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 23:58:23.419502 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 23:58:23.435342 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 23:58:23.454407 kernel: mousedev: PS/2 mouse device common for all mice May 15 23:58:23.454525 systemd-networkd[1411]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 23:58:23.455449 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 23:58:23.509430 kernel: kvm_amd: TSC scaling supported May 15 23:58:23.509495 kernel: kvm_amd: Nested Virtualization enabled May 15 23:58:23.509512 kernel: kvm_amd: Nested Paging enabled May 15 23:58:23.509564 kernel: kvm_amd: LBR virtualization supported May 15 23:58:23.512328 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 15 23:58:23.512373 kernel: kvm_amd: Virtual GIF supported May 15 23:58:23.510883 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 23:58:23.529219 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 23:58:23.532116 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
May 15 23:58:23.532490 systemd[1]: Reached target time-set.target - System Time Set. May 15 23:58:23.534474 systemd-timesyncd[1453]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 23:58:23.534858 systemd-timesyncd[1453]: Initial clock synchronization to Thu 2025-05-15 23:58:23.444376 UTC. May 15 23:58:23.549345 kernel: EDAC MC: Ver: 3.0.0 May 15 23:58:23.560138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:58:23.597124 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 23:58:23.602490 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 23:58:23.641463 lvm[1489]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 23:58:23.680023 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 23:58:23.681849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 23:58:23.683175 systemd[1]: Reached target sysinit.target - System Initialization. May 15 23:58:23.684583 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 23:58:23.685998 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 23:58:23.687758 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 23:58:23.689246 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 23:58:23.690733 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 23:58:23.692185 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 23:58:23.692229 systemd[1]: Reached target paths.target - Path Units. 
May 15 23:58:23.693214 systemd[1]: Reached target timers.target - Timer Units. May 15 23:58:23.695331 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 23:58:23.698471 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 23:58:23.702391 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 23:58:23.704004 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 23:58:23.705456 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 23:58:23.709489 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 23:58:23.711082 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 23:58:23.713701 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 23:58:23.715454 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 23:58:23.716676 systemd[1]: Reached target sockets.target - Socket Units. May 15 23:58:23.717850 systemd[1]: Reached target basic.target - Basic System. May 15 23:58:23.718902 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 23:58:23.718933 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 23:58:23.719961 systemd[1]: Starting containerd.service - containerd container runtime... May 15 23:58:23.722295 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 23:58:23.728462 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 23:58:23.732336 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 15 23:58:23.734987 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 23:58:23.736484 jq[1496]: false May 15 23:58:23.736798 lvm[1493]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 23:58:23.736667 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 23:58:23.737923 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 23:58:23.744053 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 23:58:23.748683 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 23:58:23.753812 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 23:58:23.756064 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 23:58:23.757257 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 23:58:23.762517 systemd[1]: Starting update-engine.service - Update Engine... 
May 15 23:58:23.765201 extend-filesystems[1497]: Found loop3 May 15 23:58:23.766284 extend-filesystems[1497]: Found loop4 May 15 23:58:23.766284 extend-filesystems[1497]: Found loop5 May 15 23:58:23.766284 extend-filesystems[1497]: Found sr0 May 15 23:58:23.766284 extend-filesystems[1497]: Found vda May 15 23:58:23.766284 extend-filesystems[1497]: Found vda1 May 15 23:58:23.766284 extend-filesystems[1497]: Found vda2 May 15 23:58:23.766284 extend-filesystems[1497]: Found vda3 May 15 23:58:23.766284 extend-filesystems[1497]: Found usr May 15 23:58:23.782329 extend-filesystems[1497]: Found vda4 May 15 23:58:23.782329 extend-filesystems[1497]: Found vda6 May 15 23:58:23.782329 extend-filesystems[1497]: Found vda7 May 15 23:58:23.782329 extend-filesystems[1497]: Found vda9 May 15 23:58:23.782329 extend-filesystems[1497]: Checking size of /dev/vda9 May 15 23:58:23.782329 extend-filesystems[1497]: Resized partition /dev/vda9 May 15 23:58:23.769616 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 23:58:23.770794 dbus-daemon[1495]: [system] SELinux support is enabled May 15 23:58:23.793895 extend-filesystems[1517]: resize2fs 1.47.2 (1-Jan-2025) May 15 23:58:23.773923 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 23:58:23.778751 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 23:58:23.796836 update_engine[1509]: I20250515 23:58:23.785856 1509 main.cc:92] Flatcar Update Engine starting May 15 23:58:23.796836 update_engine[1509]: I20250515 23:58:23.787199 1509 update_check_scheduler.cc:74] Next update check in 5m34s May 15 23:58:23.782602 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 23:58:23.782856 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 23:58:23.783184 systemd[1]: motdgen.service: Deactivated successfully. 
May 15 23:58:23.784356 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 23:58:23.790535 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 23:58:23.790785 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 23:58:23.801936 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 23:58:23.801995 jq[1510]: true May 15 23:58:23.810616 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1405) May 15 23:58:23.812708 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 23:58:23.812746 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 23:58:23.814871 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 23:58:23.814914 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 23:58:23.818533 systemd[1]: Started update-engine.service - Update Engine. May 15 23:58:23.822758 tar[1519]: linux-amd64/LICENSE May 15 23:58:23.823026 tar[1519]: linux-amd64/helm May 15 23:58:23.825284 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 23:58:23.825395 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 15 23:58:23.833336 jq[1527]: true May 15 23:58:23.869405 systemd-logind[1505]: Watching system buttons on /dev/input/event1 (Power Button) May 15 23:58:23.869443 systemd-logind[1505]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 23:58:23.878393 systemd-logind[1505]: New seat seat0. May 15 23:58:23.885930 systemd[1]: Started systemd-logind.service - User Login Management. May 15 23:58:23.932372 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 23:58:24.011029 locksmithd[1532]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 23:58:24.477675 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 23:58:24.477675 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 23:58:24.477675 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 23:58:24.492825 extend-filesystems[1497]: Resized filesystem in /dev/vda9 May 15 23:58:24.480360 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 23:58:24.494466 sshd_keygen[1524]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 23:58:24.480724 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 23:58:24.518176 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 23:58:24.521884 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 23:58:24.569519 systemd[1]: issuegen.service: Deactivated successfully. May 15 23:58:24.569810 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 23:58:24.574710 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 23:58:24.614292 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 23:58:24.618748 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 23:58:24.639027 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
May 15 23:58:24.650845 systemd[1]: Reached target getty.target - Login Prompts. May 15 23:58:24.740754 containerd[1523]: time="2025-05-15T23:58:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 23:58:24.744676 containerd[1523]: time="2025-05-15T23:58:24.744339228Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 15 23:58:24.761161 containerd[1523]: time="2025-05-15T23:58:24.760996458Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.701µs" May 15 23:58:24.761161 containerd[1523]: time="2025-05-15T23:58:24.761054904Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 23:58:24.761161 containerd[1523]: time="2025-05-15T23:58:24.761078709Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 23:58:24.761485 containerd[1523]: time="2025-05-15T23:58:24.761374068Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 23:58:24.761485 containerd[1523]: time="2025-05-15T23:58:24.761399774Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 23:58:24.761485 containerd[1523]: time="2025-05-15T23:58:24.761435763Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 23:58:24.761572 containerd[1523]: time="2025-05-15T23:58:24.761526562Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 23:58:24.761572 containerd[1523]: time="2025-05-15T23:58:24.761539906Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 23:58:24.761954 containerd[1523]: time="2025-05-15T23:58:24.761888233Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 23:58:24.761954 containerd[1523]: time="2025-05-15T23:58:24.761914593Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 23:58:24.762026 containerd[1523]: time="2025-05-15T23:58:24.761952146Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 23:58:24.762026 containerd[1523]: time="2025-05-15T23:58:24.761972275Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 23:58:24.762544 containerd[1523]: time="2025-05-15T23:58:24.762098290Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 23:58:24.762544 containerd[1523]: time="2025-05-15T23:58:24.762420327Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 23:58:24.762544 containerd[1523]: time="2025-05-15T23:58:24.762456128Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 23:58:24.762544 containerd[1523]: time="2025-05-15T23:58:24.762467341Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 23:58:24.762544 containerd[1523]: time="2025-05-15T23:58:24.762528977Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 23:58:24.762986 containerd[1523]: time="2025-05-15T23:58:24.762894036Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 23:58:24.762986 containerd[1523]: time="2025-05-15T23:58:24.762980861Z" level=info msg="metadata content store policy set" policy=shared May 15 23:58:24.793439 bash[1549]: Updated "/home/core/.ssh/authorized_keys" May 15 23:58:24.794993 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.816537273Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.816676939Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.816710065Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.816730937Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.816750204Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.816764528Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.816781171Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.816803202Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.816817892Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.816832782Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.816847809Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.816871632Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.817056876Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 23:58:24.818331 containerd[1523]: time="2025-05-15T23:58:24.817079581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 23:58:24.816582 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 15 23:58:24.819052 containerd[1523]: time="2025-05-15T23:58:24.817091449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 23:58:24.819052 containerd[1523]: time="2025-05-15T23:58:24.817102613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 23:58:24.819052 containerd[1523]: time="2025-05-15T23:58:24.817113717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 23:58:24.819052 containerd[1523]: time="2025-05-15T23:58:24.817125654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 23:58:24.819052 containerd[1523]: time="2025-05-15T23:58:24.817137630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 23:58:24.819052 containerd[1523]: time="2025-05-15T23:58:24.817151559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 23:58:24.819052 containerd[1523]: time="2025-05-15T23:58:24.817162851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 23:58:24.819052 containerd[1523]: time="2025-05-15T23:58:24.817175393Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 23:58:24.819052 containerd[1523]: time="2025-05-15T23:58:24.817186041Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 23:58:24.819052 containerd[1523]: time="2025-05-15T23:58:24.817284865Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 23:58:24.819052 containerd[1523]: time="2025-05-15T23:58:24.817329045Z" level=info msg="Start snapshots syncer" May 15 23:58:24.819052 containerd[1523]: time="2025-05-15T23:58:24.817358417Z" level=info msg="loading plugin" 
id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 23:58:24.819416 containerd[1523]: time="2025-05-15T23:58:24.817625287Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 23:58:24.819416 containerd[1523]: time="2025-05-15T23:58:24.817680443Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.817763902Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.817886074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.817990474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.818005323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.818015318Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.818063590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.818076280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.818095686Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.818119580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.818131912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.818141590Z" level=info msg="loading plugin" 
id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.818198352Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.818212429Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 23:58:24.819659 containerd[1523]: time="2025-05-15T23:58:24.818221345Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 23:58:24.820228 containerd[1523]: time="2025-05-15T23:58:24.818230755Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 23:58:24.820228 containerd[1523]: time="2025-05-15T23:58:24.818248577Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 23:58:24.820228 containerd[1523]: time="2025-05-15T23:58:24.818260167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 23:58:24.820228 containerd[1523]: time="2025-05-15T23:58:24.818271786Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 23:58:24.820228 containerd[1523]: time="2025-05-15T23:58:24.818305892Z" level=info msg="runtime interface created" May 15 23:58:24.820228 containerd[1523]: time="2025-05-15T23:58:24.818327696Z" level=info msg="created NRI interface" May 15 23:58:24.820228 containerd[1523]: time="2025-05-15T23:58:24.818339474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 23:58:24.820228 containerd[1523]: time="2025-05-15T23:58:24.818357969Z" level=info msg="Connect containerd service" May 15 
23:58:24.820228 containerd[1523]: time="2025-05-15T23:58:24.818413948Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 23:58:24.820228 containerd[1523]: time="2025-05-15T23:58:24.819271853Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 23:58:25.066435 containerd[1523]: time="2025-05-15T23:58:25.065814145Z" level=info msg="Start subscribing containerd event" May 15 23:58:25.066435 containerd[1523]: time="2025-05-15T23:58:25.065943623Z" level=info msg="Start recovering state" May 15 23:58:25.066435 containerd[1523]: time="2025-05-15T23:58:25.066120470Z" level=info msg="Start event monitor" May 15 23:58:25.066435 containerd[1523]: time="2025-05-15T23:58:25.066153712Z" level=info msg="Start cni network conf syncer for default" May 15 23:58:25.066435 containerd[1523]: time="2025-05-15T23:58:25.066162501Z" level=info msg="Start streaming server" May 15 23:58:25.066435 containerd[1523]: time="2025-05-15T23:58:25.066188888Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 23:58:25.066435 containerd[1523]: time="2025-05-15T23:58:25.066192341Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 23:58:25.066435 containerd[1523]: time="2025-05-15T23:58:25.066227052Z" level=info msg="runtime interface starting up..." May 15 23:58:25.066435 containerd[1523]: time="2025-05-15T23:58:25.066236436Z" level=info msg="starting plugins..." May 15 23:58:25.066435 containerd[1523]: time="2025-05-15T23:58:25.066258409Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 23:58:25.066435 containerd[1523]: time="2025-05-15T23:58:25.066275333Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 15 23:58:25.067242 containerd[1523]: time="2025-05-15T23:58:25.067220606Z" level=info msg="containerd successfully booted in 0.328953s" May 15 23:58:25.067521 systemd[1]: Started containerd.service - containerd container runtime. May 15 23:58:25.126262 tar[1519]: linux-amd64/README.md May 15 23:58:25.157965 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 23:58:25.390535 systemd-networkd[1411]: eth0: Gained IPv6LL May 15 23:58:25.394038 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 23:58:25.395989 systemd[1]: Reached target network-online.target - Network is Online. May 15 23:58:25.398932 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 23:58:25.402579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:58:25.412628 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 23:58:25.435825 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 23:58:25.436151 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 23:58:25.438020 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 23:58:25.442841 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 23:58:26.381849 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 23:58:26.385166 systemd[1]: Started sshd@0-10.0.0.27:22-10.0.0.1:52522.service - OpenSSH per-connection server daemon (10.0.0.1:52522). May 15 23:58:26.553136 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 52522 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 15 23:58:26.556792 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:26.564214 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 15 23:58:26.567066 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 23:58:26.574819 systemd-logind[1505]: New session 1 of user core. May 15 23:58:26.595386 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 23:58:26.601945 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 23:58:26.624527 (systemd)[1621]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 23:58:26.627533 systemd-logind[1505]: New session c1 of user core. May 15 23:58:26.829180 systemd[1621]: Queued start job for default target default.target. May 15 23:58:26.841404 systemd[1621]: Created slice app.slice - User Application Slice. May 15 23:58:26.841454 systemd[1621]: Reached target paths.target - Paths. May 15 23:58:26.841519 systemd[1621]: Reached target timers.target - Timers. May 15 23:58:26.844245 systemd[1621]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 23:58:26.863815 systemd[1621]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 23:58:26.863988 systemd[1621]: Reached target sockets.target - Sockets. May 15 23:58:26.864050 systemd[1621]: Reached target basic.target - Basic System. May 15 23:58:26.864105 systemd[1621]: Reached target default.target - Main User Target. May 15 23:58:26.864158 systemd[1621]: Startup finished in 225ms. May 15 23:58:26.864884 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 23:58:26.869031 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 23:58:26.972248 systemd[1]: Started sshd@1-10.0.0.27:22-10.0.0.1:48532.service - OpenSSH per-connection server daemon (10.0.0.1:48532). May 15 23:58:27.018113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:58:27.027156 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 15 23:58:27.029685 systemd[1]: Startup finished in 853ms (kernel) + 7.614s (initrd) + 6.742s (userspace) = 15.210s. May 15 23:58:27.034806 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:58:27.043032 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 48532 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 15 23:58:27.045711 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:27.052103 systemd-logind[1505]: New session 2 of user core. May 15 23:58:27.054855 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 23:58:27.118171 sshd[1642]: Connection closed by 10.0.0.1 port 48532 May 15 23:58:27.119002 sshd-session[1634]: pam_unix(sshd:session): session closed for user core May 15 23:58:27.130601 systemd[1]: sshd@1-10.0.0.27:22-10.0.0.1:48532.service: Deactivated successfully. May 15 23:58:27.132817 systemd[1]: session-2.scope: Deactivated successfully. May 15 23:58:27.133736 systemd-logind[1505]: Session 2 logged out. Waiting for processes to exit. May 15 23:58:27.135986 systemd[1]: Started sshd@2-10.0.0.27:22-10.0.0.1:48548.service - OpenSSH per-connection server daemon (10.0.0.1:48548). May 15 23:58:27.138725 systemd-logind[1505]: Removed session 2. May 15 23:58:27.196732 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 48548 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 15 23:58:27.199244 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:27.205260 systemd-logind[1505]: New session 3 of user core. May 15 23:58:27.214571 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 15 23:58:27.270291 sshd[1657]: Connection closed by 10.0.0.1 port 48548 May 15 23:58:27.270861 sshd-session[1653]: pam_unix(sshd:session): session closed for user core May 15 23:58:27.288101 systemd[1]: sshd@2-10.0.0.27:22-10.0.0.1:48548.service: Deactivated successfully. May 15 23:58:27.290043 systemd[1]: session-3.scope: Deactivated successfully. May 15 23:58:27.291041 systemd-logind[1505]: Session 3 logged out. Waiting for processes to exit. May 15 23:58:27.293750 systemd[1]: Started sshd@3-10.0.0.27:22-10.0.0.1:48564.service - OpenSSH per-connection server daemon (10.0.0.1:48564). May 15 23:58:27.294492 systemd-logind[1505]: Removed session 3. May 15 23:58:27.349208 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 48564 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 15 23:58:27.350698 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:27.357822 systemd-logind[1505]: New session 4 of user core. May 15 23:58:27.368695 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 23:58:27.431496 sshd[1665]: Connection closed by 10.0.0.1 port 48564 May 15 23:58:27.431943 sshd-session[1662]: pam_unix(sshd:session): session closed for user core May 15 23:58:27.444578 systemd[1]: sshd@3-10.0.0.27:22-10.0.0.1:48564.service: Deactivated successfully. May 15 23:58:27.447255 systemd[1]: session-4.scope: Deactivated successfully. May 15 23:58:27.449938 systemd-logind[1505]: Session 4 logged out. Waiting for processes to exit. May 15 23:58:27.451681 systemd[1]: Started sshd@4-10.0.0.27:22-10.0.0.1:48578.service - OpenSSH per-connection server daemon (10.0.0.1:48578). May 15 23:58:27.452922 systemd-logind[1505]: Removed session 4. 
May 15 23:58:27.506381 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 48578 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 15 23:58:27.530010 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:27.536159 systemd-logind[1505]: New session 5 of user core. May 15 23:58:27.545537 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 23:58:27.646870 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 23:58:27.647511 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:58:27.671938 sudo[1675]: pam_unix(sudo:session): session closed for user root May 15 23:58:27.674503 sshd[1674]: Connection closed by 10.0.0.1 port 48578 May 15 23:58:27.675070 sshd-session[1671]: pam_unix(sshd:session): session closed for user core May 15 23:58:27.684272 systemd[1]: sshd@4-10.0.0.27:22-10.0.0.1:48578.service: Deactivated successfully. May 15 23:58:27.686233 systemd[1]: session-5.scope: Deactivated successfully. May 15 23:58:27.687880 systemd-logind[1505]: Session 5 logged out. Waiting for processes to exit. May 15 23:58:27.689526 systemd[1]: Started sshd@5-10.0.0.27:22-10.0.0.1:48594.service - OpenSSH per-connection server daemon (10.0.0.1:48594). May 15 23:58:27.690487 systemd-logind[1505]: Removed session 5. May 15 23:58:27.738403 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 48594 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 15 23:58:27.740359 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:27.746146 systemd-logind[1505]: New session 6 of user core. May 15 23:58:27.754501 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 15 23:58:27.764417 kubelet[1639]: E0515 23:58:27.764362 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:58:27.768464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:58:27.768678 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:58:27.769038 systemd[1]: kubelet.service: Consumed 2.072s CPU time, 267.3M memory peak. May 15 23:58:27.811662 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 23:58:27.812130 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:58:27.817435 sudo[1686]: pam_unix(sudo:session): session closed for user root May 15 23:58:27.826439 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 23:58:27.826905 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:58:27.838183 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 23:58:27.886300 augenrules[1708]: No rules May 15 23:58:27.888281 systemd[1]: audit-rules.service: Deactivated successfully. May 15 23:58:27.888641 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 23:58:27.890039 sudo[1685]: pam_unix(sudo:session): session closed for user root May 15 23:58:27.891722 sshd[1683]: Connection closed by 10.0.0.1 port 48594 May 15 23:58:27.892213 sshd-session[1680]: pam_unix(sshd:session): session closed for user core May 15 23:58:27.901533 systemd[1]: sshd@5-10.0.0.27:22-10.0.0.1:48594.service: Deactivated successfully. 
May 15 23:58:27.904607 systemd[1]: session-6.scope: Deactivated successfully. May 15 23:58:27.908790 systemd-logind[1505]: Session 6 logged out. Waiting for processes to exit. May 15 23:58:27.910534 systemd[1]: Started sshd@6-10.0.0.27:22-10.0.0.1:48600.service - OpenSSH per-connection server daemon (10.0.0.1:48600). May 15 23:58:27.911467 systemd-logind[1505]: Removed session 6. May 15 23:58:27.968195 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 48600 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 15 23:58:27.969662 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:58:27.974425 systemd-logind[1505]: New session 7 of user core. May 15 23:58:27.984508 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 23:58:28.037565 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 23:58:28.037918 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:58:28.699118 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 23:58:28.712762 (dockerd)[1741]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 23:58:29.268451 dockerd[1741]: time="2025-05-15T23:58:29.268369620Z" level=info msg="Starting up" May 15 23:58:29.269420 dockerd[1741]: time="2025-05-15T23:58:29.269379673Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 23:58:31.495239 dockerd[1741]: time="2025-05-15T23:58:31.495139959Z" level=info msg="Loading containers: start." May 15 23:58:32.259353 kernel: Initializing XFRM netlink socket May 15 23:58:32.336159 systemd-networkd[1411]: docker0: Link UP May 15 23:58:32.526388 dockerd[1741]: time="2025-05-15T23:58:32.526190488Z" level=info msg="Loading containers: done." 
May 15 23:58:32.544040 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1214814409-merged.mount: Deactivated successfully. May 15 23:58:32.763389 dockerd[1741]: time="2025-05-15T23:58:32.763276817Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 23:58:32.763597 dockerd[1741]: time="2025-05-15T23:58:32.763430958Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 15 23:58:32.763597 dockerd[1741]: time="2025-05-15T23:58:32.763592955Z" level=info msg="Daemon has completed initialization" May 15 23:58:32.943950 dockerd[1741]: time="2025-05-15T23:58:32.943725042Z" level=info msg="API listen on /run/docker.sock" May 15 23:58:32.944105 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 23:58:33.679894 containerd[1523]: time="2025-05-15T23:58:33.679832067Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 15 23:58:35.215952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814059196.mount: Deactivated successfully. May 15 23:58:38.019385 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 23:58:38.021706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:58:38.279141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 23:58:38.291710 (kubelet)[2013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:58:38.990577 kubelet[2013]: E0515 23:58:38.990450 2013 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:58:38.997812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:58:38.998038 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:58:38.998494 systemd[1]: kubelet.service: Consumed 297ms CPU time, 114.2M memory peak. May 15 23:58:40.011082 containerd[1523]: time="2025-05-15T23:58:40.011017364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:40.055585 containerd[1523]: time="2025-05-15T23:58:40.055398323Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 15 23:58:40.092843 containerd[1523]: time="2025-05-15T23:58:40.091837217Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:40.138152 containerd[1523]: time="2025-05-15T23:58:40.138065530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:40.139271 containerd[1523]: time="2025-05-15T23:58:40.139217232Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id 
\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 6.459327589s" May 15 23:58:40.139271 containerd[1523]: time="2025-05-15T23:58:40.139264598Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 15 23:58:40.139845 containerd[1523]: time="2025-05-15T23:58:40.139813309Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 15 23:58:42.911211 containerd[1523]: time="2025-05-15T23:58:42.911127702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:42.980104 containerd[1523]: time="2025-05-15T23:58:42.979978346Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 15 23:58:43.040698 containerd[1523]: time="2025-05-15T23:58:43.040497358Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:43.073869 containerd[1523]: time="2025-05-15T23:58:43.073763922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:43.075161 containerd[1523]: time="2025-05-15T23:58:43.075109241Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 2.935262669s" May 15 23:58:43.075161 containerd[1523]: time="2025-05-15T23:58:43.075150291Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 15 23:58:43.075851 containerd[1523]: time="2025-05-15T23:58:43.075810013Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 15 23:58:48.746840 containerd[1523]: time="2025-05-15T23:58:48.746723879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:48.750611 containerd[1523]: time="2025-05-15T23:58:48.750525619Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 15 23:58:48.753390 containerd[1523]: time="2025-05-15T23:58:48.753282395Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:48.757916 containerd[1523]: time="2025-05-15T23:58:48.757832559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:48.759099 containerd[1523]: time="2025-05-15T23:58:48.759044952Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 5.683186806s" May 15 23:58:48.759099 
containerd[1523]: time="2025-05-15T23:58:48.759094943Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 15 23:58:48.759664 containerd[1523]: time="2025-05-15T23:58:48.759620750Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 15 23:58:49.248480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 23:58:49.250169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:58:49.444170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:58:49.455633 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:58:49.491985 kubelet[2038]: E0515 23:58:49.491920 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:58:49.496254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:58:49.496519 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:58:49.496932 systemd[1]: kubelet.service: Consumed 224ms CPU time, 112.7M memory peak. May 15 23:58:51.908277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520516742.mount: Deactivated successfully. 
May 15 23:58:52.970897 containerd[1523]: time="2025-05-15T23:58:52.970824217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:52.971848 containerd[1523]: time="2025-05-15T23:58:52.971734040Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 15 23:58:52.973711 containerd[1523]: time="2025-05-15T23:58:52.973673993Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:52.976586 containerd[1523]: time="2025-05-15T23:58:52.976534527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:52.977076 containerd[1523]: time="2025-05-15T23:58:52.977023243Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 4.217364436s" May 15 23:58:52.977076 containerd[1523]: time="2025-05-15T23:58:52.977058539Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 15 23:58:52.977689 containerd[1523]: time="2025-05-15T23:58:52.977645404Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 23:58:53.757325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2917395347.mount: Deactivated successfully. 
May 15 23:58:55.615378 containerd[1523]: time="2025-05-15T23:58:55.615277722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:55.683127 containerd[1523]: time="2025-05-15T23:58:55.683028095Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 15 23:58:55.755863 containerd[1523]: time="2025-05-15T23:58:55.755789928Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:55.800245 containerd[1523]: time="2025-05-15T23:58:55.800172186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:58:55.801296 containerd[1523]: time="2025-05-15T23:58:55.801248266Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.823565199s" May 15 23:58:55.801493 containerd[1523]: time="2025-05-15T23:58:55.801298421Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 15 23:58:55.802119 containerd[1523]: time="2025-05-15T23:58:55.802093171Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 23:58:58.520226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184684068.mount: Deactivated successfully. 
May 15 23:58:58.531672 containerd[1523]: time="2025-05-15T23:58:58.531586310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:58:58.532920 containerd[1523]: time="2025-05-15T23:58:58.532844262Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 23:58:58.534901 containerd[1523]: time="2025-05-15T23:58:58.534860951Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:58:58.537451 containerd[1523]: time="2025-05-15T23:58:58.537392265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:58:58.538223 containerd[1523]: time="2025-05-15T23:58:58.538108221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.735979177s" May 15 23:58:58.538223 containerd[1523]: time="2025-05-15T23:58:58.538153307Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 23:58:58.538854 containerd[1523]: time="2025-05-15T23:58:58.538803077Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 23:58:59.053851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575215669.mount: 
Deactivated successfully. May 15 23:58:59.746952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 15 23:58:59.748622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:58:59.927617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:58:59.931704 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:58:59.964902 kubelet[2122]: E0515 23:58:59.964843 2122 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:58:59.969135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:58:59.969367 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:58:59.969804 systemd[1]: kubelet.service: Consumed 211ms CPU time, 110.7M memory peak. 
May 15 23:59:03.793708 containerd[1523]: time="2025-05-15T23:59:03.793611572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:59:03.819816 containerd[1523]: time="2025-05-15T23:59:03.819696734Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 15 23:59:03.837968 containerd[1523]: time="2025-05-15T23:59:03.837864145Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:59:03.854301 containerd[1523]: time="2025-05-15T23:59:03.854224771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:59:03.855840 containerd[1523]: time="2025-05-15T23:59:03.855779658Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.316917379s" May 15 23:59:03.855840 containerd[1523]: time="2025-05-15T23:59:03.855824173Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 15 23:59:06.157183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:59:06.157475 systemd[1]: kubelet.service: Consumed 211ms CPU time, 110.7M memory peak. May 15 23:59:06.159926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:59:06.196413 systemd[1]: Reload requested from client PID 2211 ('systemctl') (unit session-7.scope)... 
May 15 23:59:06.196438 systemd[1]: Reloading... May 15 23:59:06.292753 zram_generator::config[2257]: No configuration found. May 15 23:59:06.648604 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:59:06.755193 systemd[1]: Reloading finished in 558 ms. May 15 23:59:06.815443 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 23:59:06.815596 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 23:59:06.816015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:59:06.816074 systemd[1]: kubelet.service: Consumed 176ms CPU time, 98.2M memory peak. May 15 23:59:06.818368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:59:07.019703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:59:07.031805 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 23:59:07.144701 kubelet[2302]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:59:07.144701 kubelet[2302]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 23:59:07.144701 kubelet[2302]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 23:59:07.144701 kubelet[2302]: I0515 23:59:07.144460 2302 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 23:59:07.604376 kubelet[2302]: I0515 23:59:07.603825 2302 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 15 23:59:07.604376 kubelet[2302]: I0515 23:59:07.603855 2302 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 23:59:07.604376 kubelet[2302]: I0515 23:59:07.604381 2302 server.go:954] "Client rotation is on, will bootstrap in background" May 15 23:59:07.633129 kubelet[2302]: E0515 23:59:07.633050 2302 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:07.633439 kubelet[2302]: I0515 23:59:07.633219 2302 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 23:59:07.648736 kubelet[2302]: I0515 23:59:07.648427 2302 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 23:59:07.653937 kubelet[2302]: I0515 23:59:07.653907 2302 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 23:59:07.656137 kubelet[2302]: I0515 23:59:07.656063 2302 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 23:59:07.656496 kubelet[2302]: I0515 23:59:07.656118 2302 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 23:59:07.656714 kubelet[2302]: I0515 23:59:07.656505 2302 topology_manager.go:138] "Creating topology manager with none policy" 
May 15 23:59:07.656714 kubelet[2302]: I0515 23:59:07.656521 2302 container_manager_linux.go:304] "Creating device plugin manager" May 15 23:59:07.656714 kubelet[2302]: I0515 23:59:07.656709 2302 state_mem.go:36] "Initialized new in-memory state store" May 15 23:59:07.665825 kubelet[2302]: I0515 23:59:07.665758 2302 kubelet.go:446] "Attempting to sync node with API server" May 15 23:59:07.665825 kubelet[2302]: I0515 23:59:07.665805 2302 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 23:59:07.665825 kubelet[2302]: I0515 23:59:07.665838 2302 kubelet.go:352] "Adding apiserver pod source" May 15 23:59:07.666048 kubelet[2302]: I0515 23:59:07.665853 2302 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 23:59:07.679095 kubelet[2302]: W0515 23:59:07.678160 2302 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 15 23:59:07.679095 kubelet[2302]: E0515 23:59:07.678340 2302 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:07.679095 kubelet[2302]: I0515 23:59:07.678352 2302 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 15 23:59:07.679095 kubelet[2302]: W0515 23:59:07.678819 2302 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 15 23:59:07.679095 
kubelet[2302]: E0515 23:59:07.678872 2302 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:07.679095 kubelet[2302]: I0515 23:59:07.678937 2302 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 23:59:07.680605 kubelet[2302]: W0515 23:59:07.680400 2302 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 23:59:07.684153 kubelet[2302]: I0515 23:59:07.684113 2302 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 23:59:07.684217 kubelet[2302]: I0515 23:59:07.684163 2302 server.go:1287] "Started kubelet" May 15 23:59:07.685040 kubelet[2302]: I0515 23:59:07.684962 2302 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 23:59:07.685607 kubelet[2302]: I0515 23:59:07.685515 2302 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 23:59:07.686515 kubelet[2302]: I0515 23:59:07.685862 2302 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 23:59:07.686906 kubelet[2302]: I0515 23:59:07.686885 2302 server.go:479] "Adding debug handlers to kubelet server" May 15 23:59:07.688216 kubelet[2302]: I0515 23:59:07.687170 2302 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 23:59:07.688216 kubelet[2302]: I0515 23:59:07.687652 2302 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 23:59:07.689125 kubelet[2302]: E0515 23:59:07.689093 2302 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" May 15 23:59:07.689184 kubelet[2302]: I0515 23:59:07.689153 2302 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 23:59:07.689458 kubelet[2302]: I0515 23:59:07.689429 2302 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 15 23:59:07.689558 kubelet[2302]: I0515 23:59:07.689535 2302 reconciler.go:26] "Reconciler: start to sync state" May 15 23:59:07.690369 kubelet[2302]: W0515 23:59:07.689915 2302 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 15 23:59:07.690369 kubelet[2302]: E0515 23:59:07.689976 2302 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:07.690574 kubelet[2302]: I0515 23:59:07.690558 2302 factory.go:221] Registration of the systemd container factory successfully May 15 23:59:07.690895 kubelet[2302]: I0515 23:59:07.690759 2302 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 23:59:07.690895 kubelet[2302]: E0515 23:59:07.690800 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="200ms" May 15 23:59:07.691094 kubelet[2302]: E0515 23:59:07.691077 2302 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 23:59:07.692111 kubelet[2302]: I0515 23:59:07.692087 2302 factory.go:221] Registration of the containerd container factory successfully May 15 23:59:07.694441 kubelet[2302]: E0515 23:59:07.690797 2302 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.27:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.27:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd8cb32bb44c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 23:59:07.684136133 +0000 UTC m=+0.606786047,LastTimestamp:2025-05-15 23:59:07.684136133 +0000 UTC m=+0.606786047,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 23:59:07.713563 kubelet[2302]: I0515 23:59:07.713445 2302 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 23:59:07.713563 kubelet[2302]: I0515 23:59:07.713464 2302 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 23:59:07.713563 kubelet[2302]: I0515 23:59:07.713479 2302 state_mem.go:36] "Initialized new in-memory state store" May 15 23:59:07.713740 kubelet[2302]: I0515 23:59:07.713699 2302 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 23:59:07.715382 kubelet[2302]: I0515 23:59:07.715125 2302 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 23:59:07.715382 kubelet[2302]: I0515 23:59:07.715155 2302 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 23:59:07.715382 kubelet[2302]: I0515 23:59:07.715185 2302 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 23:59:07.715382 kubelet[2302]: I0515 23:59:07.715194 2302 kubelet.go:2382] "Starting kubelet main sync loop" May 15 23:59:07.715382 kubelet[2302]: E0515 23:59:07.715257 2302 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 23:59:07.789995 kubelet[2302]: E0515 23:59:07.789942 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:07.816466 kubelet[2302]: E0515 23:59:07.816364 2302 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 23:59:07.890850 kubelet[2302]: E0515 23:59:07.890681 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:07.892540 kubelet[2302]: E0515 23:59:07.892496 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="400ms" May 15 23:59:07.991842 kubelet[2302]: E0515 23:59:07.991768 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:08.017126 kubelet[2302]: E0515 23:59:08.017030 2302 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 23:59:08.092781 kubelet[2302]: E0515 23:59:08.092707 2302 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"localhost\" not found" May 15 23:59:08.193761 kubelet[2302]: E0515 23:59:08.193687 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:08.293868 kubelet[2302]: E0515 23:59:08.293811 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:08.293868 kubelet[2302]: E0515 23:59:08.293810 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="800ms" May 15 23:59:08.394558 kubelet[2302]: E0515 23:59:08.394482 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:08.417828 kubelet[2302]: E0515 23:59:08.417748 2302 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 23:59:08.438637 kubelet[2302]: W0515 23:59:08.438544 2302 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 15 23:59:08.438755 kubelet[2302]: E0515 23:59:08.438640 2302 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:08.495487 kubelet[2302]: E0515 23:59:08.495329 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:08.517646 kubelet[2302]: W0515 23:59:08.517563 
2302 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 15 23:59:08.517646 kubelet[2302]: E0515 23:59:08.517632 2302 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:08.560651 kubelet[2302]: I0515 23:59:08.560586 2302 policy_none.go:49] "None policy: Start" May 15 23:59:08.560651 kubelet[2302]: I0515 23:59:08.560635 2302 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 23:59:08.560651 kubelet[2302]: I0515 23:59:08.560654 2302 state_mem.go:35] "Initializing new in-memory state store" May 15 23:59:08.568533 kubelet[2302]: W0515 23:59:08.568478 2302 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 15 23:59:08.568533 kubelet[2302]: E0515 23:59:08.568528 2302 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:08.596126 kubelet[2302]: E0515 23:59:08.596099 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:08.634221 kubelet[2302]: W0515 23:59:08.634170 2302 reflector.go:569] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 15 23:59:08.634277 kubelet[2302]: E0515 23:59:08.634257 2302 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:08.696905 kubelet[2302]: E0515 23:59:08.696846 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:08.797807 kubelet[2302]: E0515 23:59:08.797670 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:08.856677 update_engine[1509]: I20250515 23:59:08.856590 1509 update_attempter.cc:509] Updating boot flags... May 15 23:59:08.898578 kubelet[2302]: E0515 23:59:08.898503 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:08.999216 kubelet[2302]: E0515 23:59:08.999151 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:09.095231 kubelet[2302]: E0515 23:59:09.095074 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="1.6s" May 15 23:59:09.100206 kubelet[2302]: E0515 23:59:09.100168 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:09.106999 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 15 23:59:09.117860 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 23:59:09.121614 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 23:59:09.134670 kubelet[2302]: I0515 23:59:09.134560 2302 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 23:59:09.134886 kubelet[2302]: I0515 23:59:09.134863 2302 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 23:59:09.134925 kubelet[2302]: I0515 23:59:09.134881 2302 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 23:59:09.135203 kubelet[2302]: I0515 23:59:09.135187 2302 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 23:59:09.136070 kubelet[2302]: E0515 23:59:09.136046 2302 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 23:59:09.136118 kubelet[2302]: E0515 23:59:09.136110 2302 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 23:59:09.227282 systemd[1]: Created slice kubepods-burstable-podbf381f622f5c997e0c3471d802863608.slice - libcontainer container kubepods-burstable-podbf381f622f5c997e0c3471d802863608.slice. 
May 15 23:59:09.236595 kubelet[2302]: I0515 23:59:09.236522 2302 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:59:09.237032 kubelet[2302]: E0515 23:59:09.236930 2302 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" May 15 23:59:09.250205 kubelet[2302]: E0515 23:59:09.250169 2302 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:59:09.253967 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 15 23:59:09.255726 kubelet[2302]: E0515 23:59:09.255688 2302 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:59:09.265532 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. 
May 15 23:59:09.267437 kubelet[2302]: E0515 23:59:09.267402 2302 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:59:09.300909 kubelet[2302]: I0515 23:59:09.300847 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf381f622f5c997e0c3471d802863608-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf381f622f5c997e0c3471d802863608\") " pod="kube-system/kube-apiserver-localhost" May 15 23:59:09.300909 kubelet[2302]: I0515 23:59:09.300913 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:09.301105 kubelet[2302]: I0515 23:59:09.300943 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:09.301105 kubelet[2302]: I0515 23:59:09.300969 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:09.301105 kubelet[2302]: I0515 23:59:09.300995 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:09.301105 kubelet[2302]: I0515 23:59:09.301017 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf381f622f5c997e0c3471d802863608-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf381f622f5c997e0c3471d802863608\") " pod="kube-system/kube-apiserver-localhost" May 15 23:59:09.301208 kubelet[2302]: I0515 23:59:09.301107 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf381f622f5c997e0c3471d802863608-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bf381f622f5c997e0c3471d802863608\") " pod="kube-system/kube-apiserver-localhost" May 15 23:59:09.301208 kubelet[2302]: I0515 23:59:09.301166 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:09.301267 kubelet[2302]: I0515 23:59:09.301231 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 15 23:59:09.439016 kubelet[2302]: I0515 23:59:09.438980 2302 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:59:09.439589 kubelet[2302]: 
E0515 23:59:09.439521 2302 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" May 15 23:59:09.467097 kubelet[2302]: W0515 23:59:09.467020 2302 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 15 23:59:09.467177 kubelet[2302]: E0515 23:59:09.467108 2302 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:09.523357 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2342) May 15 23:59:09.551337 kubelet[2302]: E0515 23:59:09.550812 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:09.551674 containerd[1523]: time="2025-05-15T23:59:09.551628948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bf381f622f5c997e0c3471d802863608,Namespace:kube-system,Attempt:0,}" May 15 23:59:09.556726 kubelet[2302]: E0515 23:59:09.556702 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:09.557304 containerd[1523]: time="2025-05-15T23:59:09.557280307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 15 
23:59:09.561345 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2346) May 15 23:59:09.569151 kubelet[2302]: E0515 23:59:09.569102 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:09.569850 containerd[1523]: time="2025-05-15T23:59:09.569812184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 15 23:59:09.612374 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2346) May 15 23:59:09.698195 kubelet[2302]: E0515 23:59:09.698091 2302 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 15 23:59:09.790149 containerd[1523]: time="2025-05-15T23:59:09.790100583Z" level=info msg="connecting to shim 8701b7632faee5a0473e466cee660fcc075a0a1602636aa566f4f11afe1b57a5" address="unix:///run/containerd/s/c6f994675f8dce59136edfe215c414ddcea37d5b93c65afbf44d5929590b027e" namespace=k8s.io protocol=ttrpc version=3 May 15 23:59:09.794443 containerd[1523]: time="2025-05-15T23:59:09.794401415Z" level=info msg="connecting to shim 930bc037b0a7bace3e124376cf8f6d6404714f434e278ded1aeb05dba7b2deef" address="unix:///run/containerd/s/cd37541503658a289b0af3bf91e4f2735593b64bf6e57257a26ed04502a4f4c7" namespace=k8s.io protocol=ttrpc version=3 May 15 23:59:09.801414 containerd[1523]: time="2025-05-15T23:59:09.800723169Z" level=info msg="connecting to shim 647d6d3cae2fc26683c86245eb8f1f5c49814bf6ce80d73a8c5df953b7efdfa3" 
address="unix:///run/containerd/s/4a627338f5eaf24d7fcc4fc389bcf7fc8126918b0baa8ec6769de5365aadf48a" namespace=k8s.io protocol=ttrpc version=3 May 15 23:59:09.819511 systemd[1]: Started cri-containerd-8701b7632faee5a0473e466cee660fcc075a0a1602636aa566f4f11afe1b57a5.scope - libcontainer container 8701b7632faee5a0473e466cee660fcc075a0a1602636aa566f4f11afe1b57a5. May 15 23:59:09.824235 systemd[1]: Started cri-containerd-647d6d3cae2fc26683c86245eb8f1f5c49814bf6ce80d73a8c5df953b7efdfa3.scope - libcontainer container 647d6d3cae2fc26683c86245eb8f1f5c49814bf6ce80d73a8c5df953b7efdfa3. May 15 23:59:09.826025 systemd[1]: Started cri-containerd-930bc037b0a7bace3e124376cf8f6d6404714f434e278ded1aeb05dba7b2deef.scope - libcontainer container 930bc037b0a7bace3e124376cf8f6d6404714f434e278ded1aeb05dba7b2deef. May 15 23:59:09.842413 kubelet[2302]: I0515 23:59:09.842374 2302 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:59:09.842823 kubelet[2302]: E0515 23:59:09.842707 2302 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" May 15 23:59:09.879590 containerd[1523]: time="2025-05-15T23:59:09.879532543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8701b7632faee5a0473e466cee660fcc075a0a1602636aa566f4f11afe1b57a5\"" May 15 23:59:09.881239 containerd[1523]: time="2025-05-15T23:59:09.881183366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bf381f622f5c997e0c3471d802863608,Namespace:kube-system,Attempt:0,} returns sandbox id \"930bc037b0a7bace3e124376cf8f6d6404714f434e278ded1aeb05dba7b2deef\"" May 15 23:59:09.881664 kubelet[2302]: E0515 23:59:09.881642 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:09.882286 kubelet[2302]: E0515 23:59:09.882196 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:09.884260 containerd[1523]: time="2025-05-15T23:59:09.884225604Z" level=info msg="CreateContainer within sandbox \"8701b7632faee5a0473e466cee660fcc075a0a1602636aa566f4f11afe1b57a5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 23:59:09.884478 containerd[1523]: time="2025-05-15T23:59:09.884229010Z" level=info msg="CreateContainer within sandbox \"930bc037b0a7bace3e124376cf8f6d6404714f434e278ded1aeb05dba7b2deef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 23:59:09.885085 containerd[1523]: time="2025-05-15T23:59:09.885049778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"647d6d3cae2fc26683c86245eb8f1f5c49814bf6ce80d73a8c5df953b7efdfa3\"" May 15 23:59:09.885941 kubelet[2302]: E0515 23:59:09.885912 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:09.887361 containerd[1523]: time="2025-05-15T23:59:09.887330389Z" level=info msg="CreateContainer within sandbox \"647d6d3cae2fc26683c86245eb8f1f5c49814bf6ce80d73a8c5df953b7efdfa3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 23:59:09.905294 containerd[1523]: time="2025-05-15T23:59:09.905224119Z" level=info msg="Container e3f719b15b76ed041f15f19ffce96a5a45c5dc98aa699edf6472a61c7af6c953: CDI devices from CRI Config.CDIDevices: []" May 15 23:59:09.908911 containerd[1523]: time="2025-05-15T23:59:09.908846622Z" level=info 
msg="Container e34b2cb837e6f2458177cf80c0bffb499dac06dfe7cf0cba5c160b07002eebbc: CDI devices from CRI Config.CDIDevices: []" May 15 23:59:09.912388 containerd[1523]: time="2025-05-15T23:59:09.912304624Z" level=info msg="Container 295230ff2a298708b90c9df76be963a96e65e00a773387c591f6ef793a5b20d7: CDI devices from CRI Config.CDIDevices: []" May 15 23:59:09.921603 containerd[1523]: time="2025-05-15T23:59:09.921551214Z" level=info msg="CreateContainer within sandbox \"8701b7632faee5a0473e466cee660fcc075a0a1602636aa566f4f11afe1b57a5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e3f719b15b76ed041f15f19ffce96a5a45c5dc98aa699edf6472a61c7af6c953\"" May 15 23:59:09.922904 containerd[1523]: time="2025-05-15T23:59:09.922864020Z" level=info msg="StartContainer for \"e3f719b15b76ed041f15f19ffce96a5a45c5dc98aa699edf6472a61c7af6c953\"" May 15 23:59:09.924205 containerd[1523]: time="2025-05-15T23:59:09.924177337Z" level=info msg="connecting to shim e3f719b15b76ed041f15f19ffce96a5a45c5dc98aa699edf6472a61c7af6c953" address="unix:///run/containerd/s/c6f994675f8dce59136edfe215c414ddcea37d5b93c65afbf44d5929590b027e" protocol=ttrpc version=3 May 15 23:59:09.930585 containerd[1523]: time="2025-05-15T23:59:09.930536001Z" level=info msg="CreateContainer within sandbox \"930bc037b0a7bace3e124376cf8f6d6404714f434e278ded1aeb05dba7b2deef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e34b2cb837e6f2458177cf80c0bffb499dac06dfe7cf0cba5c160b07002eebbc\"" May 15 23:59:09.931397 containerd[1523]: time="2025-05-15T23:59:09.931364994Z" level=info msg="StartContainer for \"e34b2cb837e6f2458177cf80c0bffb499dac06dfe7cf0cba5c160b07002eebbc\"" May 15 23:59:09.933685 containerd[1523]: time="2025-05-15T23:59:09.932832733Z" level=info msg="connecting to shim e34b2cb837e6f2458177cf80c0bffb499dac06dfe7cf0cba5c160b07002eebbc" address="unix:///run/containerd/s/cd37541503658a289b0af3bf91e4f2735593b64bf6e57257a26ed04502a4f4c7" protocol=ttrpc 
version=3 May 15 23:59:09.933685 containerd[1523]: time="2025-05-15T23:59:09.933607463Z" level=info msg="CreateContainer within sandbox \"647d6d3cae2fc26683c86245eb8f1f5c49814bf6ce80d73a8c5df953b7efdfa3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"295230ff2a298708b90c9df76be963a96e65e00a773387c591f6ef793a5b20d7\"" May 15 23:59:09.935183 containerd[1523]: time="2025-05-15T23:59:09.935156585Z" level=info msg="StartContainer for \"295230ff2a298708b90c9df76be963a96e65e00a773387c591f6ef793a5b20d7\"" May 15 23:59:09.936294 containerd[1523]: time="2025-05-15T23:59:09.936266960Z" level=info msg="connecting to shim 295230ff2a298708b90c9df76be963a96e65e00a773387c591f6ef793a5b20d7" address="unix:///run/containerd/s/4a627338f5eaf24d7fcc4fc389bcf7fc8126918b0baa8ec6769de5365aadf48a" protocol=ttrpc version=3 May 15 23:59:09.949519 systemd[1]: Started cri-containerd-e3f719b15b76ed041f15f19ffce96a5a45c5dc98aa699edf6472a61c7af6c953.scope - libcontainer container e3f719b15b76ed041f15f19ffce96a5a45c5dc98aa699edf6472a61c7af6c953. May 15 23:59:09.960553 systemd[1]: Started cri-containerd-e34b2cb837e6f2458177cf80c0bffb499dac06dfe7cf0cba5c160b07002eebbc.scope - libcontainer container e34b2cb837e6f2458177cf80c0bffb499dac06dfe7cf0cba5c160b07002eebbc. May 15 23:59:09.965787 systemd[1]: Started cri-containerd-295230ff2a298708b90c9df76be963a96e65e00a773387c591f6ef793a5b20d7.scope - libcontainer container 295230ff2a298708b90c9df76be963a96e65e00a773387c591f6ef793a5b20d7. 
May 15 23:59:10.030251 containerd[1523]: time="2025-05-15T23:59:10.030197820Z" level=info msg="StartContainer for \"e3f719b15b76ed041f15f19ffce96a5a45c5dc98aa699edf6472a61c7af6c953\" returns successfully" May 15 23:59:10.031117 containerd[1523]: time="2025-05-15T23:59:10.030422053Z" level=info msg="StartContainer for \"e34b2cb837e6f2458177cf80c0bffb499dac06dfe7cf0cba5c160b07002eebbc\" returns successfully" May 15 23:59:10.041171 containerd[1523]: time="2025-05-15T23:59:10.041110618Z" level=info msg="StartContainer for \"295230ff2a298708b90c9df76be963a96e65e00a773387c591f6ef793a5b20d7\" returns successfully" May 15 23:59:10.644272 kubelet[2302]: I0515 23:59:10.644237 2302 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:59:10.729883 kubelet[2302]: E0515 23:59:10.729832 2302 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:59:10.730069 kubelet[2302]: E0515 23:59:10.729948 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:10.733792 kubelet[2302]: E0515 23:59:10.733756 2302 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:59:10.733893 kubelet[2302]: E0515 23:59:10.733864 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:10.734926 kubelet[2302]: E0515 23:59:10.734894 2302 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 23:59:10.735001 kubelet[2302]: E0515 23:59:10.734985 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:11.171594 kubelet[2302]: E0515 23:59:11.171487 2302 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 23:59:11.314801 kubelet[2302]: I0515 23:59:11.314747 2302 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 15 23:59:11.314801 kubelet[2302]: E0515 23:59:11.314796 2302 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 23:59:11.490556 kubelet[2302]: E0515 23:59:11.490386 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:11.590991 kubelet[2302]: I0515 23:59:11.590946 2302 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:59:11.625921 kubelet[2302]: E0515 23:59:11.625879 2302 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 15 23:59:11.625921 kubelet[2302]: I0515 23:59:11.625914 2302 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 23:59:11.627690 kubelet[2302]: E0515 23:59:11.627650 2302 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 23:59:11.627690 kubelet[2302]: I0515 23:59:11.627676 2302 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 23:59:11.629613 kubelet[2302]: E0515 23:59:11.629574 2302 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 15 23:59:11.668901 kubelet[2302]: I0515 23:59:11.668840 2302 apiserver.go:52] "Watching apiserver" May 15 23:59:11.690238 kubelet[2302]: I0515 23:59:11.690132 2302 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 15 23:59:11.738299 kubelet[2302]: I0515 23:59:11.735740 2302 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:59:11.738299 kubelet[2302]: I0515 23:59:11.735862 2302 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 23:59:11.738607 kubelet[2302]: E0515 23:59:11.738578 2302 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 23:59:11.738659 kubelet[2302]: E0515 23:59:11.738578 2302 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 15 23:59:11.738821 kubelet[2302]: E0515 23:59:11.738794 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:11.738857 kubelet[2302]: E0515 23:59:11.738801 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:12.937576 kubelet[2302]: I0515 23:59:12.937509 2302 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 23:59:12.943438 kubelet[2302]: E0515 23:59:12.943402 
2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:12.961790 kubelet[2302]: I0515 23:59:12.961731 2302 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:59:12.966910 kubelet[2302]: E0515 23:59:12.966869 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:13.474217 systemd[1]: Reload requested from client PID 2593 ('systemctl') (unit session-7.scope)... May 15 23:59:13.474238 systemd[1]: Reloading... May 15 23:59:13.571347 zram_generator::config[2641]: No configuration found. May 15 23:59:13.688864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:59:13.739769 kubelet[2302]: E0515 23:59:13.739642 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:13.739949 kubelet[2302]: E0515 23:59:13.739906 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:13.821991 systemd[1]: Reloading finished in 347 ms. May 15 23:59:13.853814 kubelet[2302]: I0515 23:59:13.853688 2302 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 23:59:13.853722 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:59:13.875656 systemd[1]: kubelet.service: Deactivated successfully. 
May 15 23:59:13.875984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:59:13.876037 systemd[1]: kubelet.service: Consumed 1.284s CPU time, 136M memory peak. May 15 23:59:13.877921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:59:14.093690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:59:14.102776 (kubelet)[2682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 23:59:14.152167 kubelet[2682]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:59:14.152717 kubelet[2682]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 23:59:14.152780 kubelet[2682]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 23:59:14.153013 kubelet[2682]: I0515 23:59:14.152970 2682 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 23:59:14.159531 kubelet[2682]: I0515 23:59:14.159500 2682 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 15 23:59:14.159531 kubelet[2682]: I0515 23:59:14.159524 2682 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 23:59:14.159791 kubelet[2682]: I0515 23:59:14.159774 2682 server.go:954] "Client rotation is on, will bootstrap in background" May 15 23:59:14.161041 kubelet[2682]: I0515 23:59:14.161009 2682 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 23:59:14.163133 kubelet[2682]: I0515 23:59:14.163112 2682 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 23:59:14.167658 kubelet[2682]: I0515 23:59:14.167637 2682 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 23:59:14.172725 kubelet[2682]: I0515 23:59:14.172674 2682 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 23:59:14.172966 kubelet[2682]: I0515 23:59:14.172928 2682 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 23:59:14.173126 kubelet[2682]: I0515 23:59:14.172961 2682 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 23:59:14.173209 kubelet[2682]: I0515 23:59:14.173131 2682 topology_manager.go:138] "Creating topology manager with none policy" 
May 15 23:59:14.173209 kubelet[2682]: I0515 23:59:14.173152 2682 container_manager_linux.go:304] "Creating device plugin manager" May 15 23:59:14.173209 kubelet[2682]: I0515 23:59:14.173199 2682 state_mem.go:36] "Initialized new in-memory state store" May 15 23:59:14.173385 kubelet[2682]: I0515 23:59:14.173360 2682 kubelet.go:446] "Attempting to sync node with API server" May 15 23:59:14.173413 kubelet[2682]: I0515 23:59:14.173387 2682 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 23:59:14.173413 kubelet[2682]: I0515 23:59:14.173407 2682 kubelet.go:352] "Adding apiserver pod source" May 15 23:59:14.173459 kubelet[2682]: I0515 23:59:14.173418 2682 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 23:59:14.177878 kubelet[2682]: I0515 23:59:14.177847 2682 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 15 23:59:14.178240 kubelet[2682]: I0515 23:59:14.178208 2682 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 23:59:14.178899 kubelet[2682]: I0515 23:59:14.178881 2682 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 23:59:14.178930 kubelet[2682]: I0515 23:59:14.178908 2682 server.go:1287] "Started kubelet" May 15 23:59:14.180385 kubelet[2682]: I0515 23:59:14.179108 2682 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 23:59:14.180385 kubelet[2682]: I0515 23:59:14.179838 2682 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 23:59:14.180385 kubelet[2682]: I0515 23:59:14.179944 2682 server.go:479] "Adding debug handlers to kubelet server" May 15 23:59:14.180385 kubelet[2682]: I0515 23:59:14.180128 2682 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 23:59:14.180385 kubelet[2682]: I0515 23:59:14.180220 2682 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 23:59:14.181684 kubelet[2682]: I0515 23:59:14.181645 2682 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 23:59:14.181851 kubelet[2682]: E0515 23:59:14.181820 2682 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:59:14.182274 kubelet[2682]: I0515 23:59:14.182243 2682 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 15 23:59:14.182474 kubelet[2682]: I0515 23:59:14.182449 2682 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 23:59:14.182619 kubelet[2682]: I0515 23:59:14.182589 2682 reconciler.go:26] "Reconciler: start to sync state" May 15 23:59:14.186868 kubelet[2682]: I0515 23:59:14.186723 2682 factory.go:221] Registration of the systemd container factory successfully May 15 23:59:14.186868 kubelet[2682]: I0515 23:59:14.186861 2682 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 23:59:14.189587 kubelet[2682]: I0515 23:59:14.188889 2682 factory.go:221] Registration of the containerd container factory successfully May 15 23:59:14.196595 kubelet[2682]: I0515 23:59:14.196512 2682 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 23:59:14.198902 kubelet[2682]: I0515 23:59:14.197775 2682 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 23:59:14.198902 kubelet[2682]: I0515 23:59:14.197810 2682 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 23:59:14.198902 kubelet[2682]: I0515 23:59:14.197832 2682 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 15 23:59:14.198902 kubelet[2682]: I0515 23:59:14.197839 2682 kubelet.go:2382] "Starting kubelet main sync loop" May 15 23:59:14.198902 kubelet[2682]: E0515 23:59:14.197887 2682 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 23:59:14.233360 kubelet[2682]: I0515 23:59:14.232990 2682 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 23:59:14.233360 kubelet[2682]: I0515 23:59:14.233015 2682 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 23:59:14.233360 kubelet[2682]: I0515 23:59:14.233043 2682 state_mem.go:36] "Initialized new in-memory state store" May 15 23:59:14.233360 kubelet[2682]: I0515 23:59:14.233257 2682 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 23:59:14.233360 kubelet[2682]: I0515 23:59:14.233270 2682 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 23:59:14.233360 kubelet[2682]: I0515 23:59:14.233292 2682 policy_none.go:49] "None policy: Start" May 15 23:59:14.233360 kubelet[2682]: I0515 23:59:14.233339 2682 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 23:59:14.233360 kubelet[2682]: I0515 23:59:14.233355 2682 state_mem.go:35] "Initializing new in-memory state store" May 15 23:59:14.233734 kubelet[2682]: I0515 23:59:14.233498 2682 state_mem.go:75] "Updated machine memory state" May 15 23:59:14.237990 kubelet[2682]: I0515 23:59:14.237958 2682 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 23:59:14.238354 kubelet[2682]: I0515 23:59:14.238145 2682 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 23:59:14.238354 kubelet[2682]: I0515 23:59:14.238162 2682 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 23:59:14.238439 kubelet[2682]: I0515 23:59:14.238398 2682 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 23:59:14.241633 kubelet[2682]: E0515 23:59:14.241595 2682 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 23:59:14.299084 kubelet[2682]: I0515 23:59:14.299039 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 23:59:14.299084 kubelet[2682]: I0515 23:59:14.299082 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:59:14.299334 kubelet[2682]: I0515 23:59:14.299121 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 23:59:14.307357 kubelet[2682]: E0515 23:59:14.307235 2682 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 15 23:59:14.307669 kubelet[2682]: E0515 23:59:14.307648 2682 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 23:59:14.344031 kubelet[2682]: I0515 23:59:14.343926 2682 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 15 23:59:14.350114 kubelet[2682]: I0515 23:59:14.350088 2682 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 15 23:59:14.350234 kubelet[2682]: I0515 23:59:14.350186 2682 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 15 23:59:14.383617 kubelet[2682]: I0515 23:59:14.383577 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf381f622f5c997e0c3471d802863608-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"bf381f622f5c997e0c3471d802863608\") " pod="kube-system/kube-apiserver-localhost" May 15 23:59:14.383617 kubelet[2682]: I0515 23:59:14.383615 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:14.383617 kubelet[2682]: I0515 23:59:14.383638 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:14.383911 kubelet[2682]: I0515 23:59:14.383654 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 15 23:59:14.383911 kubelet[2682]: I0515 23:59:14.383671 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf381f622f5c997e0c3471d802863608-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf381f622f5c997e0c3471d802863608\") " pod="kube-system/kube-apiserver-localhost" May 15 23:59:14.383911 kubelet[2682]: I0515 23:59:14.383717 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf381f622f5c997e0c3471d802863608-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf381f622f5c997e0c3471d802863608\") " 
pod="kube-system/kube-apiserver-localhost" May 15 23:59:14.383911 kubelet[2682]: I0515 23:59:14.383749 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:14.383911 kubelet[2682]: I0515 23:59:14.383772 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:14.384074 kubelet[2682]: I0515 23:59:14.383794 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:59:14.472711 sudo[2720]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 23:59:14.473159 sudo[2720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 23:59:14.608193 kubelet[2682]: E0515 23:59:14.608001 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:14.608193 kubelet[2682]: E0515 23:59:14.608031 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 
15 23:59:14.608365 kubelet[2682]: E0515 23:59:14.608264 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:14.997791 sudo[2720]: pam_unix(sudo:session): session closed for user root May 15 23:59:15.174075 kubelet[2682]: I0515 23:59:15.174036 2682 apiserver.go:52] "Watching apiserver" May 15 23:59:15.215067 kubelet[2682]: I0515 23:59:15.214998 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 23:59:15.215253 kubelet[2682]: I0515 23:59:15.215218 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 23:59:15.345181 kubelet[2682]: E0515 23:59:15.345002 2682 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 23:59:15.345328 kubelet[2682]: E0515 23:59:15.345188 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:15.346047 kubelet[2682]: E0515 23:59:15.345761 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:15.346636 kubelet[2682]: E0515 23:59:15.346534 2682 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 23:59:15.346826 kubelet[2682]: E0515 23:59:15.346674 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:15.382838 kubelet[2682]: I0515 23:59:15.382788 2682 desired_state_of_world_populator.go:158] 
"Finished populating initial desired state of world" May 15 23:59:15.400226 kubelet[2682]: I0515 23:59:15.400092 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.400069314 podStartE2EDuration="1.400069314s" podCreationTimestamp="2025-05-15 23:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:59:15.399884607 +0000 UTC m=+1.290685934" watchObservedRunningTime="2025-05-15 23:59:15.400069314 +0000 UTC m=+1.290870641" May 15 23:59:15.400477 kubelet[2682]: I0515 23:59:15.400288 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.400279982 podStartE2EDuration="3.400279982s" podCreationTimestamp="2025-05-15 23:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:59:15.388209184 +0000 UTC m=+1.279010511" watchObservedRunningTime="2025-05-15 23:59:15.400279982 +0000 UTC m=+1.291081309" May 15 23:59:15.410068 kubelet[2682]: I0515 23:59:15.409852 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.409826284 podStartE2EDuration="3.409826284s" podCreationTimestamp="2025-05-15 23:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:59:15.409662014 +0000 UTC m=+1.300463341" watchObservedRunningTime="2025-05-15 23:59:15.409826284 +0000 UTC m=+1.300627601" May 15 23:59:16.220408 kubelet[2682]: I0515 23:59:16.220364 2682 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 23:59:16.221186 kubelet[2682]: E0515 23:59:16.220649 2682 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:16.234855 kubelet[2682]: E0515 23:59:16.234804 2682 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 23:59:16.235040 kubelet[2682]: E0515 23:59:16.235017 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:16.877234 sudo[1720]: pam_unix(sudo:session): session closed for user root May 15 23:59:16.878821 sshd[1719]: Connection closed by 10.0.0.1 port 48600 May 15 23:59:16.885741 sshd-session[1716]: pam_unix(sshd:session): session closed for user core May 15 23:59:16.904667 systemd[1]: sshd@6-10.0.0.27:22-10.0.0.1:48600.service: Deactivated successfully. May 15 23:59:16.906673 systemd[1]: session-7.scope: Deactivated successfully. May 15 23:59:16.906882 systemd[1]: session-7.scope: Consumed 4.646s CPU time, 253M memory peak. May 15 23:59:16.908216 systemd-logind[1505]: Session 7 logged out. Waiting for processes to exit. May 15 23:59:16.909242 systemd-logind[1505]: Removed session 7. May 15 23:59:17.221518 kubelet[2682]: E0515 23:59:17.221483 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:19.134986 kubelet[2682]: I0515 23:59:19.134947 2682 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 23:59:19.135523 containerd[1523]: time="2025-05-15T23:59:19.135356467Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 23:59:19.135825 kubelet[2682]: I0515 23:59:19.135678 2682 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 23:59:20.129684 systemd[1]: Created slice kubepods-besteffort-pode4144af0_3797_4d68_8108_1aaa90edce43.slice - libcontainer container kubepods-besteffort-pode4144af0_3797_4d68_8108_1aaa90edce43.slice. May 15 23:59:20.221460 kubelet[2682]: I0515 23:59:20.221406 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4144af0-3797-4d68-8108-1aaa90edce43-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-v892q\" (UID: \"e4144af0-3797-4d68-8108-1aaa90edce43\") " pod="kube-system/cilium-operator-6c4d7847fc-v892q" May 15 23:59:20.221460 kubelet[2682]: I0515 23:59:20.221460 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmsk9\" (UniqueName: \"kubernetes.io/projected/e4144af0-3797-4d68-8108-1aaa90edce43-kube-api-access-cmsk9\") pod \"cilium-operator-6c4d7847fc-v892q\" (UID: \"e4144af0-3797-4d68-8108-1aaa90edce43\") " pod="kube-system/cilium-operator-6c4d7847fc-v892q" May 15 23:59:20.374819 systemd[1]: Created slice kubepods-besteffort-pod471d7d58_83b4_479b_96f5_783a1647c6af.slice - libcontainer container kubepods-besteffort-pod471d7d58_83b4_479b_96f5_783a1647c6af.slice. May 15 23:59:20.389962 systemd[1]: Created slice kubepods-burstable-podedd21679_b92a_47c7_9bb7_9c163e58b396.slice - libcontainer container kubepods-burstable-podedd21679_b92a_47c7_9bb7_9c163e58b396.slice. 
May 15 23:59:20.413793 kubelet[2682]: E0515 23:59:20.413742 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:20.422857 kubelet[2682]: I0515 23:59:20.422796 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-etc-cni-netd\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.422857 kubelet[2682]: I0515 23:59:20.422851 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-lib-modules\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.423046 kubelet[2682]: I0515 23:59:20.422876 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/471d7d58-83b4-479b-96f5-783a1647c6af-kube-proxy\") pod \"kube-proxy-xmr8m\" (UID: \"471d7d58-83b4-479b-96f5-783a1647c6af\") " pod="kube-system/kube-proxy-xmr8m" May 15 23:59:20.423046 kubelet[2682]: I0515 23:59:20.422901 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-cni-path\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.423046 kubelet[2682]: I0515 23:59:20.422927 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edd21679-b92a-47c7-9bb7-9c163e58b396-cilium-config-path\") pod 
\"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.423046 kubelet[2682]: I0515 23:59:20.422956 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8v8w\" (UniqueName: \"kubernetes.io/projected/edd21679-b92a-47c7-9bb7-9c163e58b396-kube-api-access-d8v8w\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.423046 kubelet[2682]: I0515 23:59:20.422981 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/471d7d58-83b4-479b-96f5-783a1647c6af-lib-modules\") pod \"kube-proxy-xmr8m\" (UID: \"471d7d58-83b4-479b-96f5-783a1647c6af\") " pod="kube-system/kube-proxy-xmr8m" May 15 23:59:20.423176 kubelet[2682]: I0515 23:59:20.423090 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-bpf-maps\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.423209 kubelet[2682]: I0515 23:59:20.423176 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-hostproc\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.423250 kubelet[2682]: I0515 23:59:20.423225 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-cilium-run\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.423278 kubelet[2682]: 
I0515 23:59:20.423259 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-host-proc-sys-kernel\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.423301 kubelet[2682]: I0515 23:59:20.423287 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edd21679-b92a-47c7-9bb7-9c163e58b396-hubble-tls\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.423340 kubelet[2682]: I0515 23:59:20.423323 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/471d7d58-83b4-479b-96f5-783a1647c6af-xtables-lock\") pod \"kube-proxy-xmr8m\" (UID: \"471d7d58-83b4-479b-96f5-783a1647c6af\") " pod="kube-system/kube-proxy-xmr8m" May 15 23:59:20.423373 kubelet[2682]: I0515 23:59:20.423352 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-host-proc-sys-net\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.423399 kubelet[2682]: I0515 23:59:20.423374 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-cilium-cgroup\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.423432 kubelet[2682]: I0515 23:59:20.423396 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edd21679-b92a-47c7-9bb7-9c163e58b396-clustermesh-secrets\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.423432 kubelet[2682]: I0515 23:59:20.423419 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzzkd\" (UniqueName: \"kubernetes.io/projected/471d7d58-83b4-479b-96f5-783a1647c6af-kube-api-access-jzzkd\") pod \"kube-proxy-xmr8m\" (UID: \"471d7d58-83b4-479b-96f5-783a1647c6af\") " pod="kube-system/kube-proxy-xmr8m" May 15 23:59:20.423487 kubelet[2682]: I0515 23:59:20.423456 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-xtables-lock\") pod \"cilium-fkt5l\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " pod="kube-system/cilium-fkt5l" May 15 23:59:20.441814 kubelet[2682]: E0515 23:59:20.441749 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:20.442648 containerd[1523]: time="2025-05-15T23:59:20.442560876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v892q,Uid:e4144af0-3797-4d68-8108-1aaa90edce43,Namespace:kube-system,Attempt:0,}" May 15 23:59:20.465468 containerd[1523]: time="2025-05-15T23:59:20.465399394Z" level=info msg="connecting to shim 9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090" address="unix:///run/containerd/s/c058ba96751ef01dfd5394bddd57c07c5824d630d6d8d283750a1ee3f27d2f0a" namespace=k8s.io protocol=ttrpc version=3 May 15 23:59:20.525653 systemd[1]: Started cri-containerd-9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090.scope - libcontainer container 
9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090. May 15 23:59:20.589238 containerd[1523]: time="2025-05-15T23:59:20.589194784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v892q,Uid:e4144af0-3797-4d68-8108-1aaa90edce43,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\"" May 15 23:59:20.590443 kubelet[2682]: E0515 23:59:20.590407 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:20.592232 containerd[1523]: time="2025-05-15T23:59:20.592145618Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 23:59:20.679739 kubelet[2682]: E0515 23:59:20.679702 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:20.680246 containerd[1523]: time="2025-05-15T23:59:20.680203129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xmr8m,Uid:471d7d58-83b4-479b-96f5-783a1647c6af,Namespace:kube-system,Attempt:0,}" May 15 23:59:20.693727 kubelet[2682]: E0515 23:59:20.693686 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:20.694759 containerd[1523]: time="2025-05-15T23:59:20.694704414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fkt5l,Uid:edd21679-b92a-47c7-9bb7-9c163e58b396,Namespace:kube-system,Attempt:0,}" May 15 23:59:20.726876 containerd[1523]: time="2025-05-15T23:59:20.726720739Z" level=info msg="connecting to shim b62763e22dad93291aaddb97c7f215b54680f6cc349d86355a24f8e59f378a01" 
address="unix:///run/containerd/s/ee742b20471cb65fcfae16948c19ed0373d1769d4bc0effe1e0f57f302cc5611" namespace=k8s.io protocol=ttrpc version=3 May 15 23:59:20.726876 containerd[1523]: time="2025-05-15T23:59:20.726774741Z" level=info msg="connecting to shim 1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4" address="unix:///run/containerd/s/518a6dd66d0b26806b07aea0a1215b5da01cc7819a4f35d1d5edfe2e2fb8073f" namespace=k8s.io protocol=ttrpc version=3 May 15 23:59:20.751466 systemd[1]: Started cri-containerd-1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4.scope - libcontainer container 1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4. May 15 23:59:20.754646 systemd[1]: Started cri-containerd-b62763e22dad93291aaddb97c7f215b54680f6cc349d86355a24f8e59f378a01.scope - libcontainer container b62763e22dad93291aaddb97c7f215b54680f6cc349d86355a24f8e59f378a01. May 15 23:59:20.796597 containerd[1523]: time="2025-05-15T23:59:20.796426055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fkt5l,Uid:edd21679-b92a-47c7-9bb7-9c163e58b396,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\"" May 15 23:59:20.797205 kubelet[2682]: E0515 23:59:20.797172 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:20.799909 containerd[1523]: time="2025-05-15T23:59:20.799871511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xmr8m,Uid:471d7d58-83b4-479b-96f5-783a1647c6af,Namespace:kube-system,Attempt:0,} returns sandbox id \"b62763e22dad93291aaddb97c7f215b54680f6cc349d86355a24f8e59f378a01\"" May 15 23:59:20.800500 kubelet[2682]: E0515 23:59:20.800478 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 15 23:59:20.802249 containerd[1523]: time="2025-05-15T23:59:20.802221714Z" level=info msg="CreateContainer within sandbox \"b62763e22dad93291aaddb97c7f215b54680f6cc349d86355a24f8e59f378a01\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 23:59:20.811799 containerd[1523]: time="2025-05-15T23:59:20.811740731Z" level=info msg="Container 24b7c77cedd6a0c9543eff0b8234204f98c97f8513eac211ca0751d5ad380b75: CDI devices from CRI Config.CDIDevices: []" May 15 23:59:20.821184 containerd[1523]: time="2025-05-15T23:59:20.821131037Z" level=info msg="CreateContainer within sandbox \"b62763e22dad93291aaddb97c7f215b54680f6cc349d86355a24f8e59f378a01\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24b7c77cedd6a0c9543eff0b8234204f98c97f8513eac211ca0751d5ad380b75\"" May 15 23:59:20.821792 containerd[1523]: time="2025-05-15T23:59:20.821747257Z" level=info msg="StartContainer for \"24b7c77cedd6a0c9543eff0b8234204f98c97f8513eac211ca0751d5ad380b75\"" May 15 23:59:20.823477 containerd[1523]: time="2025-05-15T23:59:20.823433210Z" level=info msg="connecting to shim 24b7c77cedd6a0c9543eff0b8234204f98c97f8513eac211ca0751d5ad380b75" address="unix:///run/containerd/s/ee742b20471cb65fcfae16948c19ed0373d1769d4bc0effe1e0f57f302cc5611" protocol=ttrpc version=3 May 15 23:59:20.847572 systemd[1]: Started cri-containerd-24b7c77cedd6a0c9543eff0b8234204f98c97f8513eac211ca0751d5ad380b75.scope - libcontainer container 24b7c77cedd6a0c9543eff0b8234204f98c97f8513eac211ca0751d5ad380b75. 
May 15 23:59:20.904083 containerd[1523]: time="2025-05-15T23:59:20.904031751Z" level=info msg="StartContainer for \"24b7c77cedd6a0c9543eff0b8234204f98c97f8513eac211ca0751d5ad380b75\" returns successfully" May 15 23:59:21.342032 kubelet[2682]: E0515 23:59:21.341990 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:21.344760 kubelet[2682]: E0515 23:59:21.344701 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:21.511718 kubelet[2682]: I0515 23:59:21.511652 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xmr8m" podStartSLOduration=1.511629221 podStartE2EDuration="1.511629221s" podCreationTimestamp="2025-05-15 23:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:59:21.511443061 +0000 UTC m=+7.402244388" watchObservedRunningTime="2025-05-15 23:59:21.511629221 +0000 UTC m=+7.402430548" May 15 23:59:21.784977 kubelet[2682]: E0515 23:59:21.784946 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:22.346376 kubelet[2682]: E0515 23:59:22.346342 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:22.829958 kubelet[2682]: E0515 23:59:22.829919 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:22.897969 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount587970045.mount: Deactivated successfully. May 15 23:59:23.236277 containerd[1523]: time="2025-05-15T23:59:23.236214207Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:59:23.237185 containerd[1523]: time="2025-05-15T23:59:23.237093691Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 15 23:59:23.238492 containerd[1523]: time="2025-05-15T23:59:23.238424064Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:59:23.239741 containerd[1523]: time="2025-05-15T23:59:23.239701779Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.647483142s" May 15 23:59:23.239741 containerd[1523]: time="2025-05-15T23:59:23.239740311Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 23:59:23.240944 containerd[1523]: time="2025-05-15T23:59:23.240903570Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 23:59:23.242501 containerd[1523]: time="2025-05-15T23:59:23.242467553Z" level=info 
msg="CreateContainer within sandbox \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 23:59:23.253505 containerd[1523]: time="2025-05-15T23:59:23.253434940Z" level=info msg="Container 4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026: CDI devices from CRI Config.CDIDevices: []" May 15 23:59:23.261357 containerd[1523]: time="2025-05-15T23:59:23.261302494Z" level=info msg="CreateContainer within sandbox \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\"" May 15 23:59:23.261888 containerd[1523]: time="2025-05-15T23:59:23.261854983Z" level=info msg="StartContainer for \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\"" May 15 23:59:23.262893 containerd[1523]: time="2025-05-15T23:59:23.262863711Z" level=info msg="connecting to shim 4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026" address="unix:///run/containerd/s/c058ba96751ef01dfd5394bddd57c07c5824d630d6d8d283750a1ee3f27d2f0a" protocol=ttrpc version=3 May 15 23:59:23.288647 systemd[1]: Started cri-containerd-4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026.scope - libcontainer container 4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026. 
May 15 23:59:23.322591 containerd[1523]: time="2025-05-15T23:59:23.322552925Z" level=info msg="StartContainer for \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\" returns successfully" May 15 23:59:23.351765 kubelet[2682]: E0515 23:59:23.350223 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:23.351765 kubelet[2682]: E0515 23:59:23.350453 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:23.376695 kubelet[2682]: I0515 23:59:23.376602 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-v892q" podStartSLOduration=0.727312304 podStartE2EDuration="3.376575788s" podCreationTimestamp="2025-05-15 23:59:20 +0000 UTC" firstStartedPulling="2025-05-15 23:59:20.591474064 +0000 UTC m=+6.482275391" lastFinishedPulling="2025-05-15 23:59:23.240737538 +0000 UTC m=+9.131538875" observedRunningTime="2025-05-15 23:59:23.36575135 +0000 UTC m=+9.256552677" watchObservedRunningTime="2025-05-15 23:59:23.376575788 +0000 UTC m=+9.267377115" May 15 23:59:24.351863 kubelet[2682]: E0515 23:59:24.351764 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:24.352557 kubelet[2682]: E0515 23:59:24.351978 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:33.298664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1025975818.mount: Deactivated successfully. 
May 15 23:59:39.201798 containerd[1523]: time="2025-05-15T23:59:39.201709747Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:59:39.220505 containerd[1523]: time="2025-05-15T23:59:39.220426398Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 15 23:59:39.232385 containerd[1523]: time="2025-05-15T23:59:39.232336986Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:59:39.233955 containerd[1523]: time="2025-05-15T23:59:39.233918369Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.992971667s" May 15 23:59:39.234052 containerd[1523]: time="2025-05-15T23:59:39.233956430Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 23:59:39.236877 containerd[1523]: time="2025-05-15T23:59:39.236844007Z" level=info msg="CreateContainer within sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 23:59:39.248576 containerd[1523]: time="2025-05-15T23:59:39.248399979Z" level=info msg="Container c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074: CDI 
devices from CRI Config.CDIDevices: []" May 15 23:59:39.253144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount483037324.mount: Deactivated successfully. May 15 23:59:39.256152 containerd[1523]: time="2025-05-15T23:59:39.256106535Z" level=info msg="CreateContainer within sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\"" May 15 23:59:39.256687 containerd[1523]: time="2025-05-15T23:59:39.256622475Z" level=info msg="StartContainer for \"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\"" May 15 23:59:39.257626 containerd[1523]: time="2025-05-15T23:59:39.257601966Z" level=info msg="connecting to shim c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074" address="unix:///run/containerd/s/518a6dd66d0b26806b07aea0a1215b5da01cc7819a4f35d1d5edfe2e2fb8073f" protocol=ttrpc version=3 May 15 23:59:39.285598 systemd[1]: Started cri-containerd-c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074.scope - libcontainer container c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074. May 15 23:59:39.352806 containerd[1523]: time="2025-05-15T23:59:39.352757299Z" level=info msg="StartContainer for \"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\" returns successfully" May 15 23:59:39.363585 systemd[1]: cri-containerd-c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074.scope: Deactivated successfully. 
May 15 23:59:39.365415 containerd[1523]: time="2025-05-15T23:59:39.365367692Z" level=info msg="received exit event container_id:\"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\" id:\"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\" pid:3153 exited_at:{seconds:1747353579 nanos:364902217}" May 15 23:59:39.365487 containerd[1523]: time="2025-05-15T23:59:39.365448784Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\" id:\"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\" pid:3153 exited_at:{seconds:1747353579 nanos:364902217}" May 15 23:59:39.385375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074-rootfs.mount: Deactivated successfully. May 15 23:59:39.941896 kubelet[2682]: E0515 23:59:39.939782 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:40.942147 kubelet[2682]: E0515 23:59:40.942100 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:41.945372 kubelet[2682]: E0515 23:59:41.945334 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:41.948069 containerd[1523]: time="2025-05-15T23:59:41.948012901Z" level=info msg="CreateContainer within sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 23:59:42.255777 containerd[1523]: time="2025-05-15T23:59:42.255638555Z" level=info msg="Container 
ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031: CDI devices from CRI Config.CDIDevices: []" May 15 23:59:42.270162 systemd[1]: Started sshd@7-10.0.0.27:22-10.0.0.1:56874.service - OpenSSH per-connection server daemon (10.0.0.1:56874). May 15 23:59:42.284516 containerd[1523]: time="2025-05-15T23:59:42.284457420Z" level=info msg="CreateContainer within sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\"" May 15 23:59:42.285181 containerd[1523]: time="2025-05-15T23:59:42.285127009Z" level=info msg="StartContainer for \"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\"" May 15 23:59:42.286233 containerd[1523]: time="2025-05-15T23:59:42.286185568Z" level=info msg="connecting to shim ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031" address="unix:///run/containerd/s/518a6dd66d0b26806b07aea0a1215b5da01cc7819a4f35d1d5edfe2e2fb8073f" protocol=ttrpc version=3 May 15 23:59:42.310858 systemd[1]: Started cri-containerd-ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031.scope - libcontainer container ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031. May 15 23:59:42.362450 sshd[3186]: Accepted publickey for core from 10.0.0.1 port 56874 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 15 23:59:42.364656 sshd-session[3186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:42.365613 containerd[1523]: time="2025-05-15T23:59:42.364960826Z" level=info msg="StartContainer for \"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\" returns successfully" May 15 23:59:42.367409 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 23:59:42.367752 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 15 23:59:42.367951 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 23:59:42.370728 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:59:42.373163 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 23:59:42.373777 systemd[1]: cri-containerd-ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031.scope: Deactivated successfully. May 15 23:59:42.376909 containerd[1523]: time="2025-05-15T23:59:42.376659473Z" level=info msg="received exit event container_id:\"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\" id:\"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\" pid:3199 exited_at:{seconds:1747353582 nanos:376337758}" May 15 23:59:42.376909 containerd[1523]: time="2025-05-15T23:59:42.376867585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\" id:\"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\" pid:3199 exited_at:{seconds:1747353582 nanos:376337758}" May 15 23:59:42.383250 systemd-logind[1505]: New session 8 of user core. May 15 23:59:42.386014 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 23:59:42.400646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031-rootfs.mount: Deactivated successfully. May 15 23:59:42.418816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:59:42.668784 sshd[3233]: Connection closed by 10.0.0.1 port 56874 May 15 23:59:42.669146 sshd-session[3186]: pam_unix(sshd:session): session closed for user core May 15 23:59:42.673772 systemd[1]: sshd@7-10.0.0.27:22-10.0.0.1:56874.service: Deactivated successfully. May 15 23:59:42.675899 systemd[1]: session-8.scope: Deactivated successfully. May 15 23:59:42.676728 systemd-logind[1505]: Session 8 logged out. 
Waiting for processes to exit. May 15 23:59:42.677714 systemd-logind[1505]: Removed session 8. May 15 23:59:42.949376 kubelet[2682]: E0515 23:59:42.949218 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:42.951524 containerd[1523]: time="2025-05-15T23:59:42.951478262Z" level=info msg="CreateContainer within sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 23:59:43.256195 containerd[1523]: time="2025-05-15T23:59:43.256087410Z" level=info msg="Container 21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723: CDI devices from CRI Config.CDIDevices: []" May 15 23:59:43.410052 containerd[1523]: time="2025-05-15T23:59:43.409987094Z" level=info msg="CreateContainer within sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\"" May 15 23:59:43.410697 containerd[1523]: time="2025-05-15T23:59:43.410655070Z" level=info msg="StartContainer for \"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\"" May 15 23:59:43.412334 containerd[1523]: time="2025-05-15T23:59:43.412288590Z" level=info msg="connecting to shim 21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723" address="unix:///run/containerd/s/518a6dd66d0b26806b07aea0a1215b5da01cc7819a4f35d1d5edfe2e2fb8073f" protocol=ttrpc version=3 May 15 23:59:43.437476 systemd[1]: Started cri-containerd-21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723.scope - libcontainer container 21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723. 
May 15 23:59:43.484860 systemd[1]: cri-containerd-21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723.scope: Deactivated successfully. May 15 23:59:43.485808 containerd[1523]: time="2025-05-15T23:59:43.485767799Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\" id:\"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\" pid:3262 exited_at:{seconds:1747353583 nanos:485508993}" May 15 23:59:43.650408 containerd[1523]: time="2025-05-15T23:59:43.650354580Z" level=info msg="received exit event container_id:\"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\" id:\"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\" pid:3262 exited_at:{seconds:1747353583 nanos:485508993}" May 15 23:59:43.660646 containerd[1523]: time="2025-05-15T23:59:43.660603162Z" level=info msg="StartContainer for \"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\" returns successfully" May 15 23:59:43.673878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723-rootfs.mount: Deactivated successfully. 
May 15 23:59:43.953426 kubelet[2682]: E0515 23:59:43.953288 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:44.958579 kubelet[2682]: E0515 23:59:44.958541 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:44.960652 containerd[1523]: time="2025-05-15T23:59:44.960605988Z" level=info msg="CreateContainer within sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 23:59:45.298952 containerd[1523]: time="2025-05-15T23:59:45.298815410Z" level=info msg="Container f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d: CDI devices from CRI Config.CDIDevices: []" May 15 23:59:45.361290 containerd[1523]: time="2025-05-15T23:59:45.361233304Z" level=info msg="CreateContainer within sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\"" May 15 23:59:45.361987 containerd[1523]: time="2025-05-15T23:59:45.361946094Z" level=info msg="StartContainer for \"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\"" May 15 23:59:45.363186 containerd[1523]: time="2025-05-15T23:59:45.363140379Z" level=info msg="connecting to shim f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d" address="unix:///run/containerd/s/518a6dd66d0b26806b07aea0a1215b5da01cc7819a4f35d1d5edfe2e2fb8073f" protocol=ttrpc version=3 May 15 23:59:45.389704 systemd[1]: Started cri-containerd-f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d.scope - libcontainer container 
f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d. May 15 23:59:45.421291 systemd[1]: cri-containerd-f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d.scope: Deactivated successfully. May 15 23:59:45.422262 containerd[1523]: time="2025-05-15T23:59:45.422211373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\" id:\"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\" pid:3300 exited_at:{seconds:1747353585 nanos:421666809}" May 15 23:59:45.593907 containerd[1523]: time="2025-05-15T23:59:45.593754039Z" level=info msg="received exit event container_id:\"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\" id:\"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\" pid:3300 exited_at:{seconds:1747353585 nanos:421666809}" May 15 23:59:45.603952 containerd[1523]: time="2025-05-15T23:59:45.603905737Z" level=info msg="StartContainer for \"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\" returns successfully" May 15 23:59:45.615449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d-rootfs.mount: Deactivated successfully. 
May 15 23:59:45.965358 kubelet[2682]: E0515 23:59:45.965321 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:45.966921 containerd[1523]: time="2025-05-15T23:59:45.966857466Z" level=info msg="CreateContainer within sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 23:59:46.320882 containerd[1523]: time="2025-05-15T23:59:46.319884329Z" level=info msg="Container 6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7: CDI devices from CRI Config.CDIDevices: []" May 15 23:59:46.324726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1163251135.mount: Deactivated successfully. May 15 23:59:46.377212 containerd[1523]: time="2025-05-15T23:59:46.377133747Z" level=info msg="CreateContainer within sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\"" May 15 23:59:46.377812 containerd[1523]: time="2025-05-15T23:59:46.377782646Z" level=info msg="StartContainer for \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\"" May 15 23:59:46.378958 containerd[1523]: time="2025-05-15T23:59:46.378928089Z" level=info msg="connecting to shim 6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7" address="unix:///run/containerd/s/518a6dd66d0b26806b07aea0a1215b5da01cc7819a4f35d1d5edfe2e2fb8073f" protocol=ttrpc version=3 May 15 23:59:46.406586 systemd[1]: Started cri-containerd-6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7.scope - libcontainer container 6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7. 
May 15 23:59:46.568634 containerd[1523]: time="2025-05-15T23:59:46.568569731Z" level=info msg="StartContainer for \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" returns successfully" May 15 23:59:46.645802 containerd[1523]: time="2025-05-15T23:59:46.645758190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" id:\"2840e29ee863b179663811d90c2af6b29069eca659ac8a250af1327694afb4f0\" pid:3379 exited_at:{seconds:1747353586 nanos:645393825}" May 15 23:59:46.746374 kubelet[2682]: I0515 23:59:46.745698 2682 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 15 23:59:46.912566 systemd[1]: Created slice kubepods-burstable-podb77d49ab_69de_46b0_842f_16376caf8fec.slice - libcontainer container kubepods-burstable-podb77d49ab_69de_46b0_842f_16376caf8fec.slice. May 15 23:59:46.923278 systemd[1]: Created slice kubepods-burstable-podb317a42f_347d_4f5d_bd41_58442509ce8d.slice - libcontainer container kubepods-burstable-podb317a42f_347d_4f5d_bd41_58442509ce8d.slice. 
May 15 23:59:46.971739 kubelet[2682]: E0515 23:59:46.971684 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:46.986214 kubelet[2682]: I0515 23:59:46.986121 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fkt5l" podStartSLOduration=8.548088747 podStartE2EDuration="26.98593936s" podCreationTimestamp="2025-05-15 23:59:20 +0000 UTC" firstStartedPulling="2025-05-15 23:59:20.797696086 +0000 UTC m=+6.688497413" lastFinishedPulling="2025-05-15 23:59:39.235546699 +0000 UTC m=+25.126348026" observedRunningTime="2025-05-15 23:59:46.984984456 +0000 UTC m=+32.875785793" watchObservedRunningTime="2025-05-15 23:59:46.98593936 +0000 UTC m=+32.876740687" May 15 23:59:47.001343 kubelet[2682]: I0515 23:59:47.001227 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b317a42f-347d-4f5d-bd41-58442509ce8d-config-volume\") pod \"coredns-668d6bf9bc-56kxq\" (UID: \"b317a42f-347d-4f5d-bd41-58442509ce8d\") " pod="kube-system/coredns-668d6bf9bc-56kxq" May 15 23:59:47.001343 kubelet[2682]: I0515 23:59:47.001297 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9tqn\" (UniqueName: \"kubernetes.io/projected/b317a42f-347d-4f5d-bd41-58442509ce8d-kube-api-access-w9tqn\") pod \"coredns-668d6bf9bc-56kxq\" (UID: \"b317a42f-347d-4f5d-bd41-58442509ce8d\") " pod="kube-system/coredns-668d6bf9bc-56kxq" May 15 23:59:47.001343 kubelet[2682]: I0515 23:59:47.001352 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b77d49ab-69de-46b0-842f-16376caf8fec-config-volume\") pod \"coredns-668d6bf9bc-9dq45\" (UID: \"b77d49ab-69de-46b0-842f-16376caf8fec\") " 
pod="kube-system/coredns-668d6bf9bc-9dq45" May 15 23:59:47.001743 kubelet[2682]: I0515 23:59:47.001378 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tlc\" (UniqueName: \"kubernetes.io/projected/b77d49ab-69de-46b0-842f-16376caf8fec-kube-api-access-c2tlc\") pod \"coredns-668d6bf9bc-9dq45\" (UID: \"b77d49ab-69de-46b0-842f-16376caf8fec\") " pod="kube-system/coredns-668d6bf9bc-9dq45" May 15 23:59:47.219286 kubelet[2682]: E0515 23:59:47.219094 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:47.220078 containerd[1523]: time="2025-05-15T23:59:47.220032549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9dq45,Uid:b77d49ab-69de-46b0-842f-16376caf8fec,Namespace:kube-system,Attempt:0,}" May 15 23:59:47.228480 kubelet[2682]: E0515 23:59:47.228403 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:47.229071 containerd[1523]: time="2025-05-15T23:59:47.229014920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-56kxq,Uid:b317a42f-347d-4f5d-bd41-58442509ce8d,Namespace:kube-system,Attempt:0,}" May 15 23:59:47.687375 systemd[1]: Started sshd@8-10.0.0.27:22-10.0.0.1:39962.service - OpenSSH per-connection server daemon (10.0.0.1:39962). May 15 23:59:47.753698 sshd[3467]: Accepted publickey for core from 10.0.0.1 port 39962 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 15 23:59:47.755659 sshd-session[3467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:47.760469 systemd-logind[1505]: New session 9 of user core. May 15 23:59:47.770515 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 15 23:59:47.887266 sshd[3469]: Connection closed by 10.0.0.1 port 39962 May 15 23:59:47.887676 sshd-session[3467]: pam_unix(sshd:session): session closed for user core May 15 23:59:47.892941 systemd[1]: sshd@8-10.0.0.27:22-10.0.0.1:39962.service: Deactivated successfully. May 15 23:59:47.895200 systemd[1]: session-9.scope: Deactivated successfully. May 15 23:59:47.896058 systemd-logind[1505]: Session 9 logged out. Waiting for processes to exit. May 15 23:59:47.897167 systemd-logind[1505]: Removed session 9. May 15 23:59:48.689007 systemd-networkd[1411]: cilium_host: Link UP May 15 23:59:48.689178 systemd-networkd[1411]: cilium_net: Link UP May 15 23:59:48.689428 systemd-networkd[1411]: cilium_net: Gained carrier May 15 23:59:48.689656 systemd-networkd[1411]: cilium_host: Gained carrier May 15 23:59:48.694967 kubelet[2682]: E0515 23:59:48.694903 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:48.834717 systemd-networkd[1411]: cilium_vxlan: Link UP May 15 23:59:48.834728 systemd-networkd[1411]: cilium_vxlan: Gained carrier May 15 23:59:49.092349 kernel: NET: Registered PF_ALG protocol family May 15 23:59:49.486590 systemd-networkd[1411]: cilium_net: Gained IPv6LL May 15 23:59:49.615540 systemd-networkd[1411]: cilium_host: Gained IPv6LL May 15 23:59:49.825743 systemd-networkd[1411]: lxc_health: Link UP May 15 23:59:49.826105 systemd-networkd[1411]: lxc_health: Gained carrier May 15 23:59:50.358276 systemd-networkd[1411]: lxcdd3989ef6703: Link UP May 15 23:59:50.359354 kernel: eth0: renamed from tmp83792 May 15 23:59:50.375447 kernel: eth0: renamed from tmpd92a6 May 15 23:59:50.379563 systemd-networkd[1411]: lxcdd3989ef6703: Gained carrier May 15 23:59:50.379749 systemd-networkd[1411]: lxc605b1382178b: Link UP May 15 23:59:50.383237 systemd-networkd[1411]: lxc605b1382178b: Gained carrier
May 15 23:59:50.575105 systemd-networkd[1411]: cilium_vxlan: Gained IPv6LL May 15 23:59:50.695381 kubelet[2682]: E0515 23:59:50.695343 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:50.978840 kubelet[2682]: E0515 23:59:50.978706 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:51.598505 systemd-networkd[1411]: lxc_health: Gained IPv6LL May 15 23:59:51.981242 kubelet[2682]: E0515 23:59:51.981167 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:52.174481 systemd-networkd[1411]: lxcdd3989ef6703: Gained IPv6LL May 15 23:59:52.302615 systemd-networkd[1411]: lxc605b1382178b: Gained IPv6LL May 15 23:59:52.904069 systemd[1]: Started sshd@9-10.0.0.27:22-10.0.0.1:39976.service - OpenSSH per-connection server daemon (10.0.0.1:39976). May 15 23:59:52.957039 sshd[3864]: Accepted publickey for core from 10.0.0.1 port 39976 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 15 23:59:52.958565 sshd-session[3864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:52.962940 systemd-logind[1505]: New session 10 of user core. May 15 23:59:52.969446 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 23:59:53.132462 sshd[3866]: Connection closed by 10.0.0.1 port 39976 May 15 23:59:53.132815 sshd-session[3864]: pam_unix(sshd:session): session closed for user core May 15 23:59:53.137143 systemd[1]: sshd@9-10.0.0.27:22-10.0.0.1:39976.service: Deactivated successfully. May 15 23:59:53.139239 systemd[1]: session-10.scope: Deactivated successfully.
May 15 23:59:53.139975 systemd-logind[1505]: Session 10 logged out. Waiting for processes to exit. May 15 23:59:53.140936 systemd-logind[1505]: Removed session 10. May 15 23:59:55.316161 containerd[1523]: time="2025-05-15T23:59:55.316108576Z" level=info msg="connecting to shim d92a6e18c05d4a4234a19ad8eabb9a6dc9d10897d2b3b22a16e3ae90fd569966" address="unix:///run/containerd/s/0e681bf342ee80722fe712ff10e919f0649fd912c26ad3e7ebb12c31f8f90dcd" namespace=k8s.io protocol=ttrpc version=3 May 15 23:59:55.379572 systemd[1]: Started cri-containerd-d92a6e18c05d4a4234a19ad8eabb9a6dc9d10897d2b3b22a16e3ae90fd569966.scope - libcontainer container d92a6e18c05d4a4234a19ad8eabb9a6dc9d10897d2b3b22a16e3ae90fd569966. May 15 23:59:55.391744 containerd[1523]: time="2025-05-15T23:59:55.391683460Z" level=info msg="connecting to shim 83792837e2ba10764998d318b8d4e4d79c530977096504092b33e80a32eb6604" address="unix:///run/containerd/s/707b3247a01383ef08d6d23b60171ea8195c60e5879286633447ea5177d23748" namespace=k8s.io protocol=ttrpc version=3 May 15 23:59:55.398041 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 23:59:55.417497 systemd[1]: Started cri-containerd-83792837e2ba10764998d318b8d4e4d79c530977096504092b33e80a32eb6604.scope - libcontainer container 83792837e2ba10764998d318b8d4e4d79c530977096504092b33e80a32eb6604.
May 15 23:59:55.435349 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 23:59:55.609380 containerd[1523]: time="2025-05-15T23:59:55.609193207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-56kxq,Uid:b317a42f-347d-4f5d-bd41-58442509ce8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d92a6e18c05d4a4234a19ad8eabb9a6dc9d10897d2b3b22a16e3ae90fd569966\"" May 15 23:59:55.610325 kubelet[2682]: E0515 23:59:55.610270 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:55.612296 containerd[1523]: time="2025-05-15T23:59:55.612243339Z" level=info msg="CreateContainer within sandbox \"d92a6e18c05d4a4234a19ad8eabb9a6dc9d10897d2b3b22a16e3ae90fd569966\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 23:59:55.632949 containerd[1523]: time="2025-05-15T23:59:55.632888631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9dq45,Uid:b77d49ab-69de-46b0-842f-16376caf8fec,Namespace:kube-system,Attempt:0,} returns sandbox id \"83792837e2ba10764998d318b8d4e4d79c530977096504092b33e80a32eb6604\"" May 15 23:59:55.633677 kubelet[2682]: E0515 23:59:55.633650 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:55.635233 containerd[1523]: time="2025-05-15T23:59:55.635207107Z" level=info msg="CreateContainer within sandbox \"83792837e2ba10764998d318b8d4e4d79c530977096504092b33e80a32eb6604\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 23:59:56.212396 containerd[1523]: time="2025-05-15T23:59:56.212057778Z" level=info msg="Container 22ee833a05dcb5de6e65ecd75e2c090ebca1d64facab403c5d19d444c9208670: CDI devices from CRI Config.CDIDevices: []"
May 15 23:59:56.310240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134721539.mount: Deactivated successfully. May 15 23:59:56.355726 containerd[1523]: time="2025-05-15T23:59:56.355627399Z" level=info msg="Container 673786ae8fa8b4e24f29bf4e4d7514da2367d83273f5313d8214b01052c92948: CDI devices from CRI Config.CDIDevices: []" May 15 23:59:56.605864 containerd[1523]: time="2025-05-15T23:59:56.605746835Z" level=info msg="CreateContainer within sandbox \"d92a6e18c05d4a4234a19ad8eabb9a6dc9d10897d2b3b22a16e3ae90fd569966\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22ee833a05dcb5de6e65ecd75e2c090ebca1d64facab403c5d19d444c9208670\"" May 15 23:59:56.606389 containerd[1523]: time="2025-05-15T23:59:56.606303310Z" level=info msg="StartContainer for \"22ee833a05dcb5de6e65ecd75e2c090ebca1d64facab403c5d19d444c9208670\"" May 15 23:59:56.607528 containerd[1523]: time="2025-05-15T23:59:56.607490080Z" level=info msg="connecting to shim 22ee833a05dcb5de6e65ecd75e2c090ebca1d64facab403c5d19d444c9208670" address="unix:///run/containerd/s/0e681bf342ee80722fe712ff10e919f0649fd912c26ad3e7ebb12c31f8f90dcd" protocol=ttrpc version=3 May 15 23:59:56.633456 systemd[1]: Started cri-containerd-22ee833a05dcb5de6e65ecd75e2c090ebca1d64facab403c5d19d444c9208670.scope - libcontainer container 22ee833a05dcb5de6e65ecd75e2c090ebca1d64facab403c5d19d444c9208670.
May 15 23:59:57.074900 containerd[1523]: time="2025-05-15T23:59:57.074854338Z" level=info msg="CreateContainer within sandbox \"83792837e2ba10764998d318b8d4e4d79c530977096504092b33e80a32eb6604\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"673786ae8fa8b4e24f29bf4e4d7514da2367d83273f5313d8214b01052c92948\"" May 15 23:59:57.075390 containerd[1523]: time="2025-05-15T23:59:57.075338559Z" level=info msg="StartContainer for \"673786ae8fa8b4e24f29bf4e4d7514da2367d83273f5313d8214b01052c92948\"" May 15 23:59:57.075964 containerd[1523]: time="2025-05-15T23:59:57.075935821Z" level=info msg="StartContainer for \"22ee833a05dcb5de6e65ecd75e2c090ebca1d64facab403c5d19d444c9208670\" returns successfully" May 15 23:59:57.076508 containerd[1523]: time="2025-05-15T23:59:57.076187143Z" level=info msg="connecting to shim 673786ae8fa8b4e24f29bf4e4d7514da2367d83273f5313d8214b01052c92948" address="unix:///run/containerd/s/707b3247a01383ef08d6d23b60171ea8195c60e5879286633447ea5177d23748" protocol=ttrpc version=3 May 15 23:59:57.079112 kubelet[2682]: E0515 23:59:57.079034 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:57.112577 systemd[1]: Started cri-containerd-673786ae8fa8b4e24f29bf4e4d7514da2367d83273f5313d8214b01052c92948.scope - libcontainer container 673786ae8fa8b4e24f29bf4e4d7514da2367d83273f5313d8214b01052c92948. 
May 15 23:59:57.275453 containerd[1523]: time="2025-05-15T23:59:57.275391857Z" level=info msg="StartContainer for \"673786ae8fa8b4e24f29bf4e4d7514da2367d83273f5313d8214b01052c92948\" returns successfully" May 15 23:59:58.112103 kubelet[2682]: E0515 23:59:58.111937 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:58.112103 kubelet[2682]: E0515 23:59:58.111950 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:58.149853 systemd[1]: Started sshd@10-10.0.0.27:22-10.0.0.1:56010.service - OpenSSH per-connection server daemon (10.0.0.1:56010). May 15 23:59:58.211588 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 56010 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 15 23:59:58.213588 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:59:58.219026 systemd-logind[1505]: New session 11 of user core. May 15 23:59:58.232525 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 15 23:59:58.561124 kubelet[2682]: I0515 23:59:58.561032 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9dq45" podStartSLOduration=38.561003102 podStartE2EDuration="38.561003102s" podCreationTimestamp="2025-05-15 23:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:59:58.55870805 +0000 UTC m=+44.449509388" watchObservedRunningTime="2025-05-15 23:59:58.561003102 +0000 UTC m=+44.451804449" May 15 23:59:58.561500 kubelet[2682]: I0515 23:59:58.561167 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-56kxq" podStartSLOduration=38.561158865 podStartE2EDuration="38.561158865s" podCreationTimestamp="2025-05-15 23:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:59:57.307265406 +0000 UTC m=+43.198066743" watchObservedRunningTime="2025-05-15 23:59:58.561158865 +0000 UTC m=+44.451960212" May 15 23:59:58.565370 sshd[4052]: Connection closed by 10.0.0.1 port 56010 May 15 23:59:58.565787 sshd-session[4050]: pam_unix(sshd:session): session closed for user core May 15 23:59:58.570960 systemd[1]: sshd@10-10.0.0.27:22-10.0.0.1:56010.service: Deactivated successfully. May 15 23:59:58.573096 systemd[1]: session-11.scope: Deactivated successfully. May 15 23:59:58.574048 systemd-logind[1505]: Session 11 logged out. Waiting for processes to exit. May 15 23:59:58.575368 systemd-logind[1505]: Removed session 11. 
May 15 23:59:59.113855 kubelet[2682]: E0515 23:59:59.113763 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:59:59.113855 kubelet[2682]: E0515 23:59:59.113763 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:00.116026 kubelet[2682]: E0516 00:00:00.115981 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:00.116757 kubelet[2682]: E0516 00:00:00.116717 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:01.117713 kubelet[2682]: E0516 00:00:01.117647 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:03.579348 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 16 00:00:03.580605 systemd[1]: Started sshd@11-10.0.0.27:22-10.0.0.1:56024.service - OpenSSH per-connection server daemon (10.0.0.1:56024). May 16 00:00:03.636186 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 56024 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:03.638122 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:03.640235 systemd[1]: logrotate.service: Deactivated successfully. May 16 00:00:03.644399 systemd-logind[1505]: New session 12 of user core. May 16 00:00:03.652557 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 16 00:00:03.785781 sshd[4079]: Connection closed by 10.0.0.1 port 56024 May 16 00:00:03.786102 sshd-session[4076]: pam_unix(sshd:session): session closed for user core May 16 00:00:03.790187 systemd[1]: sshd@11-10.0.0.27:22-10.0.0.1:56024.service: Deactivated successfully. May 16 00:00:03.792322 systemd[1]: session-12.scope: Deactivated successfully. May 16 00:00:03.793099 systemd-logind[1505]: Session 12 logged out. Waiting for processes to exit. May 16 00:00:03.793928 systemd-logind[1505]: Removed session 12. May 16 00:00:08.800726 systemd[1]: Started sshd@12-10.0.0.27:22-10.0.0.1:54636.service - OpenSSH per-connection server daemon (10.0.0.1:54636). May 16 00:00:08.855058 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 54636 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:08.856498 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:08.861139 systemd-logind[1505]: New session 13 of user core. May 16 00:00:08.876580 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 00:00:08.996975 sshd[4096]: Connection closed by 10.0.0.1 port 54636 May 16 00:00:08.997355 sshd-session[4094]: pam_unix(sshd:session): session closed for user core May 16 00:00:09.009743 systemd[1]: sshd@12-10.0.0.27:22-10.0.0.1:54636.service: Deactivated successfully. May 16 00:00:09.011921 systemd[1]: session-13.scope: Deactivated successfully. May 16 00:00:09.013964 systemd-logind[1505]: Session 13 logged out. Waiting for processes to exit. May 16 00:00:09.015630 systemd[1]: Started sshd@13-10.0.0.27:22-10.0.0.1:54650.service - OpenSSH per-connection server daemon (10.0.0.1:54650). May 16 00:00:09.016824 systemd-logind[1505]: Removed session 13. 
May 16 00:00:09.069011 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 54650 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:09.070639 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:09.075626 systemd-logind[1505]: New session 14 of user core. May 16 00:00:09.086463 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 00:00:09.249179 sshd[4112]: Connection closed by 10.0.0.1 port 54650 May 16 00:00:09.249555 sshd-session[4109]: pam_unix(sshd:session): session closed for user core May 16 00:00:09.260954 systemd[1]: sshd@13-10.0.0.27:22-10.0.0.1:54650.service: Deactivated successfully. May 16 00:00:09.264202 systemd[1]: session-14.scope: Deactivated successfully. May 16 00:00:09.266973 systemd-logind[1505]: Session 14 logged out. Waiting for processes to exit. May 16 00:00:09.270883 systemd[1]: Started sshd@14-10.0.0.27:22-10.0.0.1:54662.service - OpenSSH per-connection server daemon (10.0.0.1:54662). May 16 00:00:09.274649 systemd-logind[1505]: Removed session 14. May 16 00:00:09.312554 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 54662 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:09.314081 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:09.318390 systemd-logind[1505]: New session 15 of user core. May 16 00:00:09.329476 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 00:00:09.451249 sshd[4125]: Connection closed by 10.0.0.1 port 54662 May 16 00:00:09.451567 sshd-session[4122]: pam_unix(sshd:session): session closed for user core May 16 00:00:09.455234 systemd[1]: sshd@14-10.0.0.27:22-10.0.0.1:54662.service: Deactivated successfully. May 16 00:00:09.457264 systemd[1]: session-15.scope: Deactivated successfully. May 16 00:00:09.458061 systemd-logind[1505]: Session 15 logged out. Waiting for processes to exit. 
May 16 00:00:09.458916 systemd-logind[1505]: Removed session 15. May 16 00:00:14.467431 systemd[1]: Started sshd@15-10.0.0.27:22-10.0.0.1:54666.service - OpenSSH per-connection server daemon (10.0.0.1:54666). May 16 00:00:14.523157 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 54666 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:14.524965 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:14.529671 systemd-logind[1505]: New session 16 of user core. May 16 00:00:14.536460 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 00:00:14.652403 sshd[4144]: Connection closed by 10.0.0.1 port 54666 May 16 00:00:14.652755 sshd-session[4142]: pam_unix(sshd:session): session closed for user core May 16 00:00:14.657379 systemd[1]: sshd@15-10.0.0.27:22-10.0.0.1:54666.service: Deactivated successfully. May 16 00:00:14.659396 systemd[1]: session-16.scope: Deactivated successfully. May 16 00:00:14.660259 systemd-logind[1505]: Session 16 logged out. Waiting for processes to exit. May 16 00:00:14.661432 systemd-logind[1505]: Removed session 16. May 16 00:00:19.667628 systemd[1]: Started sshd@16-10.0.0.27:22-10.0.0.1:50434.service - OpenSSH per-connection server daemon (10.0.0.1:50434). May 16 00:00:19.719896 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 50434 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:19.721637 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:19.726165 systemd-logind[1505]: New session 17 of user core. May 16 00:00:19.737486 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 16 00:00:19.860185 sshd[4160]: Connection closed by 10.0.0.1 port 50434 May 16 00:00:19.860682 sshd-session[4158]: pam_unix(sshd:session): session closed for user core May 16 00:00:19.866604 systemd[1]: sshd@16-10.0.0.27:22-10.0.0.1:50434.service: Deactivated successfully. May 16 00:00:19.869050 systemd[1]: session-17.scope: Deactivated successfully. May 16 00:00:19.869844 systemd-logind[1505]: Session 17 logged out. Waiting for processes to exit. May 16 00:00:19.870955 systemd-logind[1505]: Removed session 17. May 16 00:00:24.878542 systemd[1]: Started sshd@17-10.0.0.27:22-10.0.0.1:50442.service - OpenSSH per-connection server daemon (10.0.0.1:50442). May 16 00:00:24.933182 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 50442 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:24.935153 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:24.940947 systemd-logind[1505]: New session 18 of user core. May 16 00:00:24.948624 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 00:00:25.070111 sshd[4177]: Connection closed by 10.0.0.1 port 50442 May 16 00:00:25.070412 sshd-session[4175]: pam_unix(sshd:session): session closed for user core May 16 00:00:25.075559 systemd[1]: sshd@17-10.0.0.27:22-10.0.0.1:50442.service: Deactivated successfully. May 16 00:00:25.078060 systemd[1]: session-18.scope: Deactivated successfully. May 16 00:00:25.079160 systemd-logind[1505]: Session 18 logged out. Waiting for processes to exit. May 16 00:00:25.080446 systemd-logind[1505]: Removed session 18. May 16 00:00:30.082599 systemd[1]: Started sshd@18-10.0.0.27:22-10.0.0.1:36242.service - OpenSSH per-connection server daemon (10.0.0.1:36242). 
May 16 00:00:30.132113 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 36242 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:30.133908 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:30.138324 systemd-logind[1505]: New session 19 of user core. May 16 00:00:30.154565 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 00:00:30.266411 sshd[4192]: Connection closed by 10.0.0.1 port 36242 May 16 00:00:30.266926 sshd-session[4190]: pam_unix(sshd:session): session closed for user core May 16 00:00:30.279358 systemd[1]: sshd@18-10.0.0.27:22-10.0.0.1:36242.service: Deactivated successfully. May 16 00:00:30.281434 systemd[1]: session-19.scope: Deactivated successfully. May 16 00:00:30.283412 systemd-logind[1505]: Session 19 logged out. Waiting for processes to exit. May 16 00:00:30.285119 systemd[1]: Started sshd@19-10.0.0.27:22-10.0.0.1:36246.service - OpenSSH per-connection server daemon (10.0.0.1:36246). May 16 00:00:30.286047 systemd-logind[1505]: Removed session 19. May 16 00:00:30.339149 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 36246 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:30.340944 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:30.345601 systemd-logind[1505]: New session 20 of user core. May 16 00:00:30.353620 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 00:00:31.162888 sshd[4207]: Connection closed by 10.0.0.1 port 36246 May 16 00:00:31.163188 sshd-session[4204]: pam_unix(sshd:session): session closed for user core May 16 00:00:31.176429 systemd[1]: sshd@19-10.0.0.27:22-10.0.0.1:36246.service: Deactivated successfully. May 16 00:00:31.178532 systemd[1]: session-20.scope: Deactivated successfully. May 16 00:00:31.180848 systemd-logind[1505]: Session 20 logged out. Waiting for processes to exit. 
May 16 00:00:31.182203 systemd[1]: Started sshd@20-10.0.0.27:22-10.0.0.1:36252.service - OpenSSH per-connection server daemon (10.0.0.1:36252). May 16 00:00:31.184423 systemd-logind[1505]: Removed session 20. May 16 00:00:31.232397 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 36252 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:31.234060 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:31.238899 systemd-logind[1505]: New session 21 of user core. May 16 00:00:31.249544 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 00:00:32.780720 sshd[4221]: Connection closed by 10.0.0.1 port 36252 May 16 00:00:32.781266 sshd-session[4218]: pam_unix(sshd:session): session closed for user core May 16 00:00:32.793073 systemd[1]: sshd@20-10.0.0.27:22-10.0.0.1:36252.service: Deactivated successfully. May 16 00:00:32.796043 systemd[1]: session-21.scope: Deactivated successfully. May 16 00:00:32.797116 systemd-logind[1505]: Session 21 logged out. Waiting for processes to exit. May 16 00:00:32.800587 systemd[1]: Started sshd@21-10.0.0.27:22-10.0.0.1:36254.service - OpenSSH per-connection server daemon (10.0.0.1:36254). May 16 00:00:32.802835 systemd-logind[1505]: Removed session 21. May 16 00:00:32.849124 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 36254 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:32.850761 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:32.855420 systemd-logind[1505]: New session 22 of user core. May 16 00:00:32.863461 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 16 00:00:33.466859 sshd[4242]: Connection closed by 10.0.0.1 port 36254 May 16 00:00:33.467345 sshd-session[4239]: pam_unix(sshd:session): session closed for user core May 16 00:00:33.477265 systemd[1]: sshd@21-10.0.0.27:22-10.0.0.1:36254.service: Deactivated successfully. May 16 00:00:33.479590 systemd[1]: session-22.scope: Deactivated successfully. May 16 00:00:33.481157 systemd-logind[1505]: Session 22 logged out. Waiting for processes to exit. May 16 00:00:33.482809 systemd[1]: Started sshd@22-10.0.0.27:22-10.0.0.1:36266.service - OpenSSH per-connection server daemon (10.0.0.1:36266). May 16 00:00:33.483729 systemd-logind[1505]: Removed session 22. May 16 00:00:33.526780 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 36266 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:33.528858 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:33.534245 systemd-logind[1505]: New session 23 of user core. May 16 00:00:33.543505 systemd[1]: Started session-23.scope - Session 23 of User core. May 16 00:00:33.707994 sshd[4255]: Connection closed by 10.0.0.1 port 36266 May 16 00:00:33.708369 sshd-session[4252]: pam_unix(sshd:session): session closed for user core May 16 00:00:33.712230 systemd[1]: sshd@22-10.0.0.27:22-10.0.0.1:36266.service: Deactivated successfully. May 16 00:00:33.714124 systemd[1]: session-23.scope: Deactivated successfully. May 16 00:00:33.714972 systemd-logind[1505]: Session 23 logged out. Waiting for processes to exit. May 16 00:00:33.715915 systemd-logind[1505]: Removed session 23. May 16 00:00:37.198560 kubelet[2682]: E0516 00:00:37.198480 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:38.727832 systemd[1]: Started sshd@23-10.0.0.27:22-10.0.0.1:45544.service - OpenSSH per-connection server daemon (10.0.0.1:45544). 
May 16 00:00:38.784393 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 45544 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:38.786234 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:38.791740 systemd-logind[1505]: New session 24 of user core. May 16 00:00:38.796465 systemd[1]: Started session-24.scope - Session 24 of User core. May 16 00:00:38.921462 sshd[4271]: Connection closed by 10.0.0.1 port 45544 May 16 00:00:38.921919 sshd-session[4269]: pam_unix(sshd:session): session closed for user core May 16 00:00:38.926752 systemd[1]: sshd@23-10.0.0.27:22-10.0.0.1:45544.service: Deactivated successfully. May 16 00:00:38.929152 systemd[1]: session-24.scope: Deactivated successfully. May 16 00:00:38.930116 systemd-logind[1505]: Session 24 logged out. Waiting for processes to exit. May 16 00:00:38.931175 systemd-logind[1505]: Removed session 24. May 16 00:00:39.198790 kubelet[2682]: E0516 00:00:39.198751 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:43.934640 systemd[1]: Started sshd@24-10.0.0.27:22-10.0.0.1:45556.service - OpenSSH per-connection server daemon (10.0.0.1:45556). May 16 00:00:43.979866 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 45556 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:43.982037 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:43.987840 systemd-logind[1505]: New session 25 of user core. May 16 00:00:44.003544 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 16 00:00:44.125541 sshd[4288]: Connection closed by 10.0.0.1 port 45556 May 16 00:00:44.125977 sshd-session[4286]: pam_unix(sshd:session): session closed for user core May 16 00:00:44.130804 systemd[1]: sshd@24-10.0.0.27:22-10.0.0.1:45556.service: Deactivated successfully. May 16 00:00:44.133193 systemd[1]: session-25.scope: Deactivated successfully. May 16 00:00:44.133979 systemd-logind[1505]: Session 25 logged out. Waiting for processes to exit. May 16 00:00:44.135108 systemd-logind[1505]: Removed session 25. May 16 00:00:48.199401 kubelet[2682]: E0516 00:00:48.199365 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:49.142412 systemd[1]: Started sshd@25-10.0.0.27:22-10.0.0.1:38258.service - OpenSSH per-connection server daemon (10.0.0.1:38258). May 16 00:00:49.199162 kubelet[2682]: E0516 00:00:49.199131 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:49.206224 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 38258 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:49.208113 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:49.215098 systemd-logind[1505]: New session 26 of user core. May 16 00:00:49.228637 systemd[1]: Started session-26.scope - Session 26 of User core. May 16 00:00:49.345655 sshd[4305]: Connection closed by 10.0.0.1 port 38258 May 16 00:00:49.346001 sshd-session[4303]: pam_unix(sshd:session): session closed for user core May 16 00:00:49.349799 systemd[1]: sshd@25-10.0.0.27:22-10.0.0.1:38258.service: Deactivated successfully. May 16 00:00:49.351772 systemd[1]: session-26.scope: Deactivated successfully.
May 16 00:00:49.352486 systemd-logind[1505]: Session 26 logged out. Waiting for processes to exit. May 16 00:00:49.353434 systemd-logind[1505]: Removed session 26. May 16 00:00:50.202026 kubelet[2682]: E0516 00:00:50.201988 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:54.363620 systemd[1]: Started sshd@26-10.0.0.27:22-10.0.0.1:38262.service - OpenSSH per-connection server daemon (10.0.0.1:38262). May 16 00:00:54.405124 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 38262 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:54.407118 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:54.412783 systemd-logind[1505]: New session 27 of user core. May 16 00:00:54.420495 systemd[1]: Started session-27.scope - Session 27 of User core. May 16 00:00:54.537800 sshd[4323]: Connection closed by 10.0.0.1 port 38262 May 16 00:00:54.538236 sshd-session[4321]: pam_unix(sshd:session): session closed for user core May 16 00:00:54.552665 systemd[1]: sshd@26-10.0.0.27:22-10.0.0.1:38262.service: Deactivated successfully. May 16 00:00:54.554547 systemd[1]: session-27.scope: Deactivated successfully. May 16 00:00:54.556245 systemd-logind[1505]: Session 27 logged out. Waiting for processes to exit. May 16 00:00:54.557749 systemd[1]: Started sshd@27-10.0.0.27:22-10.0.0.1:38278.service - OpenSSH per-connection server daemon (10.0.0.1:38278). May 16 00:00:54.558738 systemd-logind[1505]: Removed session 27. May 16 00:00:54.608945 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 38278 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:54.611036 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:54.616599 systemd-logind[1505]: New session 28 of user core.
May 16 00:00:54.626593 systemd[1]: Started session-28.scope - Session 28 of User core. May 16 00:00:56.407128 containerd[1523]: time="2025-05-16T00:00:56.407078545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" id:\"944b8b5fb6b8498940b36c651f559fee7e0cbbd056f14c75a2d44c638d5b194d\" pid:4358 exited_at:{seconds:1747353656 nanos:406649563}" May 16 00:00:56.409782 containerd[1523]: time="2025-05-16T00:00:56.409734883Z" level=info msg="StopContainer for \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" with timeout 2 (s)" May 16 00:00:56.410190 containerd[1523]: time="2025-05-16T00:00:56.410158935Z" level=info msg="Stop container \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" with signal terminated" May 16 00:00:56.415792 containerd[1523]: time="2025-05-16T00:00:56.415695014Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:00:56.419158 systemd-networkd[1411]: lxc_health: Link DOWN May 16 00:00:56.419169 systemd-networkd[1411]: lxc_health: Lost carrier May 16 00:00:56.430870 containerd[1523]: time="2025-05-16T00:00:56.430594604Z" level=info msg="StopContainer for \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\" with timeout 30 (s)" May 16 00:00:56.431570 containerd[1523]: time="2025-05-16T00:00:56.431516769Z" level=info msg="Stop container \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\" with signal terminated" May 16 00:00:56.443199 systemd[1]: cri-containerd-6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7.scope: Deactivated successfully. 
May 16 00:00:56.443671 systemd[1]: cri-containerd-6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7.scope: Consumed 7.341s CPU time, 122.4M memory peak, 152K read from disk, 13.3M written to disk. May 16 00:00:56.444972 containerd[1523]: time="2025-05-16T00:00:56.444480546Z" level=info msg="received exit event container_id:\"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" id:\"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" pid:3338 exited_at:{seconds:1747353656 nanos:443298759}" May 16 00:00:56.444972 containerd[1523]: time="2025-05-16T00:00:56.444733215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" id:\"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" pid:3338 exited_at:{seconds:1747353656 nanos:443298759}" May 16 00:00:56.445890 systemd[1]: cri-containerd-4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026.scope: Deactivated successfully. May 16 00:00:56.447671 containerd[1523]: time="2025-05-16T00:00:56.447123519Z" level=info msg="received exit event container_id:\"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\" id:\"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\" pid:3090 exited_at:{seconds:1747353656 nanos:446855982}" May 16 00:00:56.447671 containerd[1523]: time="2025-05-16T00:00:56.447364916Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\" id:\"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\" pid:3090 exited_at:{seconds:1747353656 nanos:446855982}" May 16 00:00:56.470451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7-rootfs.mount: Deactivated successfully. 
May 16 00:00:56.473035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026-rootfs.mount: Deactivated successfully. May 16 00:00:56.623773 containerd[1523]: time="2025-05-16T00:00:56.623694206Z" level=info msg="StopContainer for \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" returns successfully" May 16 00:00:56.624621 containerd[1523]: time="2025-05-16T00:00:56.624577838Z" level=info msg="StopPodSandbox for \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\"" May 16 00:00:56.641045 containerd[1523]: time="2025-05-16T00:00:56.640931090Z" level=info msg="Container to stop \"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:00:56.641045 containerd[1523]: time="2025-05-16T00:00:56.641017093Z" level=info msg="Container to stop \"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:00:56.641045 containerd[1523]: time="2025-05-16T00:00:56.641031450Z" level=info msg="Container to stop \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:00:56.641045 containerd[1523]: time="2025-05-16T00:00:56.641042511Z" level=info msg="Container to stop \"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:00:56.641045 containerd[1523]: time="2025-05-16T00:00:56.641051458Z" level=info msg="Container to stop \"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:00:56.646157 containerd[1523]: time="2025-05-16T00:00:56.646101156Z" level=info msg="StopContainer for 
\"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\" returns successfully" May 16 00:00:56.646703 containerd[1523]: time="2025-05-16T00:00:56.646664141Z" level=info msg="StopPodSandbox for \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\"" May 16 00:00:56.646902 containerd[1523]: time="2025-05-16T00:00:56.646868768Z" level=info msg="Container to stop \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:00:56.649402 systemd[1]: cri-containerd-1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4.scope: Deactivated successfully. May 16 00:00:56.651692 containerd[1523]: time="2025-05-16T00:00:56.651572992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" id:\"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" pid:2874 exit_status:137 exited_at:{seconds:1747353656 nanos:651102762}" May 16 00:00:56.666935 systemd[1]: cri-containerd-9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090.scope: Deactivated successfully. May 16 00:00:56.683690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4-rootfs.mount: Deactivated successfully. May 16 00:00:56.697118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090-rootfs.mount: Deactivated successfully. 
May 16 00:00:56.709917 containerd[1523]: time="2025-05-16T00:00:56.709853230Z" level=info msg="shim disconnected" id=9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090 namespace=k8s.io May 16 00:00:56.711164 containerd[1523]: time="2025-05-16T00:00:56.710054220Z" level=warning msg="cleaning up after shim disconnected" id=9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090 namespace=k8s.io May 16 00:00:56.711164 containerd[1523]: time="2025-05-16T00:00:56.710069359Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:00:56.711164 containerd[1523]: time="2025-05-16T00:00:56.710019865Z" level=info msg="shim disconnected" id=1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4 namespace=k8s.io May 16 00:00:56.711164 containerd[1523]: time="2025-05-16T00:00:56.710177794Z" level=warning msg="cleaning up after shim disconnected" id=1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4 namespace=k8s.io May 16 00:00:56.711164 containerd[1523]: time="2025-05-16T00:00:56.710189566Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:00:56.741701 containerd[1523]: time="2025-05-16T00:00:56.741648639Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\" id:\"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\" pid:2798 exit_status:137 exited_at:{seconds:1747353656 nanos:668892423}" May 16 00:00:56.745885 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4-shm.mount: Deactivated successfully. May 16 00:00:56.746034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090-shm.mount: Deactivated successfully. 
May 16 00:00:56.756625 containerd[1523]: time="2025-05-16T00:00:56.756559862Z" level=info msg="TearDown network for sandbox \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\" successfully" May 16 00:00:56.756625 containerd[1523]: time="2025-05-16T00:00:56.756605578Z" level=info msg="StopPodSandbox for \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\" returns successfully" May 16 00:00:56.759223 containerd[1523]: time="2025-05-16T00:00:56.759166896Z" level=info msg="TearDown network for sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" successfully" May 16 00:00:56.759223 containerd[1523]: time="2025-05-16T00:00:56.759218965Z" level=info msg="StopPodSandbox for \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" returns successfully" May 16 00:00:56.764813 containerd[1523]: time="2025-05-16T00:00:56.764777386Z" level=info msg="received exit event sandbox_id:\"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\" exit_status:137 exited_at:{seconds:1747353656 nanos:668892423}" May 16 00:00:56.765068 containerd[1523]: time="2025-05-16T00:00:56.765001029Z" level=info msg="received exit event sandbox_id:\"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" exit_status:137 exited_at:{seconds:1747353656 nanos:651102762}" May 16 00:00:56.833502 kubelet[2682]: I0516 00:00:56.833449 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-lib-modules\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.833502 kubelet[2682]: I0516 00:00:56.833496 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-cilium-run\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: 
\"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.834087 kubelet[2682]: I0516 00:00:56.833534 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edd21679-b92a-47c7-9bb7-9c163e58b396-clustermesh-secrets\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.834087 kubelet[2682]: I0516 00:00:56.833567 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmsk9\" (UniqueName: \"kubernetes.io/projected/e4144af0-3797-4d68-8108-1aaa90edce43-kube-api-access-cmsk9\") pod \"e4144af0-3797-4d68-8108-1aaa90edce43\" (UID: \"e4144af0-3797-4d68-8108-1aaa90edce43\") " May 16 00:00:56.834087 kubelet[2682]: I0516 00:00:56.833585 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-etc-cni-netd\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.834087 kubelet[2682]: I0516 00:00:56.833600 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-cni-path\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.834087 kubelet[2682]: I0516 00:00:56.833603 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:00:56.834087 kubelet[2682]: I0516 00:00:56.833621 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edd21679-b92a-47c7-9bb7-9c163e58b396-cilium-config-path\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.834288 kubelet[2682]: I0516 00:00:56.833608 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:00:56.834288 kubelet[2682]: I0516 00:00:56.833650 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-hostproc\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.834288 kubelet[2682]: I0516 00:00:56.833677 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:00:56.834288 kubelet[2682]: I0516 00:00:56.833680 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-cni-path" (OuterVolumeSpecName: "cni-path") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:00:56.834288 kubelet[2682]: I0516 00:00:56.833799 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-hostproc" (OuterVolumeSpecName: "hostproc") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:00:56.836982 kubelet[2682]: I0516 00:00:56.833682 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8v8w\" (UniqueName: \"kubernetes.io/projected/edd21679-b92a-47c7-9bb7-9c163e58b396-kube-api-access-d8v8w\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.836982 kubelet[2682]: I0516 00:00:56.834971 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-bpf-maps\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.836982 kubelet[2682]: I0516 00:00:56.834998 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edd21679-b92a-47c7-9bb7-9c163e58b396-hubble-tls\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.836982 kubelet[2682]: I0516 00:00:56.835012 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-xtables-lock\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.836982 kubelet[2682]: I0516 00:00:56.835029 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4144af0-3797-4d68-8108-1aaa90edce43-cilium-config-path\") pod \"e4144af0-3797-4d68-8108-1aaa90edce43\" (UID: \"e4144af0-3797-4d68-8108-1aaa90edce43\") " May 16 00:00:56.836982 kubelet[2682]: I0516 00:00:56.835049 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-host-proc-sys-net\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.837352 kubelet[2682]: I0516 00:00:56.835064 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-cilium-cgroup\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.837352 kubelet[2682]: I0516 00:00:56.835079 2682 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-host-proc-sys-kernel\") pod \"edd21679-b92a-47c7-9bb7-9c163e58b396\" (UID: \"edd21679-b92a-47c7-9bb7-9c163e58b396\") " May 16 00:00:56.837352 kubelet[2682]: I0516 00:00:56.835122 2682 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.837352 kubelet[2682]: I0516 00:00:56.835132 2682 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.837352 kubelet[2682]: I0516 00:00:56.835141 2682 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.837352 kubelet[2682]: I0516 00:00:56.835151 2682 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.837352 kubelet[2682]: I0516 00:00:56.835159 2682 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.837622 kubelet[2682]: I0516 00:00:56.835183 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:00:56.837622 kubelet[2682]: I0516 00:00:56.835203 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:00:56.837925 kubelet[2682]: I0516 00:00:56.837880 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd21679-b92a-47c7-9bb7-9c163e58b396-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:00:56.838061 kubelet[2682]: I0516 00:00:56.837933 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:00:56.838061 kubelet[2682]: I0516 00:00:56.837963 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:00:56.838061 kubelet[2682]: I0516 00:00:56.838005 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:00:56.838927 kubelet[2682]: I0516 00:00:56.838738 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd21679-b92a-47c7-9bb7-9c163e58b396-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:00:56.839478 kubelet[2682]: I0516 00:00:56.839435 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd21679-b92a-47c7-9bb7-9c163e58b396-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:00:56.839714 kubelet[2682]: I0516 00:00:56.839630 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd21679-b92a-47c7-9bb7-9c163e58b396-kube-api-access-d8v8w" (OuterVolumeSpecName: "kube-api-access-d8v8w") pod "edd21679-b92a-47c7-9bb7-9c163e58b396" (UID: "edd21679-b92a-47c7-9bb7-9c163e58b396"). InnerVolumeSpecName "kube-api-access-d8v8w". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:00:56.839799 kubelet[2682]: I0516 00:00:56.839771 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4144af0-3797-4d68-8108-1aaa90edce43-kube-api-access-cmsk9" (OuterVolumeSpecName: "kube-api-access-cmsk9") pod "e4144af0-3797-4d68-8108-1aaa90edce43" (UID: "e4144af0-3797-4d68-8108-1aaa90edce43"). InnerVolumeSpecName "kube-api-access-cmsk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:00:56.842142 kubelet[2682]: I0516 00:00:56.842087 2682 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4144af0-3797-4d68-8108-1aaa90edce43-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e4144af0-3797-4d68-8108-1aaa90edce43" (UID: "e4144af0-3797-4d68-8108-1aaa90edce43"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:00:56.936542 kubelet[2682]: I0516 00:00:56.936293 2682 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edd21679-b92a-47c7-9bb7-9c163e58b396-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.936542 kubelet[2682]: I0516 00:00:56.936388 2682 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.936542 kubelet[2682]: I0516 00:00:56.936404 2682 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4144af0-3797-4d68-8108-1aaa90edce43-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.936542 kubelet[2682]: I0516 00:00:56.936418 2682 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.936542 kubelet[2682]: I0516 00:00:56.936429 2682 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.936542 kubelet[2682]: I0516 00:00:56.936440 2682 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.936542 kubelet[2682]: I0516 00:00:56.936451 2682 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edd21679-b92a-47c7-9bb7-9c163e58b396-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.936542 
kubelet[2682]: I0516 00:00:56.936462 2682 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cmsk9\" (UniqueName: \"kubernetes.io/projected/e4144af0-3797-4d68-8108-1aaa90edce43-kube-api-access-cmsk9\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.936952 kubelet[2682]: I0516 00:00:56.936474 2682 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edd21679-b92a-47c7-9bb7-9c163e58b396-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.936952 kubelet[2682]: I0516 00:00:56.936484 2682 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d8v8w\" (UniqueName: \"kubernetes.io/projected/edd21679-b92a-47c7-9bb7-9c163e58b396-kube-api-access-d8v8w\") on node \"localhost\" DevicePath \"\"" May 16 00:00:56.936952 kubelet[2682]: I0516 00:00:56.936495 2682 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edd21679-b92a-47c7-9bb7-9c163e58b396-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 00:00:57.233028 kubelet[2682]: I0516 00:00:57.232850 2682 scope.go:117] "RemoveContainer" containerID="4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026" May 16 00:00:57.234998 containerd[1523]: time="2025-05-16T00:00:57.234950379Z" level=info msg="RemoveContainer for \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\"" May 16 00:00:57.241135 systemd[1]: Removed slice kubepods-besteffort-pode4144af0_3797_4d68_8108_1aaa90edce43.slice - libcontainer container kubepods-besteffort-pode4144af0_3797_4d68_8108_1aaa90edce43.slice. May 16 00:00:57.245009 systemd[1]: Removed slice kubepods-burstable-podedd21679_b92a_47c7_9bb7_9c163e58b396.slice - libcontainer container kubepods-burstable-podedd21679_b92a_47c7_9bb7_9c163e58b396.slice. 
May 16 00:00:57.245107 systemd[1]: kubepods-burstable-podedd21679_b92a_47c7_9bb7_9c163e58b396.slice: Consumed 7.471s CPU time, 122.9M memory peak, 514K read from disk, 13.3M written to disk. May 16 00:00:57.302522 containerd[1523]: time="2025-05-16T00:00:57.302457073Z" level=info msg="RemoveContainer for \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\" returns successfully" May 16 00:00:57.302927 kubelet[2682]: I0516 00:00:57.302884 2682 scope.go:117] "RemoveContainer" containerID="4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026" May 16 00:00:57.303296 containerd[1523]: time="2025-05-16T00:00:57.303252168Z" level=error msg="ContainerStatus for \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\": not found" May 16 00:00:57.307109 kubelet[2682]: E0516 00:00:57.307053 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\": not found" containerID="4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026" May 16 00:00:57.307191 kubelet[2682]: I0516 00:00:57.307104 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026"} err="failed to get container status \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d96b9696a0288233a7a7fd77101bd35ded4043c4f9e3d6c643939430a8a3026\": not found" May 16 00:00:57.307235 kubelet[2682]: I0516 00:00:57.307199 2682 scope.go:117] "RemoveContainer" containerID="6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7" May 16 00:00:57.309233 
containerd[1523]: time="2025-05-16T00:00:57.309195725Z" level=info msg="RemoveContainer for \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\"" May 16 00:00:57.399652 containerd[1523]: time="2025-05-16T00:00:57.399545900Z" level=info msg="RemoveContainer for \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" returns successfully" May 16 00:00:57.399915 kubelet[2682]: I0516 00:00:57.399871 2682 scope.go:117] "RemoveContainer" containerID="f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d" May 16 00:00:57.401390 containerd[1523]: time="2025-05-16T00:00:57.401359773Z" level=info msg="RemoveContainer for \"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\"" May 16 00:00:57.463962 containerd[1523]: time="2025-05-16T00:00:57.463894497Z" level=info msg="RemoveContainer for \"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\" returns successfully" May 16 00:00:57.464471 kubelet[2682]: I0516 00:00:57.464233 2682 scope.go:117] "RemoveContainer" containerID="21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723" May 16 00:00:57.467305 containerd[1523]: time="2025-05-16T00:00:57.467270465Z" level=info msg="RemoveContainer for \"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\"" May 16 00:00:57.470287 systemd[1]: var-lib-kubelet-pods-edd21679\x2db92a\x2d47c7\x2d9bb7\x2d9c163e58b396-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd8v8w.mount: Deactivated successfully. May 16 00:00:57.470448 systemd[1]: var-lib-kubelet-pods-edd21679\x2db92a\x2d47c7\x2d9bb7\x2d9c163e58b396-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 00:00:57.470555 systemd[1]: var-lib-kubelet-pods-edd21679\x2db92a\x2d47c7\x2d9bb7\x2d9c163e58b396-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 16 00:00:57.470652 systemd[1]: var-lib-kubelet-pods-e4144af0\x2d3797\x2d4d68\x2d8108\x2d1aaa90edce43-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcmsk9.mount: Deactivated successfully. May 16 00:00:57.527164 containerd[1523]: time="2025-05-16T00:00:57.527003197Z" level=info msg="RemoveContainer for \"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\" returns successfully" May 16 00:00:57.527527 kubelet[2682]: I0516 00:00:57.527406 2682 scope.go:117] "RemoveContainer" containerID="ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031" May 16 00:00:57.529820 containerd[1523]: time="2025-05-16T00:00:57.529782848Z" level=info msg="RemoveContainer for \"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\"" May 16 00:00:57.590027 containerd[1523]: time="2025-05-16T00:00:57.589960932Z" level=info msg="RemoveContainer for \"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\" returns successfully" May 16 00:00:57.590292 kubelet[2682]: I0516 00:00:57.590255 2682 scope.go:117] "RemoveContainer" containerID="c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074" May 16 00:00:57.591979 containerd[1523]: time="2025-05-16T00:00:57.591891736Z" level=info msg="RemoveContainer for \"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\"" May 16 00:00:57.665174 containerd[1523]: time="2025-05-16T00:00:57.665100831Z" level=info msg="RemoveContainer for \"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\" returns successfully" May 16 00:00:57.665460 kubelet[2682]: I0516 00:00:57.665415 2682 scope.go:117] "RemoveContainer" containerID="6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7" May 16 00:00:57.665793 containerd[1523]: time="2025-05-16T00:00:57.665722758Z" level=error msg="ContainerStatus for \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\": not found" May 16 00:00:57.665917 kubelet[2682]: E0516 00:00:57.665875 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\": not found" containerID="6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7" May 16 00:00:57.665965 kubelet[2682]: I0516 00:00:57.665919 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7"} err="failed to get container status \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ac3a3bae2188cab192300bc07af31153a73ba3348a285a8aef03dbdc2fc58a7\": not found" May 16 00:00:57.665965 kubelet[2682]: I0516 00:00:57.665949 2682 scope.go:117] "RemoveContainer" containerID="f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d" May 16 00:00:57.666500 containerd[1523]: time="2025-05-16T00:00:57.666122073Z" level=error msg="ContainerStatus for \"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\": not found" May 16 00:00:57.666687 kubelet[2682]: E0516 00:00:57.666634 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\": not found" containerID="f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d" May 16 00:00:57.666687 kubelet[2682]: I0516 00:00:57.666662 2682 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d"} err="failed to get container status \"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f301bd2751bc08ddfc945d4592996b3648981d3eb38663e5e9cf5a3b29ba600d\": not found" May 16 00:00:57.666687 kubelet[2682]: I0516 00:00:57.666680 2682 scope.go:117] "RemoveContainer" containerID="21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723" May 16 00:00:57.667057 containerd[1523]: time="2025-05-16T00:00:57.666992250Z" level=error msg="ContainerStatus for \"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\": not found" May 16 00:00:57.667200 kubelet[2682]: E0516 00:00:57.667178 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\": not found" containerID="21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723" May 16 00:00:57.667255 kubelet[2682]: I0516 00:00:57.667203 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723"} err="failed to get container status \"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\": rpc error: code = NotFound desc = an error occurred when try to find container \"21c91621b8ae0dd21bddbb3aacd0bee4a4fba9c8eba12b082f0a62b9e4deb723\": not found" May 16 00:00:57.667255 kubelet[2682]: I0516 00:00:57.667220 2682 scope.go:117] "RemoveContainer" containerID="ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031" May 16 00:00:57.667457 containerd[1523]: 
time="2025-05-16T00:00:57.667418807Z" level=error msg="ContainerStatus for \"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\": not found" May 16 00:00:57.667621 kubelet[2682]: E0516 00:00:57.667600 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\": not found" containerID="ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031" May 16 00:00:57.667689 kubelet[2682]: I0516 00:00:57.667623 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031"} err="failed to get container status \"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce3602803b6a2bb03b737cae32c483cc7ae8aeda6926ab85914b3b5b37bf6031\": not found" May 16 00:00:57.667689 kubelet[2682]: I0516 00:00:57.667644 2682 scope.go:117] "RemoveContainer" containerID="c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074" May 16 00:00:57.667854 containerd[1523]: time="2025-05-16T00:00:57.667794759Z" level=error msg="ContainerStatus for \"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\": not found" May 16 00:00:57.667939 kubelet[2682]: E0516 00:00:57.667915 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\": not 
found" containerID="c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074" May 16 00:00:57.667982 kubelet[2682]: I0516 00:00:57.667940 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074"} err="failed to get container status \"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\": rpc error: code = NotFound desc = an error occurred when try to find container \"c49bfda820042ccd0a948c246da6835e72d71e786627fa39f051ac0bde0ca074\": not found" May 16 00:00:58.201545 kubelet[2682]: I0516 00:00:58.201469 2682 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4144af0-3797-4d68-8108-1aaa90edce43" path="/var/lib/kubelet/pods/e4144af0-3797-4d68-8108-1aaa90edce43/volumes" May 16 00:00:58.202227 kubelet[2682]: I0516 00:00:58.202181 2682 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd21679-b92a-47c7-9bb7-9c163e58b396" path="/var/lib/kubelet/pods/edd21679-b92a-47c7-9bb7-9c163e58b396/volumes" May 16 00:00:58.255680 sshd[4338]: Connection closed by 10.0.0.1 port 38278 May 16 00:00:58.256169 sshd-session[4335]: pam_unix(sshd:session): session closed for user core May 16 00:00:58.266912 systemd[1]: sshd@27-10.0.0.27:22-10.0.0.1:38278.service: Deactivated successfully. May 16 00:00:58.269775 systemd[1]: session-28.scope: Deactivated successfully. May 16 00:00:58.271985 systemd-logind[1505]: Session 28 logged out. Waiting for processes to exit. May 16 00:00:58.273797 systemd[1]: Started sshd@28-10.0.0.27:22-10.0.0.1:39536.service - OpenSSH per-connection server daemon (10.0.0.1:39536). May 16 00:00:58.275242 systemd-logind[1505]: Removed session 28. 
May 16 00:00:58.323678 sshd[4494]: Accepted publickey for core from 10.0.0.1 port 39536 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:58.325712 sshd-session[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:58.330895 systemd-logind[1505]: New session 29 of user core. May 16 00:00:58.339458 systemd[1]: Started session-29.scope - Session 29 of User core. May 16 00:00:58.966860 sshd[4497]: Connection closed by 10.0.0.1 port 39536 May 16 00:00:58.967286 sshd-session[4494]: pam_unix(sshd:session): session closed for user core May 16 00:00:58.983099 systemd[1]: sshd@28-10.0.0.27:22-10.0.0.1:39536.service: Deactivated successfully. May 16 00:00:58.985573 systemd[1]: session-29.scope: Deactivated successfully. May 16 00:00:58.992472 kubelet[2682]: I0516 00:00:58.989361 2682 memory_manager.go:355] "RemoveStaleState removing state" podUID="e4144af0-3797-4d68-8108-1aaa90edce43" containerName="cilium-operator" May 16 00:00:58.992472 kubelet[2682]: I0516 00:00:58.989390 2682 memory_manager.go:355] "RemoveStaleState removing state" podUID="edd21679-b92a-47c7-9bb7-9c163e58b396" containerName="cilium-agent" May 16 00:00:58.992430 systemd-logind[1505]: Session 29 logged out. Waiting for processes to exit. May 16 00:00:58.995657 systemd[1]: Started sshd@29-10.0.0.27:22-10.0.0.1:39550.service - OpenSSH per-connection server daemon (10.0.0.1:39550). May 16 00:00:59.007285 systemd-logind[1505]: Removed session 29. May 16 00:00:59.016393 systemd[1]: Created slice kubepods-burstable-pod40067d51_59ad_40a5_9fa3_e5bd37c515bc.slice - libcontainer container kubepods-burstable-pod40067d51_59ad_40a5_9fa3_e5bd37c515bc.slice. 
May 16 00:00:59.050696 kubelet[2682]: I0516 00:00:59.050652 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40067d51-59ad-40a5-9fa3-e5bd37c515bc-host-proc-sys-net\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.050696 kubelet[2682]: I0516 00:00:59.050692 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40067d51-59ad-40a5-9fa3-e5bd37c515bc-hostproc\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.050696 kubelet[2682]: I0516 00:00:59.050709 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/40067d51-59ad-40a5-9fa3-e5bd37c515bc-hubble-tls\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.050902 kubelet[2682]: I0516 00:00:59.050724 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40067d51-59ad-40a5-9fa3-e5bd37c515bc-cilium-cgroup\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.050902 kubelet[2682]: I0516 00:00:59.050785 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40067d51-59ad-40a5-9fa3-e5bd37c515bc-xtables-lock\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.050902 kubelet[2682]: I0516 00:00:59.050817 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40067d51-59ad-40a5-9fa3-e5bd37c515bc-cilium-config-path\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.050990 kubelet[2682]: I0516 00:00:59.050946 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40067d51-59ad-40a5-9fa3-e5bd37c515bc-cilium-run\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.051024 kubelet[2682]: I0516 00:00:59.050995 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40067d51-59ad-40a5-9fa3-e5bd37c515bc-bpf-maps\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.051024 kubelet[2682]: I0516 00:00:59.051014 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40067d51-59ad-40a5-9fa3-e5bd37c515bc-etc-cni-netd\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.051096 kubelet[2682]: I0516 00:00:59.051036 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40067d51-59ad-40a5-9fa3-e5bd37c515bc-clustermesh-secrets\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.051096 kubelet[2682]: I0516 00:00:59.051072 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40067d51-59ad-40a5-9fa3-e5bd37c515bc-cni-path\") pod 
\"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.051156 kubelet[2682]: I0516 00:00:59.051099 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrq8k\" (UniqueName: \"kubernetes.io/projected/40067d51-59ad-40a5-9fa3-e5bd37c515bc-kube-api-access-nrq8k\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.051156 kubelet[2682]: I0516 00:00:59.051120 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40067d51-59ad-40a5-9fa3-e5bd37c515bc-lib-modules\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.051156 kubelet[2682]: I0516 00:00:59.051138 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/40067d51-59ad-40a5-9fa3-e5bd37c515bc-cilium-ipsec-secrets\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.051236 kubelet[2682]: I0516 00:00:59.051160 2682 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40067d51-59ad-40a5-9fa3-e5bd37c515bc-host-proc-sys-kernel\") pod \"cilium-kt9c6\" (UID: \"40067d51-59ad-40a5-9fa3-e5bd37c515bc\") " pod="kube-system/cilium-kt9c6" May 16 00:00:59.052066 sshd[4508]: Accepted publickey for core from 10.0.0.1 port 39550 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:59.053834 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:59.058473 systemd-logind[1505]: New session 30 of user core. 
May 16 00:00:59.067508 systemd[1]: Started session-30.scope - Session 30 of User core. May 16 00:00:59.119291 sshd[4511]: Connection closed by 10.0.0.1 port 39550 May 16 00:00:59.119812 sshd-session[4508]: pam_unix(sshd:session): session closed for user core May 16 00:00:59.133287 systemd[1]: sshd@29-10.0.0.27:22-10.0.0.1:39550.service: Deactivated successfully. May 16 00:00:59.135269 systemd[1]: session-30.scope: Deactivated successfully. May 16 00:00:59.137162 systemd-logind[1505]: Session 30 logged out. Waiting for processes to exit. May 16 00:00:59.138954 systemd[1]: Started sshd@30-10.0.0.27:22-10.0.0.1:39558.service - OpenSSH per-connection server daemon (10.0.0.1:39558). May 16 00:00:59.139970 systemd-logind[1505]: Removed session 30. May 16 00:00:59.202098 sshd[4517]: Accepted publickey for core from 10.0.0.1 port 39558 ssh2: RSA SHA256:XsJn4T+/RYxuNUuIxGTEUZjANF5ZJTtbZPekMS904A4 May 16 00:00:59.203908 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:00:59.208676 systemd-logind[1505]: New session 31 of user core. May 16 00:00:59.223638 systemd[1]: Started session-31.scope - Session 31 of User core. 
May 16 00:00:59.263420 kubelet[2682]: E0516 00:00:59.263354 2682 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:00:59.323346 kubelet[2682]: E0516 00:00:59.323175 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:59.323958 containerd[1523]: time="2025-05-16T00:00:59.323897012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kt9c6,Uid:40067d51-59ad-40a5-9fa3-e5bd37c515bc,Namespace:kube-system,Attempt:0,}" May 16 00:00:59.346439 containerd[1523]: time="2025-05-16T00:00:59.346387473Z" level=info msg="connecting to shim 198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb" address="unix:///run/containerd/s/80280cc260fdc457603a85096b717c36f783ebb1787a4ce9d5fbae9128383592" namespace=k8s.io protocol=ttrpc version=3 May 16 00:00:59.379507 systemd[1]: Started cri-containerd-198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb.scope - libcontainer container 198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb. 
May 16 00:00:59.407279 containerd[1523]: time="2025-05-16T00:00:59.407203967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kt9c6,Uid:40067d51-59ad-40a5-9fa3-e5bd37c515bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb\"" May 16 00:00:59.408152 kubelet[2682]: E0516 00:00:59.408119 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:00:59.410965 containerd[1523]: time="2025-05-16T00:00:59.410933844Z" level=info msg="CreateContainer within sandbox \"198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:00:59.419165 containerd[1523]: time="2025-05-16T00:00:59.419133286Z" level=info msg="Container 5f8323e9de2cbee3310d72bb6cbb1bcbf012fd4b10e66a09ff8755181e8a0913: CDI devices from CRI Config.CDIDevices: []" May 16 00:00:59.428776 containerd[1523]: time="2025-05-16T00:00:59.428730003Z" level=info msg="CreateContainer within sandbox \"198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f8323e9de2cbee3310d72bb6cbb1bcbf012fd4b10e66a09ff8755181e8a0913\"" May 16 00:00:59.429392 containerd[1523]: time="2025-05-16T00:00:59.429336250Z" level=info msg="StartContainer for \"5f8323e9de2cbee3310d72bb6cbb1bcbf012fd4b10e66a09ff8755181e8a0913\"" May 16 00:00:59.430348 containerd[1523]: time="2025-05-16T00:00:59.430294874Z" level=info msg="connecting to shim 5f8323e9de2cbee3310d72bb6cbb1bcbf012fd4b10e66a09ff8755181e8a0913" address="unix:///run/containerd/s/80280cc260fdc457603a85096b717c36f783ebb1787a4ce9d5fbae9128383592" protocol=ttrpc version=3 May 16 00:00:59.458687 systemd[1]: Started cri-containerd-5f8323e9de2cbee3310d72bb6cbb1bcbf012fd4b10e66a09ff8755181e8a0913.scope - libcontainer 
container 5f8323e9de2cbee3310d72bb6cbb1bcbf012fd4b10e66a09ff8755181e8a0913. May 16 00:00:59.497233 containerd[1523]: time="2025-05-16T00:00:59.496662830Z" level=info msg="StartContainer for \"5f8323e9de2cbee3310d72bb6cbb1bcbf012fd4b10e66a09ff8755181e8a0913\" returns successfully" May 16 00:00:59.504597 systemd[1]: cri-containerd-5f8323e9de2cbee3310d72bb6cbb1bcbf012fd4b10e66a09ff8755181e8a0913.scope: Deactivated successfully. May 16 00:00:59.505950 containerd[1523]: time="2025-05-16T00:00:59.505907261Z" level=info msg="received exit event container_id:\"5f8323e9de2cbee3310d72bb6cbb1bcbf012fd4b10e66a09ff8755181e8a0913\" id:\"5f8323e9de2cbee3310d72bb6cbb1bcbf012fd4b10e66a09ff8755181e8a0913\" pid:4590 exited_at:{seconds:1747353659 nanos:505503317}" May 16 00:00:59.506272 containerd[1523]: time="2025-05-16T00:00:59.506058036Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f8323e9de2cbee3310d72bb6cbb1bcbf012fd4b10e66a09ff8755181e8a0913\" id:\"5f8323e9de2cbee3310d72bb6cbb1bcbf012fd4b10e66a09ff8755181e8a0913\" pid:4590 exited_at:{seconds:1747353659 nanos:505503317}" May 16 00:01:00.248217 kubelet[2682]: E0516 00:01:00.248165 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:01:00.250456 containerd[1523]: time="2025-05-16T00:01:00.250399554Z" level=info msg="CreateContainer within sandbox \"198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:01:00.544178 containerd[1523]: time="2025-05-16T00:01:00.543995693Z" level=info msg="Container d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63: CDI devices from CRI Config.CDIDevices: []" May 16 00:01:00.602920 containerd[1523]: time="2025-05-16T00:01:00.602867284Z" level=info msg="CreateContainer within sandbox 
\"198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63\"" May 16 00:01:00.603555 containerd[1523]: time="2025-05-16T00:01:00.603504458Z" level=info msg="StartContainer for \"d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63\"" May 16 00:01:00.604410 containerd[1523]: time="2025-05-16T00:01:00.604369094Z" level=info msg="connecting to shim d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63" address="unix:///run/containerd/s/80280cc260fdc457603a85096b717c36f783ebb1787a4ce9d5fbae9128383592" protocol=ttrpc version=3 May 16 00:01:00.626563 systemd[1]: Started cri-containerd-d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63.scope - libcontainer container d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63. May 16 00:01:00.661451 containerd[1523]: time="2025-05-16T00:01:00.661407797Z" level=info msg="StartContainer for \"d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63\" returns successfully" May 16 00:01:00.667177 systemd[1]: cri-containerd-d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63.scope: Deactivated successfully. 
May 16 00:01:00.667817 containerd[1523]: time="2025-05-16T00:01:00.667768339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63\" id:\"d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63\" pid:4634 exited_at:{seconds:1747353660 nanos:667414340}" May 16 00:01:00.667817 containerd[1523]: time="2025-05-16T00:01:00.667941958Z" level=info msg="received exit event container_id:\"d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63\" id:\"d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63\" pid:4634 exited_at:{seconds:1747353660 nanos:667414340}" May 16 00:01:00.692957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d625ef70823e9c5f9ceb52fb01122f1e5764c3af7568ce1386b8e2a00fd16a63-rootfs.mount: Deactivated successfully. May 16 00:01:01.253852 kubelet[2682]: E0516 00:01:01.253815 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:01:01.255968 containerd[1523]: time="2025-05-16T00:01:01.255920837Z" level=info msg="CreateContainer within sandbox \"198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:01:01.268949 containerd[1523]: time="2025-05-16T00:01:01.268881913Z" level=info msg="Container cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12: CDI devices from CRI Config.CDIDevices: []" May 16 00:01:01.282293 containerd[1523]: time="2025-05-16T00:01:01.282219061Z" level=info msg="CreateContainer within sandbox \"198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12\"" May 16 00:01:01.282840 containerd[1523]: time="2025-05-16T00:01:01.282802775Z" 
level=info msg="StartContainer for \"cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12\"" May 16 00:01:01.285334 containerd[1523]: time="2025-05-16T00:01:01.284712879Z" level=info msg="connecting to shim cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12" address="unix:///run/containerd/s/80280cc260fdc457603a85096b717c36f783ebb1787a4ce9d5fbae9128383592" protocol=ttrpc version=3 May 16 00:01:01.308569 systemd[1]: Started cri-containerd-cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12.scope - libcontainer container cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12. May 16 00:01:01.354871 systemd[1]: cri-containerd-cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12.scope: Deactivated successfully. May 16 00:01:01.355906 containerd[1523]: time="2025-05-16T00:01:01.355836260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12\" id:\"cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12\" pid:4677 exited_at:{seconds:1747353661 nanos:355413620}" May 16 00:01:01.355977 containerd[1523]: time="2025-05-16T00:01:01.355933714Z" level=info msg="received exit event container_id:\"cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12\" id:\"cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12\" pid:4677 exited_at:{seconds:1747353661 nanos:355413620}" May 16 00:01:01.356223 containerd[1523]: time="2025-05-16T00:01:01.356182164Z" level=info msg="StartContainer for \"cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12\" returns successfully" May 16 00:01:01.383482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfb11ec5c58bbe10dcfd6f34ce2711bc71e1bdfa5ba2cecd9e97a252ba891f12-rootfs.mount: Deactivated successfully. 
May 16 00:01:02.258789 kubelet[2682]: E0516 00:01:02.258755 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:01:02.260766 containerd[1523]: time="2025-05-16T00:01:02.260704392Z" level=info msg="CreateContainer within sandbox \"198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:01:02.279804 containerd[1523]: time="2025-05-16T00:01:02.279753279Z" level=info msg="Container 5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b: CDI devices from CRI Config.CDIDevices: []" May 16 00:01:02.287559 containerd[1523]: time="2025-05-16T00:01:02.287508347Z" level=info msg="CreateContainer within sandbox \"198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b\"" May 16 00:01:02.288053 containerd[1523]: time="2025-05-16T00:01:02.288029914Z" level=info msg="StartContainer for \"5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b\"" May 16 00:01:02.288964 containerd[1523]: time="2025-05-16T00:01:02.288935516Z" level=info msg="connecting to shim 5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b" address="unix:///run/containerd/s/80280cc260fdc457603a85096b717c36f783ebb1787a4ce9d5fbae9128383592" protocol=ttrpc version=3 May 16 00:01:02.308463 systemd[1]: Started cri-containerd-5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b.scope - libcontainer container 5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b. May 16 00:01:02.336304 systemd[1]: cri-containerd-5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b.scope: Deactivated successfully. 
May 16 00:01:02.337081 containerd[1523]: time="2025-05-16T00:01:02.336816724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b\" id:\"5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b\" pid:4716 exited_at:{seconds:1747353662 nanos:336569727}" May 16 00:01:02.339225 containerd[1523]: time="2025-05-16T00:01:02.339197937Z" level=info msg="received exit event container_id:\"5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b\" id:\"5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b\" pid:4716 exited_at:{seconds:1747353662 nanos:336569727}" May 16 00:01:02.347176 containerd[1523]: time="2025-05-16T00:01:02.347137614Z" level=info msg="StartContainer for \"5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b\" returns successfully" May 16 00:01:02.361737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ee7eee23d7c0751eca31ba66db8e2a4e611f357599cb31281be67a2b7b2e76b-rootfs.mount: Deactivated successfully. May 16 00:01:03.264065 kubelet[2682]: E0516 00:01:03.264021 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:01:03.265771 containerd[1523]: time="2025-05-16T00:01:03.265733373Z" level=info msg="CreateContainer within sandbox \"198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:01:03.336724 containerd[1523]: time="2025-05-16T00:01:03.336660273Z" level=info msg="Container 58b6c9a4c6400de8e4a82a7f0264d434d4117678da88230866daa3a5e07dc63e: CDI devices from CRI Config.CDIDevices: []" May 16 00:01:03.339192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468899372.mount: Deactivated successfully. 
May 16 00:01:03.447267 containerd[1523]: time="2025-05-16T00:01:03.447223829Z" level=info msg="CreateContainer within sandbox \"198d1257380bdd2fab25cc2ee96bdd6e5d0f88542c865615931c41b4dc0e7ecb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58b6c9a4c6400de8e4a82a7f0264d434d4117678da88230866daa3a5e07dc63e\""
May 16 00:01:03.447755 containerd[1523]: time="2025-05-16T00:01:03.447734144Z" level=info msg="StartContainer for \"58b6c9a4c6400de8e4a82a7f0264d434d4117678da88230866daa3a5e07dc63e\""
May 16 00:01:03.448672 containerd[1523]: time="2025-05-16T00:01:03.448640708Z" level=info msg="connecting to shim 58b6c9a4c6400de8e4a82a7f0264d434d4117678da88230866daa3a5e07dc63e" address="unix:///run/containerd/s/80280cc260fdc457603a85096b717c36f783ebb1787a4ce9d5fbae9128383592" protocol=ttrpc version=3
May 16 00:01:03.469444 systemd[1]: Started cri-containerd-58b6c9a4c6400de8e4a82a7f0264d434d4117678da88230866daa3a5e07dc63e.scope - libcontainer container 58b6c9a4c6400de8e4a82a7f0264d434d4117678da88230866daa3a5e07dc63e.
May 16 00:01:03.634667 containerd[1523]: time="2025-05-16T00:01:03.634537898Z" level=info msg="StartContainer for \"58b6c9a4c6400de8e4a82a7f0264d434d4117678da88230866daa3a5e07dc63e\" returns successfully"
May 16 00:01:03.712597 containerd[1523]: time="2025-05-16T00:01:03.712539516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58b6c9a4c6400de8e4a82a7f0264d434d4117678da88230866daa3a5e07dc63e\" id:\"4740941a0597ea1f3b8282f29a53dd7e3b83961af762288cce8c7b6e5d0ced9a\" pid:4792 exited_at:{seconds:1747353663 nanos:712228067}"
May 16 00:01:03.905347 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 16 00:01:04.269781 kubelet[2682]: E0516 00:01:04.269628 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:01:04.284538 kubelet[2682]: I0516 00:01:04.284439 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kt9c6" podStartSLOduration=6.284418687 podStartE2EDuration="6.284418687s" podCreationTimestamp="2025-05-16 00:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:01:04.284357681 +0000 UTC m=+110.175159038" watchObservedRunningTime="2025-05-16 00:01:04.284418687 +0000 UTC m=+110.175220014"
May 16 00:01:05.325004 kubelet[2682]: E0516 00:01:05.324931 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:01:06.089684 containerd[1523]: time="2025-05-16T00:01:06.089631289Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58b6c9a4c6400de8e4a82a7f0264d434d4117678da88230866daa3a5e07dc63e\" id:\"bd1b74abacd0773994926aa5a79f1f081552ac6f314f19ae565dcd4499ea7b8f\" pid:5041 exit_status:1 exited_at:{seconds:1747353666 nanos:88931747}"
May 16 00:01:07.139017 systemd-networkd[1411]: lxc_health: Link UP
May 16 00:01:07.139360 systemd-networkd[1411]: lxc_health: Gained carrier
May 16 00:01:07.326446 kubelet[2682]: E0516 00:01:07.326386 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:01:08.223096 containerd[1523]: time="2025-05-16T00:01:08.222762840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58b6c9a4c6400de8e4a82a7f0264d434d4117678da88230866daa3a5e07dc63e\" id:\"ac3dc60af714ace775a8e0e67a44709fd84f5c999a42ab3426b4abc17d850737\" pid:5354 exited_at:{seconds:1747353668 nanos:222249210}"
May 16 00:01:08.279015 kubelet[2682]: E0516 00:01:08.278984 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:01:09.041431 systemd-networkd[1411]: lxc_health: Gained IPv6LL
May 16 00:01:09.283679 kubelet[2682]: E0516 00:01:09.283617 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:01:10.410657 containerd[1523]: time="2025-05-16T00:01:10.410548794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58b6c9a4c6400de8e4a82a7f0264d434d4117678da88230866daa3a5e07dc63e\" id:\"35ac8888915958f6916f37fb5099064aaa21fdcde02ed3dae5470db5488d7af1\" pid:5385 exited_at:{seconds:1747353670 nanos:410132237}"
May 16 00:01:12.548717 containerd[1523]: time="2025-05-16T00:01:12.548657006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58b6c9a4c6400de8e4a82a7f0264d434d4117678da88230866daa3a5e07dc63e\" id:\"72e9dcfa36a6a5845391aec15af46c99561659ff093f623340ffb1817187d6e5\" pid:5417 exited_at:{seconds:1747353672 nanos:548067262}"
May 16 00:01:12.566137 sshd[4525]: Connection closed by 10.0.0.1 port 39558
May 16 00:01:12.566587 sshd-session[4517]: pam_unix(sshd:session): session closed for user core
May 16 00:01:12.571587 systemd[1]: sshd@30-10.0.0.27:22-10.0.0.1:39558.service: Deactivated successfully.
May 16 00:01:12.573836 systemd[1]: session-31.scope: Deactivated successfully.
May 16 00:01:12.574914 systemd-logind[1505]: Session 31 logged out. Waiting for processes to exit.
May 16 00:01:12.576398 systemd-logind[1505]: Removed session 31.
May 16 00:01:14.201440 containerd[1523]: time="2025-05-16T00:01:14.201393099Z" level=info msg="StopPodSandbox for \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\""
May 16 00:01:14.201965 containerd[1523]: time="2025-05-16T00:01:14.201558602Z" level=info msg="TearDown network for sandbox \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\" successfully"
May 16 00:01:14.201965 containerd[1523]: time="2025-05-16T00:01:14.201573710Z" level=info msg="StopPodSandbox for \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\" returns successfully"
May 16 00:01:14.202064 containerd[1523]: time="2025-05-16T00:01:14.201967494Z" level=info msg="RemovePodSandbox for \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\""
May 16 00:01:14.202064 containerd[1523]: time="2025-05-16T00:01:14.201999294Z" level=info msg="Forcibly stopping sandbox \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\""
May 16 00:01:14.202129 containerd[1523]: time="2025-05-16T00:01:14.202077612Z" level=info msg="TearDown network for sandbox \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\" successfully"
May 16 00:01:14.203934 containerd[1523]: time="2025-05-16T00:01:14.203905826Z" level=info msg="Ensure that sandbox 9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090 in task-service has been cleanup successfully"
May 16 00:01:14.278125 containerd[1523]: time="2025-05-16T00:01:14.278085995Z" level=info msg="RemovePodSandbox \"9ce190bf600220883ad684923b2b65da234f161abd1f9eecff6bee75ea500090\" returns successfully"
May 16 00:01:14.278486 containerd[1523]: time="2025-05-16T00:01:14.278457848Z" level=info msg="StopPodSandbox for \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\""
May 16 00:01:14.278599 containerd[1523]: time="2025-05-16T00:01:14.278572184Z" level=info msg="TearDown network for sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" successfully"
May 16 00:01:14.278599 containerd[1523]: time="2025-05-16T00:01:14.278594035Z" level=info msg="StopPodSandbox for \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" returns successfully"
May 16 00:01:14.278863 containerd[1523]: time="2025-05-16T00:01:14.278821716Z" level=info msg="RemovePodSandbox for \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\""
May 16 00:01:14.278863 containerd[1523]: time="2025-05-16T00:01:14.278851171Z" level=info msg="Forcibly stopping sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\""
May 16 00:01:14.279103 containerd[1523]: time="2025-05-16T00:01:14.278926072Z" level=info msg="TearDown network for sandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" successfully"
May 16 00:01:14.291694 containerd[1523]: time="2025-05-16T00:01:14.291629462Z" level=info msg="Ensure that sandbox 1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4 in task-service has been cleanup successfully"
May 16 00:01:14.365344 containerd[1523]: time="2025-05-16T00:01:14.365270634Z" level=info msg="RemovePodSandbox \"1c1d701defb9c7a89ffcda204b9e2a7355fcc9d9a61f3cdc36ea62203c70f7d4\" returns successfully"