May 16 16:44:36.844269 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 14:52:24 -00 2025
May 16 16:44:36.844289 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e3be1f8a550c199f4f838f30cb661b44d98bde818b7f263cba125cc457a9c137
May 16 16:44:36.844300 kernel: BIOS-provided physical RAM map:
May 16 16:44:36.844307 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 16:44:36.844313 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 16:44:36.844319 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 16:44:36.844327 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 16:44:36.844334 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 16:44:36.844342 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 16:44:36.844349 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 16:44:36.844355 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 16 16:44:36.844362 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 16:44:36.844368 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 16:44:36.844375 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 16:44:36.844385 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 16:44:36.844392 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 16:44:36.844399 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 16 16:44:36.844406 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 16 16:44:36.844413 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 16 16:44:36.844434 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 16 16:44:36.844441 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 16:44:36.844448 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 16:44:36.844455 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 16:44:36.844462 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 16:44:36.844469 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 16:44:36.844478 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 16:44:36.844485 kernel: NX (Execute Disable) protection: active
May 16 16:44:36.844492 kernel: APIC: Static calls initialized
May 16 16:44:36.844499 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 16 16:44:36.844506 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 16 16:44:36.844513 kernel: extended physical RAM map:
May 16 16:44:36.844520 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 16:44:36.844527 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 16:44:36.844534 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 16:44:36.844541 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 16:44:36.844548 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 16:44:36.844557 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 16:44:36.844564 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 16:44:36.844571 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 16 16:44:36.844578 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 16 16:44:36.844588 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 16 16:44:36.844596 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 16 16:44:36.844605 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 16 16:44:36.844613 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 16:44:36.844620 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 16:44:36.844627 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 16:44:36.844634 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 16:44:36.844642 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 16:44:36.844649 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 16 16:44:36.844656 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 16 16:44:36.844663 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 16 16:44:36.844672 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 16 16:44:36.844680 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 16:44:36.844687 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 16:44:36.844694 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 16:44:36.844701 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 16:44:36.844709 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 16:44:36.844716 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 16:44:36.844723 kernel: efi: EFI v2.7 by EDK II
May 16 16:44:36.844730 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 16 16:44:36.844738 kernel: random: crng init done
May 16 16:44:36.844745 kernel: efi: Remove mem149: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 16 16:44:36.844752 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 16 16:44:36.844762 kernel: secureboot: Secure boot disabled
May 16 16:44:36.844769 kernel: SMBIOS 2.8 present.
May 16 16:44:36.844776 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 16 16:44:36.844783 kernel: DMI: Memory slots populated: 1/1
May 16 16:44:36.844790 kernel: Hypervisor detected: KVM
May 16 16:44:36.844798 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 16 16:44:36.844805 kernel: kvm-clock: using sched offset of 4438746109 cycles
May 16 16:44:36.844821 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 16 16:44:36.844829 kernel: tsc: Detected 2794.748 MHz processor
May 16 16:44:36.844837 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 16:44:36.844844 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 16:44:36.844854 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 16 16:44:36.844862 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 16 16:44:36.844870 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 16:44:36.844877 kernel: Using GB pages for direct mapping
May 16 16:44:36.844884 kernel: ACPI: Early table checksum verification disabled
May 16 16:44:36.844892 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 16 16:44:36.844899 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 16 16:44:36.844907 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:44:36.844914 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:44:36.844924 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 16 16:44:36.844931 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:44:36.844939 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:44:36.844946 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:44:36.844954 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:44:36.844961 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 16 16:44:36.844968 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 16 16:44:36.844976 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 16 16:44:36.844986 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 16 16:44:36.844993 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 16 16:44:36.845000 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 16 16:44:36.845008 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 16 16:44:36.845015 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 16 16:44:36.845022 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 16 16:44:36.845029 kernel: No NUMA configuration found
May 16 16:44:36.845037 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 16 16:44:36.845044 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 16 16:44:36.845052 kernel: Zone ranges:
May 16 16:44:36.845061 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 16 16:44:36.845068 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 16 16:44:36.845076 kernel: Normal empty
May 16 16:44:36.845083 kernel: Device empty
May 16 16:44:36.845090 kernel: Movable zone start for each node
May 16 16:44:36.845098 kernel: Early memory node ranges
May 16 16:44:36.845105 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 16 16:44:36.845112 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 16 16:44:36.845120 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 16 16:44:36.845129 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 16 16:44:36.845136 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 16 16:44:36.845144 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 16 16:44:36.845151 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 16 16:44:36.845158 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 16 16:44:36.845166 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 16 16:44:36.845173 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 16:44:36.845181 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 16 16:44:36.845197 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 16 16:44:36.845205 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 16:44:36.845212 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 16 16:44:36.845220 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 16 16:44:36.845230 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 16 16:44:36.845237 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 16 16:44:36.845245 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 16 16:44:36.845253 kernel: ACPI: PM-Timer IO Port: 0x608
May 16 16:44:36.845261 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 16 16:44:36.845270 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 16 16:44:36.845278 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 16 16:44:36.845286 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 16 16:44:36.845294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 16 16:44:36.845301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 16 16:44:36.845309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 16 16:44:36.845317 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 16:44:36.845324 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 16 16:44:36.845332 kernel: TSC deadline timer available
May 16 16:44:36.845340 kernel: CPU topo: Max. logical packages: 1
May 16 16:44:36.845350 kernel: CPU topo: Max. logical dies: 1
May 16 16:44:36.845357 kernel: CPU topo: Max. dies per package: 1
May 16 16:44:36.845365 kernel: CPU topo: Max. threads per core: 1
May 16 16:44:36.845373 kernel: CPU topo: Num. cores per package: 4
May 16 16:44:36.845380 kernel: CPU topo: Num. threads per package: 4
May 16 16:44:36.845388 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 16 16:44:36.845396 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 16 16:44:36.845403 kernel: kvm-guest: KVM setup pv remote TLB flush
May 16 16:44:36.845411 kernel: kvm-guest: setup PV sched yield
May 16 16:44:36.845439 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 16 16:44:36.845447 kernel: Booting paravirtualized kernel on KVM
May 16 16:44:36.845455 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 16:44:36.845463 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 16 16:44:36.845471 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 16 16:44:36.845479 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 16 16:44:36.845486 kernel: pcpu-alloc: [0] 0 1 2 3
May 16 16:44:36.845494 kernel: kvm-guest: PV spinlocks enabled
May 16 16:44:36.845502 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 16 16:44:36.845513 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e3be1f8a550c199f4f838f30cb661b44d98bde818b7f263cba125cc457a9c137
May 16 16:44:36.845522 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 16:44:36.845529 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 16:44:36.845537 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 16:44:36.845545 kernel: Fallback order for Node 0: 0
May 16 16:44:36.845553 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 16 16:44:36.845560 kernel: Policy zone: DMA32
May 16 16:44:36.845568 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 16:44:36.845578 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 16:44:36.845586 kernel: ftrace: allocating 40065 entries in 157 pages
May 16 16:44:36.845594 kernel: ftrace: allocated 157 pages with 5 groups
May 16 16:44:36.845601 kernel: Dynamic Preempt: voluntary
May 16 16:44:36.845609 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 16:44:36.845617 kernel: rcu: RCU event tracing is enabled.
May 16 16:44:36.845625 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 16:44:36.845633 kernel: Trampoline variant of Tasks RCU enabled.
May 16 16:44:36.845641 kernel: Rude variant of Tasks RCU enabled.
May 16 16:44:36.845651 kernel: Tracing variant of Tasks RCU enabled.
May 16 16:44:36.845659 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 16:44:36.845667 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 16:44:36.845675 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:44:36.845683 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:44:36.845691 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:44:36.845699 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 16 16:44:36.845706 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 16:44:36.845714 kernel: Console: colour dummy device 80x25
May 16 16:44:36.845724 kernel: printk: legacy console [ttyS0] enabled
May 16 16:44:36.845731 kernel: ACPI: Core revision 20240827
May 16 16:44:36.845739 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 16 16:44:36.845747 kernel: APIC: Switch to symmetric I/O mode setup
May 16 16:44:36.845755 kernel: x2apic enabled
May 16 16:44:36.845762 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 16:44:36.845770 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 16 16:44:36.845778 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 16 16:44:36.845786 kernel: kvm-guest: setup PV IPIs
May 16 16:44:36.845796 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 16:44:36.845804 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 16 16:44:36.845819 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 16 16:44:36.845827 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 16 16:44:36.845835 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 16 16:44:36.845842 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 16 16:44:36.845851 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 16:44:36.845858 kernel: Spectre V2 : Mitigation: Retpolines
May 16 16:44:36.845866 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 16 16:44:36.845876 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 16 16:44:36.845884 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 16 16:44:36.845892 kernel: RETBleed: Mitigation: untrained return thunk
May 16 16:44:36.845900 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 16 16:44:36.845908 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 16 16:44:36.845916 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 16 16:44:36.845924 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 16 16:44:36.845932 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 16 16:44:36.845942 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 16 16:44:36.845950 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 16 16:44:36.845957 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 16 16:44:36.845965 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 16 16:44:36.845973 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 16 16:44:36.845981 kernel: Freeing SMP alternatives memory: 32K
May 16 16:44:36.845988 kernel: pid_max: default: 32768 minimum: 301
May 16 16:44:36.845996 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 16 16:44:36.846004 kernel: landlock: Up and running.
May 16 16:44:36.846014 kernel: SELinux: Initializing.
May 16 16:44:36.846022 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 16:44:36.846030 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 16:44:36.846037 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 16 16:44:36.846045 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 16 16:44:36.846053 kernel: ... version:                0
May 16 16:44:36.846061 kernel: ... bit width:              48
May 16 16:44:36.846068 kernel: ... generic registers:      6
May 16 16:44:36.846076 kernel: ... value mask:             0000ffffffffffff
May 16 16:44:36.846086 kernel: ... max period:             00007fffffffffff
May 16 16:44:36.846093 kernel: ... fixed-purpose events:   0
May 16 16:44:36.846101 kernel: ... event mask:             000000000000003f
May 16 16:44:36.846109 kernel: signal: max sigframe size: 1776
May 16 16:44:36.846116 kernel: rcu: Hierarchical SRCU implementation.
May 16 16:44:36.846124 kernel: rcu: Max phase no-delay instances is 400.
May 16 16:44:36.846132 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 16 16:44:36.846140 kernel: smp: Bringing up secondary CPUs ...
May 16 16:44:36.846148 kernel: smpboot: x86: Booting SMP configuration:
May 16 16:44:36.846157 kernel: .... node #0, CPUs: #1 #2 #3
May 16 16:44:36.846165 kernel: smp: Brought up 1 node, 4 CPUs
May 16 16:44:36.846173 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 16 16:44:36.846181 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 137196K reserved, 0K cma-reserved)
May 16 16:44:36.846189 kernel: devtmpfs: initialized
May 16 16:44:36.846196 kernel: x86/mm: Memory block size: 128MB
May 16 16:44:36.846204 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 16 16:44:36.846212 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 16 16:44:36.846220 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 16 16:44:36.846230 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 16 16:44:36.846237 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 16 16:44:36.846245 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 16 16:44:36.846253 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 16:44:36.846261 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 16:44:36.846269 kernel: pinctrl core: initialized pinctrl subsystem
May 16 16:44:36.846277 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 16:44:36.846284 kernel: audit: initializing netlink subsys (disabled)
May 16 16:44:36.846292 kernel: audit: type=2000 audit(1747413875.213:1): state=initialized audit_enabled=0 res=1
May 16 16:44:36.846302 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 16:44:36.846310 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 16:44:36.846317 kernel: cpuidle: using governor menu
May 16 16:44:36.846325 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 16:44:36.846333 kernel: dca service started, version 1.12.1
May 16 16:44:36.846341 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 16 16:44:36.846348 kernel: PCI: Using configuration type 1 for base access
May 16 16:44:36.846356 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 16:44:36.846364 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 16:44:36.846374 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 16 16:44:36.846382 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 16:44:36.846389 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 16:44:36.846397 kernel: ACPI: Added _OSI(Module Device)
May 16 16:44:36.846405 kernel: ACPI: Added _OSI(Processor Device)
May 16 16:44:36.846413 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 16:44:36.846434 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 16:44:36.846442 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 16:44:36.846449 kernel: ACPI: Interpreter enabled
May 16 16:44:36.846459 kernel: ACPI: PM: (supports S0 S3 S5)
May 16 16:44:36.846467 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 16:44:36.846475 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 16:44:36.846483 kernel: PCI: Using E820 reservations for host bridge windows
May 16 16:44:36.846490 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 16 16:44:36.846498 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 16:44:36.846671 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 16:44:36.846796 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 16 16:44:36.846929 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 16 16:44:36.846940 kernel: PCI host bridge to bus 0000:00
May 16 16:44:36.847065 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 16 16:44:36.847175 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 16 16:44:36.847287 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 16:44:36.847395 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 16 16:44:36.847527 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 16 16:44:36.847640 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 16 16:44:36.847749 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 16:44:36.847900 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 16 16:44:36.848031 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 16 16:44:36.848152 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 16 16:44:36.848330 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 16 16:44:36.848508 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 16 16:44:36.848628 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 16:44:36.848756 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 16 16:44:36.848886 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 16 16:44:36.849006 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 16 16:44:36.849126 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 16 16:44:36.849253 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 16 16:44:36.849380 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 16 16:44:36.849517 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 16 16:44:36.849637 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 16 16:44:36.849777 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 16 16:44:36.849932 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 16 16:44:36.850112 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 16 16:44:36.850259 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 16 16:44:36.850400 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 16 16:44:36.850563 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 16 16:44:36.850692 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 16 16:44:36.850833 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 16 16:44:36.850953 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 16 16:44:36.851079 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 16 16:44:36.851237 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 16 16:44:36.851362 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 16 16:44:36.851372 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 16 16:44:36.851381 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 16 16:44:36.851389 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 16:44:36.851396 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 16 16:44:36.851404 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 16 16:44:36.851412 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 16 16:44:36.851441 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 16 16:44:36.851449 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 16 16:44:36.851457 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 16 16:44:36.851464 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 16 16:44:36.851472 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 16 16:44:36.851480 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 16 16:44:36.851487 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 16 16:44:36.851495 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 16 16:44:36.851505 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 16 16:44:36.851516 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 16 16:44:36.851525 kernel: iommu: Default domain type: Translated
May 16 16:44:36.851533 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 16:44:36.851541 kernel: efivars: Registered efivars operations
May 16 16:44:36.851548 kernel: PCI: Using ACPI for IRQ routing
May 16 16:44:36.851556 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 16:44:36.851564 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 16 16:44:36.851571 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 16 16:44:36.851579 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 16 16:44:36.851586 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 16 16:44:36.851596 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 16 16:44:36.851604 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 16 16:44:36.851611 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 16 16:44:36.851636 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 16 16:44:36.851821 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 16 16:44:36.851945 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 16 16:44:36.852064 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 16:44:36.852084 kernel: vgaarb: loaded
May 16 16:44:36.852093 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 16 16:44:36.852100 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 16 16:44:36.852108 kernel: clocksource: Switched to clocksource kvm-clock
May 16 16:44:36.852116 kernel: VFS: Disk quotas dquot_6.6.0
May 16 16:44:36.852124 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 16:44:36.852132 kernel: pnp: PnP ACPI init
May 16 16:44:36.852285 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 16 16:44:36.852302 kernel: pnp: PnP ACPI: found 6 devices
May 16 16:44:36.852311 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 16:44:36.852319 kernel: NET: Registered PF_INET protocol family
May 16 16:44:36.852327 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 16:44:36.852335 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 16:44:36.852343 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 16:44:36.852351 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 16:44:36.852359 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 16:44:36.852367 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 16:44:36.852378 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 16:44:36.852386 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 16:44:36.852394 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 16:44:36.852402 kernel: NET: Registered PF_XDP protocol family
May 16 16:44:36.852554 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 16 16:44:36.852710 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 16 16:44:36.852832 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 16 16:44:36.852943 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 16 16:44:36.853057 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 16 16:44:36.853165 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 16 16:44:36.853273 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 16 16:44:36.853381 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 16 16:44:36.853392 kernel: PCI: CLS 0 bytes, default 64
May 16 16:44:36.853400 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 16 16:44:36.853408 kernel: Initialise system trusted keyrings
May 16 16:44:36.853435 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 16:44:36.853444 kernel: Key type asymmetric registered
May 16 16:44:36.853452 kernel: Asymmetric key parser 'x509' registered
May 16 16:44:36.853460 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 16 16:44:36.853468 kernel: io scheduler mq-deadline registered
May 16 16:44:36.853476 kernel: io scheduler kyber registered
May 16 16:44:36.853484 kernel: io scheduler bfq registered
May 16 16:44:36.853492 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 16 16:44:36.853504 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 16 16:44:36.853512 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 16 16:44:36.853522 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 16 16:44:36.853530 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 16:44:36.853538 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 16 16:44:36.853546 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 16 16:44:36.853554 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 16 16:44:36.853562 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 16 16:44:36.853690 kernel: rtc_cmos 00:04: RTC can wake from S4
May 16 16:44:36.853819 kernel: rtc_cmos 
00:04: registered as rtc0 May 16 16:44:36.853934 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T16:44:36 UTC (1747413876) May 16 16:44:36.854046 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 16 16:44:36.854056 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 16 16:44:36.854065 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 16 16:44:36.854073 kernel: efifb: probing for efifb May 16 16:44:36.854081 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 16 16:44:36.854092 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 16 16:44:36.854100 kernel: efifb: scrolling: redraw May 16 16:44:36.854108 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 16 16:44:36.854116 kernel: Console: switching to colour frame buffer device 160x50 May 16 16:44:36.854124 kernel: fb0: EFI VGA frame buffer device May 16 16:44:36.854132 kernel: pstore: Using crash dump compression: deflate May 16 16:44:36.854140 kernel: pstore: Registered efi_pstore as persistent store backend May 16 16:44:36.854149 kernel: NET: Registered PF_INET6 protocol family May 16 16:44:36.854157 kernel: Segment Routing with IPv6 May 16 16:44:36.854164 kernel: In-situ OAM (IOAM) with IPv6 May 16 16:44:36.854175 kernel: NET: Registered PF_PACKET protocol family May 16 16:44:36.854183 kernel: Key type dns_resolver registered May 16 16:44:36.854191 kernel: IPI shorthand broadcast: enabled May 16 16:44:36.854199 kernel: sched_clock: Marking stable (2799001850, 158578182)->(2992171158, -34591126) May 16 16:44:36.854207 kernel: registered taskstats version 1 May 16 16:44:36.854215 kernel: Loading compiled-in X.509 certificates May 16 16:44:36.854223 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 310304ddc2cf6c43796c9bf79d11c0543afdf71f' May 16 16:44:36.854231 kernel: Demotion targets for Node 0: null May 16 16:44:36.854239 kernel: Key 
type .fscrypt registered May 16 16:44:36.854249 kernel: Key type fscrypt-provisioning registered May 16 16:44:36.854257 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 16:44:36.854265 kernel: ima: Allocated hash algorithm: sha1 May 16 16:44:36.854273 kernel: ima: No architecture policies found May 16 16:44:36.854281 kernel: clk: Disabling unused clocks May 16 16:44:36.854289 kernel: Warning: unable to open an initial console. May 16 16:44:36.854297 kernel: Freeing unused kernel image (initmem) memory: 54416K May 16 16:44:36.854305 kernel: Write protecting the kernel read-only data: 24576k May 16 16:44:36.854316 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 16 16:44:36.854324 kernel: Run /init as init process May 16 16:44:36.854332 kernel: with arguments: May 16 16:44:36.854339 kernel: /init May 16 16:44:36.854347 kernel: with environment: May 16 16:44:36.854355 kernel: HOME=/ May 16 16:44:36.854363 kernel: TERM=linux May 16 16:44:36.854371 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 16:44:36.854380 systemd[1]: Successfully made /usr/ read-only. May 16 16:44:36.854393 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 16:44:36.854403 systemd[1]: Detected virtualization kvm. May 16 16:44:36.854411 systemd[1]: Detected architecture x86-64. May 16 16:44:36.854450 systemd[1]: Running in initrd. May 16 16:44:36.854459 systemd[1]: No hostname configured, using default hostname. May 16 16:44:36.854468 systemd[1]: Hostname set to . May 16 16:44:36.854476 systemd[1]: Initializing machine ID from VM UUID. May 16 16:44:36.854488 systemd[1]: Queued start job for default target initrd.target. 
May 16 16:44:36.854497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 16:44:36.854506 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 16:44:36.854515 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 16:44:36.854523 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 16:44:36.854532 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 16:44:36.854541 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 16:44:36.854554 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 16:44:36.854562 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 16:44:36.854571 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 16:44:36.854579 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 16:44:36.854588 systemd[1]: Reached target paths.target - Path Units. May 16 16:44:36.854596 systemd[1]: Reached target slices.target - Slice Units. May 16 16:44:36.854605 systemd[1]: Reached target swap.target - Swaps. May 16 16:44:36.854613 systemd[1]: Reached target timers.target - Timer Units. May 16 16:44:36.854621 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 16:44:36.854632 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 16:44:36.854641 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 16:44:36.854649 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
May 16 16:44:36.854658 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 16:44:36.854666 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 16:44:36.854675 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 16:44:36.854683 systemd[1]: Reached target sockets.target - Socket Units. May 16 16:44:36.854692 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 16:44:36.854703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 16:44:36.854711 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 16:44:36.854721 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 16 16:44:36.854729 systemd[1]: Starting systemd-fsck-usr.service... May 16 16:44:36.854738 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 16:44:36.854746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 16:44:36.854754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 16:44:36.854763 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 16:44:36.854774 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 16:44:36.854783 systemd[1]: Finished systemd-fsck-usr.service. May 16 16:44:36.854820 systemd-journald[220]: Collecting audit messages is disabled. May 16 16:44:36.854845 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 16:44:36.854857 systemd-journald[220]: Journal started May 16 16:44:36.854882 systemd-journald[220]: Runtime Journal (/run/log/journal/dcbbc5b2a76c4019bf6720995fb2cb24) is 6M, max 48.5M, 42.4M free. 
May 16 16:44:36.850270 systemd-modules-load[221]: Inserted module 'overlay' May 16 16:44:36.857724 systemd[1]: Started systemd-journald.service - Journal Service. May 16 16:44:36.857962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 16:44:36.860795 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 16:44:36.866932 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 16:44:36.869390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 16:44:36.876543 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 16:44:36.882460 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 16:44:36.885076 systemd-modules-load[221]: Inserted module 'br_netfilter' May 16 16:44:36.885449 kernel: Bridge firewalling registered May 16 16:44:36.886991 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 16:44:36.888164 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 16 16:44:36.889125 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 16:44:36.894548 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 16:44:36.908571 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 16:44:36.909336 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 16:44:36.912993 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 16:44:36.926898 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 16 16:44:36.928891 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 16:44:36.941614 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e3be1f8a550c199f4f838f30cb661b44d98bde818b7f263cba125cc457a9c137 May 16 16:44:36.973678 systemd-resolved[264]: Positive Trust Anchors: May 16 16:44:36.973695 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 16:44:36.973727 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 16:44:36.976308 systemd-resolved[264]: Defaulting to hostname 'linux'. May 16 16:44:36.977458 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 16:44:36.982719 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 16:44:37.056457 kernel: SCSI subsystem initialized May 16 16:44:37.065451 kernel: Loading iSCSI transport class v2.0-870. 
May 16 16:44:37.076456 kernel: iscsi: registered transport (tcp) May 16 16:44:37.098615 kernel: iscsi: registered transport (qla4xxx) May 16 16:44:37.098645 kernel: QLogic iSCSI HBA Driver May 16 16:44:37.120111 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 16:44:37.145115 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 16:44:37.146747 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 16:44:37.206508 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 16:44:37.209174 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 16:44:37.270457 kernel: raid6: avx2x4 gen() 29882 MB/s May 16 16:44:37.287445 kernel: raid6: avx2x2 gen() 31239 MB/s May 16 16:44:37.304532 kernel: raid6: avx2x1 gen() 25801 MB/s May 16 16:44:37.304556 kernel: raid6: using algorithm avx2x2 gen() 31239 MB/s May 16 16:44:37.322559 kernel: raid6: .... xor() 19866 MB/s, rmw enabled May 16 16:44:37.322578 kernel: raid6: using avx2x2 recovery algorithm May 16 16:44:37.354452 kernel: xor: automatically using best checksumming function avx May 16 16:44:37.518463 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 16:44:37.528322 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 16:44:37.530853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 16:44:37.571270 systemd-udevd[473]: Using default interface naming scheme 'v255'. May 16 16:44:37.578122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 16:44:37.581991 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 16:44:37.606453 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation May 16 16:44:37.638397 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 16 16:44:37.640889 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 16:44:37.719775 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 16:44:37.724214 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 16:44:37.757352 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 16 16:44:37.801285 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 16:44:37.801560 kernel: cryptd: max_cpu_qlen set to 1000 May 16 16:44:37.801573 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 16 16:44:37.801584 kernel: libata version 3.00 loaded. May 16 16:44:37.801595 kernel: AES CTR mode by8 optimization enabled May 16 16:44:37.801605 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 16:44:37.801622 kernel: GPT:9289727 != 19775487 May 16 16:44:37.801632 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 16:44:37.801642 kernel: GPT:9289727 != 19775487 May 16 16:44:37.801655 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 16:44:37.801669 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 16:44:37.793918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 16:44:37.794072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 16:44:37.796304 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 16:44:37.800546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 16 16:44:37.810124 kernel: ahci 0000:00:1f.2: version 3.0 May 16 16:44:37.838757 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 16 16:44:37.838779 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 16 16:44:37.838980 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 16 16:44:37.839150 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 16 16:44:37.839316 kernel: scsi host0: ahci May 16 16:44:37.839572 kernel: scsi host1: ahci May 16 16:44:37.839749 kernel: scsi host2: ahci May 16 16:44:37.839939 kernel: scsi host3: ahci May 16 16:44:37.840110 kernel: scsi host4: ahci May 16 16:44:37.840278 kernel: scsi host5: ahci May 16 16:44:37.840459 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 16 16:44:37.840475 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 16 16:44:37.840494 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 16 16:44:37.840508 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 16 16:44:37.840522 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 16 16:44:37.840536 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 16 16:44:37.849760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 16:44:37.860222 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 16:44:37.876031 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 16:44:37.883811 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 16:44:37.884088 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
May 16 16:44:37.893685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 16:44:37.894940 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 16:44:37.932632 disk-uuid[634]: Primary Header is updated. May 16 16:44:37.932632 disk-uuid[634]: Secondary Entries is updated. May 16 16:44:37.932632 disk-uuid[634]: Secondary Header is updated. May 16 16:44:37.936470 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 16:44:38.147624 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 16 16:44:38.147714 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 16 16:44:38.147731 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 16 16:44:38.147745 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 16 16:44:38.148456 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 16 16:44:38.149454 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 16 16:44:38.150444 kernel: ata3.00: applying bridge limits May 16 16:44:38.150472 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 16 16:44:38.151454 kernel: ata3.00: configured for UDMA/100 May 16 16:44:38.152452 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 16 16:44:38.207026 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 16 16:44:38.224124 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 16 16:44:38.224143 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 16 16:44:38.698165 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 16:44:38.700978 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 16:44:38.703533 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 16:44:38.705873 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 16:44:38.708705 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
May 16 16:44:38.746127 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 16:44:38.945369 disk-uuid[635]: The operation has completed successfully. May 16 16:44:38.947019 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 16:44:38.978594 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 16:44:38.978750 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 16:44:39.020922 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 16:44:39.043720 sh[663]: Success May 16 16:44:39.061660 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 16:44:39.061700 kernel: device-mapper: uevent: version 1.0.3 May 16 16:44:39.062812 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 16 16:44:39.072526 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 16 16:44:39.104400 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 16:44:39.106628 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 16:44:39.120906 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 16 16:44:39.129830 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 16 16:44:39.129890 kernel: BTRFS: device fsid 85b2a34c-237f-4a0a-87d0-0a783de0f256 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (675) May 16 16:44:39.131285 kernel: BTRFS info (device dm-0): first mount of filesystem 85b2a34c-237f-4a0a-87d0-0a783de0f256 May 16 16:44:39.131311 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 16:44:39.132186 kernel: BTRFS info (device dm-0): using free-space-tree May 16 16:44:39.137768 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
May 16 16:44:39.140233 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 16 16:44:39.142691 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 16:44:39.145647 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 16:44:39.148796 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 16:44:39.173457 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (708) May 16 16:44:39.173517 kernel: BTRFS info (device vda6): first mount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d May 16 16:44:39.174948 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 16:44:39.174973 kernel: BTRFS info (device vda6): using free-space-tree May 16 16:44:39.183442 kernel: BTRFS info (device vda6): last unmount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d May 16 16:44:39.183992 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 16:44:39.185436 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 16 16:44:39.273556 ignition[748]: Ignition 2.21.0 May 16 16:44:39.273568 ignition[748]: Stage: fetch-offline May 16 16:44:39.273603 ignition[748]: no configs at "/usr/lib/ignition/base.d" May 16 16:44:39.273612 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 16:44:39.273689 ignition[748]: parsed url from cmdline: "" May 16 16:44:39.273693 ignition[748]: no config URL provided May 16 16:44:39.273698 ignition[748]: reading system config file "/usr/lib/ignition/user.ign" May 16 16:44:39.273706 ignition[748]: no config at "/usr/lib/ignition/user.ign" May 16 16:44:39.273726 ignition[748]: op(1): [started] loading QEMU firmware config module May 16 16:44:39.273731 ignition[748]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 16:44:39.284794 ignition[748]: op(1): [finished] loading QEMU firmware config module May 16 16:44:39.291772 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 16:44:39.295265 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 16:44:39.327203 ignition[748]: parsing config with SHA512: 167d1dbfae227361aa5bb1a474fef00dafa36ef892f6642db29c099f7220b33bdf9eb953199ecab209cee8046dc89b400e8bf740935d09929232e4afdb5487b4 May 16 16:44:39.332765 unknown[748]: fetched base config from "system" May 16 16:44:39.332778 unknown[748]: fetched user config from "qemu" May 16 16:44:39.333117 ignition[748]: fetch-offline: fetch-offline passed May 16 16:44:39.333166 ignition[748]: Ignition finished successfully May 16 16:44:39.338729 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 16:44:39.345632 systemd-networkd[853]: lo: Link UP May 16 16:44:39.345643 systemd-networkd[853]: lo: Gained carrier May 16 16:44:39.347178 systemd-networkd[853]: Enumeration completed May 16 16:44:39.347282 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 16 16:44:39.347544 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 16:44:39.347548 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 16:44:39.348355 systemd-networkd[853]: eth0: Link UP May 16 16:44:39.348359 systemd-networkd[853]: eth0: Gained carrier May 16 16:44:39.348367 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 16:44:39.348900 systemd[1]: Reached target network.target - Network. May 16 16:44:39.350462 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 16:44:39.351248 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 16:44:39.369486 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 16:44:39.385511 ignition[857]: Ignition 2.21.0 May 16 16:44:39.385525 ignition[857]: Stage: kargs May 16 16:44:39.385675 ignition[857]: no configs at "/usr/lib/ignition/base.d" May 16 16:44:39.385687 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 16:44:39.388022 ignition[857]: kargs: kargs passed May 16 16:44:39.388078 ignition[857]: Ignition finished successfully May 16 16:44:39.393274 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 16:44:39.395457 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 16 16:44:39.448329 ignition[867]: Ignition 2.21.0 May 16 16:44:39.448344 ignition[867]: Stage: disks May 16 16:44:39.448479 ignition[867]: no configs at "/usr/lib/ignition/base.d" May 16 16:44:39.448490 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 16:44:39.449569 ignition[867]: disks: disks passed May 16 16:44:39.450294 ignition[867]: Ignition finished successfully May 16 16:44:39.454522 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 16:44:39.473533 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 16:44:39.473808 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 16:44:39.474129 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 16:44:39.478726 systemd[1]: Reached target sysinit.target - System Initialization. May 16 16:44:39.479044 systemd[1]: Reached target basic.target - Basic System. May 16 16:44:39.480504 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 16:44:39.510695 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 16 16:44:39.518497 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 16:44:39.521149 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 16:44:39.636471 kernel: EXT4-fs (vda9): mounted filesystem 07293137-138a-42a3-a962-d767034e11a7 r/w with ordered data mode. Quota mode: none. May 16 16:44:39.637304 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 16:44:39.638859 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 16:44:39.641744 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 16:44:39.643513 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 16:44:39.644892 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
May 16 16:44:39.644932 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 16:44:39.644955 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 16:44:39.660847 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 16:44:39.662923 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 16:44:39.666445 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (885) May 16 16:44:39.669115 kernel: BTRFS info (device vda6): first mount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d May 16 16:44:39.669136 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 16:44:39.669148 kernel: BTRFS info (device vda6): using free-space-tree May 16 16:44:39.673257 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 16:44:39.702265 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory May 16 16:44:39.706453 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory May 16 16:44:39.711144 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory May 16 16:44:39.715810 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory May 16 16:44:39.803528 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 16:44:39.805787 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 16:44:39.806885 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 16:44:39.836444 kernel: BTRFS info (device vda6): last unmount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d May 16 16:44:39.852568 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 16 16:44:39.868007 ignition[1000]: INFO : Ignition 2.21.0 May 16 16:44:39.868007 ignition[1000]: INFO : Stage: mount May 16 16:44:39.869812 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 16:44:39.869812 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 16:44:39.874061 ignition[1000]: INFO : mount: mount passed May 16 16:44:39.874061 ignition[1000]: INFO : Ignition finished successfully May 16 16:44:39.876836 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 16:44:39.879120 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 16:44:40.128979 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 16:44:40.130641 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 16:44:40.250459 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1012) May 16 16:44:40.250503 kernel: BTRFS info (device vda6): first mount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d May 16 16:44:40.253133 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 16:44:40.253160 kernel: BTRFS info (device vda6): using free-space-tree May 16 16:44:40.256760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 16:44:40.285282 ignition[1029]: INFO : Ignition 2.21.0
May 16 16:44:40.285282 ignition[1029]: INFO : Stage: files
May 16 16:44:40.287845 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:44:40.287845 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:44:40.287845 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
May 16 16:44:40.287845 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 16:44:40.287845 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 16:44:40.563867 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 16:44:40.563867 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 16:44:40.567199 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 16:44:40.567199 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 16 16:44:40.567199 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
May 16 16:44:40.563943 unknown[1029]: wrote ssh authorized keys file for user: core
May 16 16:44:40.616532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 16 16:44:40.874458 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 16 16:44:40.874458 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 16:44:40.920388 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 16 16:44:41.275726 systemd-networkd[853]: eth0: Gained IPv6LL
May 16 16:44:41.433965 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 16 16:44:41.529402 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 16:44:41.529402 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 16 16:44:41.533668 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 16 16:44:41.533668 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 16 16:44:41.533668 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 16 16:44:41.533668 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 16:44:41.533668 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 16:44:41.533668 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 16:44:41.533668 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 16:44:41.546506 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 16:44:41.546506 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 16:44:41.546506 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 16 16:44:41.546506 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 16 16:44:41.546506 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 16 16:44:41.546506 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
May 16 16:44:42.356015 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 16 16:44:42.793341 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 16 16:44:42.793341 ignition[1029]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 16 16:44:42.797465 ignition[1029]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 16:44:42.800331 ignition[1029]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 16:44:42.800331 ignition[1029]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 16 16:44:42.800331 ignition[1029]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 16 16:44:42.805271 ignition[1029]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 16:44:42.805271 ignition[1029]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 16:44:42.805271 ignition[1029]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 16 16:44:42.805271 ignition[1029]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 16 16:44:42.819596 ignition[1029]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 16 16:44:42.825804 ignition[1029]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 16 16:44:42.827402 ignition[1029]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 16 16:44:42.827402 ignition[1029]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 16 16:44:42.827402 ignition[1029]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 16 16:44:42.827402 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 16:44:42.827402 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 16:44:42.827402 ignition[1029]: INFO : files: files passed
May 16 16:44:42.827402 ignition[1029]: INFO : Ignition finished successfully
May 16 16:44:42.835944 systemd[1]: Finished ignition-files.service - Ignition (files).
May 16 16:44:42.838485 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 16 16:44:42.840711 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 16 16:44:42.855967 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 16:44:42.856113 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
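The Ignition "files" stage recorded above (ssh keys for "core", downloaded archives, small files, a sysext symlink, and unit presets) would typically be declared in a Butane config that is transpiled to Ignition JSON before boot. The sketch below is a hypothetical reconstruction from the logged operations only: the file contents, the ssh key value, and the unit bodies are not recoverable from the log and are shown as placeholders.

```yaml
# Hypothetical Butane config (Flatcar variant) matching the logged ops.
# Key/values marked "placeholder" are assumptions, not taken from the log.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...   # placeholder key
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz           # op(3)
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    - path: /opt/bin/cilium.tar.gz                          # op(4)
      contents:
        source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw  # op(b)
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw
    # install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml and
    # /etc/flatcar/update.conf (ops 5-9) would be declared the same way,
    # with inline or remote contents.
  links:
    - path: /etc/extensions/kubernetes.raw                  # op(a)
      target: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
      hard: false
systemd:
  units:
    - name: prepare-helm.service    # op(c)/op(12): written and preset enabled
      enabled: true
      contents: |
        # unit body not recoverable from the log
    - name: coreos-metadata.service # op(e)/op(10): written, preset disabled
      enabled: false
```

Transpiling with `butane --strict` would yield the Ignition JSON that produces a files-stage sequence like the one logged here; the op numbering itself is assigned by Ignition at run time.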
May 16 16:44:42.859110 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
May 16 16:44:42.862964 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:44:42.862964 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:44:42.867094 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:44:42.870967 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 16:44:42.871477 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 16:44:42.875883 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 16:44:42.953895 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 16:44:42.955273 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 16:44:42.959138 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 16:44:42.961668 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 16 16:44:42.964168 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 16 16:44:42.967046 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 16 16:44:43.003727 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 16:44:43.006331 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 16 16:44:43.035413 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 16 16:44:43.035784 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:44:43.038027 systemd[1]: Stopped target timers.target - Timer Units.
May 16 16:44:43.040262 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 16:44:43.040373 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 16:44:43.043790 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 16 16:44:43.044208 systemd[1]: Stopped target basic.target - Basic System.
May 16 16:44:43.044721 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 16 16:44:43.045055 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 16:44:43.045384 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 16 16:44:43.045895 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 16 16:44:43.046225 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 16 16:44:43.046744 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 16:44:43.047085 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 16:44:43.047412 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 16:44:43.047944 systemd[1]: Stopped target swap.target - Swaps.
May 16 16:44:43.048248 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 16:44:43.048359 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 16:44:43.067054 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 16:44:43.067842 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 16:44:43.068132 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 16:44:43.073114 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 16:44:43.073491 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 16:44:43.073636 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 16:44:43.077457 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 16:44:43.077583 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 16:44:43.078234 systemd[1]: Stopped target paths.target - Path Units.
May 16 16:44:43.078659 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 16:44:43.079148 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 16:44:43.082817 systemd[1]: Stopped target slices.target - Slice Units.
May 16 16:44:43.083132 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 16:44:43.083482 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 16:44:43.083580 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 16:44:43.083991 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 16:44:43.084078 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 16:44:43.090366 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 16:44:43.090506 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 16:44:43.092140 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 16:44:43.092252 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 16:44:43.096532 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 16:44:43.097703 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 16:44:43.099804 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 16:44:43.099922 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:44:43.100374 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 16:44:43.100492 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 16:44:43.110294 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 16:44:43.112233 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 16:44:43.129267 ignition[1084]: INFO : Ignition 2.21.0
May 16 16:44:43.129267 ignition[1084]: INFO : Stage: umount
May 16 16:44:43.131239 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:44:43.131239 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:44:43.131239 ignition[1084]: INFO : umount: umount passed
May 16 16:44:43.131239 ignition[1084]: INFO : Ignition finished successfully
May 16 16:44:43.133019 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 16:44:43.133156 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 16:44:43.134564 systemd[1]: Stopped target network.target - Network.
May 16 16:44:43.136002 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 16:44:43.136089 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 16:44:43.136342 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 16:44:43.136394 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 16:44:43.136846 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 16:44:43.136903 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 16:44:43.137162 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 16:44:43.137209 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 16:44:43.137829 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 16:44:43.146348 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 16:44:43.148100 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 16:44:43.157098 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 16:44:43.157228 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 16:44:43.162715 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 16 16:44:43.162983 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 16:44:43.163114 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 16:44:43.166823 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 16 16:44:43.167773 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 16 16:44:43.169046 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 16:44:43.169097 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:44:43.173018 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 16:44:43.175774 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 16:44:43.176945 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 16:44:43.177313 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 16:44:43.177377 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 16:44:43.180951 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 16:44:43.181005 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 16:44:43.182641 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 16:44:43.182699 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 16:44:43.186308 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 16:44:43.188311 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 16:44:43.188377 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 16 16:44:43.200097 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 16:44:43.200241 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 16:44:43.209237 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 16:44:43.209454 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 16:44:43.210110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 16:44:43.210160 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 16:44:43.213136 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 16:44:43.213175 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 16:44:43.213465 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 16:44:43.213515 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 16:44:43.214315 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 16:44:43.214361 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 16:44:43.215137 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 16:44:43.215184 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 16:44:43.216774 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 16:44:43.225794 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 16 16:44:43.225853 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 16 16:44:43.229951 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 16:44:43.230017 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:44:43.233484 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 16 16:44:43.233531 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 16:44:43.236808 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 16:44:43.236856 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 16:44:43.237346 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 16:44:43.237389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:44:43.243478 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 16 16:44:43.243539 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 16 16:44:43.243587 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 16 16:44:43.243636 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 16 16:44:43.257650 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 16:44:43.257777 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 16:44:43.353952 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 16:44:43.354091 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 16:44:43.356996 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 16:44:43.359069 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 16:44:43.360068 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 16:44:43.363019 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 16:44:43.392379 systemd[1]: Switching root.
May 16 16:44:43.776117 systemd-journald[220]: Journal stopped
May 16 16:44:45.780638 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 16 16:44:45.780718 kernel: SELinux: policy capability network_peer_controls=1
May 16 16:44:45.780737 kernel: SELinux: policy capability open_perms=1
May 16 16:44:45.780749 kernel: SELinux: policy capability extended_socket_class=1
May 16 16:44:45.780760 kernel: SELinux: policy capability always_check_network=0
May 16 16:44:45.780771 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 16:44:45.780783 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 16:44:45.780794 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 16:44:45.780805 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 16:44:45.780818 kernel: SELinux: policy capability userspace_initial_context=0
May 16 16:44:45.780830 kernel: audit: type=1403 audit(1747413884.803:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 16:44:45.780842 systemd[1]: Successfully loaded SELinux policy in 54.077ms.
May 16 16:44:45.780860 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.959ms.
May 16 16:44:45.780874 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 16:44:45.780886 systemd[1]: Detected virtualization kvm.
May 16 16:44:45.780898 systemd[1]: Detected architecture x86-64.
May 16 16:44:45.780910 systemd[1]: Detected first boot.
May 16 16:44:45.780922 systemd[1]: Initializing machine ID from VM UUID.
May 16 16:44:45.780936 zram_generator::config[1130]: No configuration found.
May 16 16:44:45.780949 kernel: Guest personality initialized and is inactive
May 16 16:44:45.780965 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 16 16:44:45.780976 kernel: Initialized host personality
May 16 16:44:45.780987 kernel: NET: Registered PF_VSOCK protocol family
May 16 16:44:45.780999 systemd[1]: Populated /etc with preset unit settings.
May 16 16:44:45.781012 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 16 16:44:45.781024 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 16:44:45.781036 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 16:44:45.781050 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 16:44:45.781062 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 16:44:45.781075 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 16:44:45.781087 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 16:44:45.781099 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 16:44:45.781111 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 16:44:45.781123 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 16:44:45.781135 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 16:44:45.781149 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 16:44:45.781161 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 16:44:45.781173 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 16:44:45.781185 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 16:44:45.781197 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 16:44:45.781210 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 16:44:45.781222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 16:44:45.781234 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 16 16:44:45.781248 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 16:44:45.781260 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 16:44:45.781272 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 16:44:45.781284 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 16:44:45.781296 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 16:44:45.781308 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 16:44:45.781320 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:44:45.781331 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 16:44:45.781344 systemd[1]: Reached target slices.target - Slice Units.
May 16 16:44:45.781358 systemd[1]: Reached target swap.target - Swaps.
May 16 16:44:45.781370 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 16:44:45.781382 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 16:44:45.781394 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 16 16:44:45.781406 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:44:45.781431 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 16:44:45.781444 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 16:44:45.781456 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 16:44:45.781468 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 16:44:45.781483 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 16:44:45.781494 systemd[1]: Mounting media.mount - External Media Directory...
May 16 16:44:45.781507 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:44:45.781519 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 16:44:45.781531 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 16:44:45.781543 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 16:44:45.781561 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 16:44:45.781573 systemd[1]: Reached target machines.target - Containers.
May 16 16:44:45.781595 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 16:44:45.781607 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:44:45.781619 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 16:44:45.781631 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 16:44:45.781644 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:44:45.781656 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 16:44:45.781668 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:44:45.781679 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 16:44:45.781692 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:44:45.781706 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 16:44:45.781718 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 16:44:45.781730 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 16:44:45.781743 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 16:44:45.781754 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 16:44:45.781767 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:44:45.781780 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 16:44:45.781792 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 16:44:45.781806 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 16:44:45.781818 kernel: loop: module loaded
May 16 16:44:45.781829 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 16:44:45.781841 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 16 16:44:45.781853 kernel: fuse: init (API version 7.41)
May 16 16:44:45.781865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 16:44:45.781879 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 16:44:45.781891 systemd[1]: Stopped verity-setup.service.
May 16 16:44:45.781903 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:44:45.781915 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 16:44:45.781929 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 16:44:45.781943 systemd[1]: Mounted media.mount - External Media Directory.
May 16 16:44:45.781955 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 16:44:45.781968 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 16:44:45.781980 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 16:44:45.781992 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 16:44:45.782004 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 16:44:45.782016 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 16:44:45.782027 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:44:45.782041 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:44:45.782053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:44:45.782065 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:44:45.782078 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 16:44:45.782089 kernel: ACPI: bus type drm_connector registered
May 16 16:44:45.782101 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 16:44:45.782112 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 16:44:45.782124 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 16:44:45.782136 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:44:45.782150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:44:45.782161 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 16:44:45.782173 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 16:44:45.782185 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 16 16:44:45.782217 systemd-journald[1194]: Collecting audit messages is disabled.
May 16 16:44:45.782246 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 16:44:45.782261 systemd-journald[1194]: Journal started
May 16 16:44:45.782283 systemd-journald[1194]: Runtime Journal (/run/log/journal/dcbbc5b2a76c4019bf6720995fb2cb24) is 6M, max 48.5M, 42.4M free.
May 16 16:44:45.417162 systemd[1]: Queued start job for default target multi-user.target.
May 16 16:44:45.436377 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 16:44:45.436831 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 16:44:45.786870 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 16:44:45.786901 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 16:44:45.786916 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 16:44:45.792513 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 16:44:45.799457 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 16:44:45.801450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:44:45.805046 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 16:44:45.807462 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 16:44:45.811714 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 16:44:45.814551 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 16:44:45.818936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 16:44:45.819010 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 16:44:45.829351 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 16:44:45.833648 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 16:44:45.834731 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 16:44:45.836476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:44:45.842531 kernel: loop0: detected capacity change from 0 to 113872
May 16 16:44:45.840565 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 16:44:45.842009 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 16:44:45.856179 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 16:44:45.865458 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 16:44:45.866628 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 16:44:45.868237 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
May 16 16:44:45.868261 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
May 16 16:44:45.869619 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 16:44:45.874236 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 16:44:45.891186 systemd-journald[1194]: Time spent on flushing to /var/log/journal/dcbbc5b2a76c4019bf6720995fb2cb24 is 20.351ms for 1076 entries.
May 16 16:44:45.891186 systemd-journald[1194]: System Journal (/var/log/journal/dcbbc5b2a76c4019bf6720995fb2cb24) is 8M, max 195.6M, 187.6M free.
May 16 16:44:46.390663 kernel: loop1: detected capacity change from 0 to 146240
May 16 16:44:46.390723 systemd-journald[1194]: Received client request to flush runtime journal.
May 16 16:44:46.390763 kernel: loop2: detected capacity change from 0 to 229808
May 16 16:44:46.390782 kernel: loop3: detected capacity change from 0 to 113872
May 16 16:44:46.390798 kernel: loop4: detected capacity change from 0 to 146240
May 16 16:44:46.390815 kernel: loop5: detected capacity change from 0 to 229808
May 16 16:44:46.390831 zram_generator::config[1295]: No configuration found.
May 16 16:44:45.978352 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 16:44:45.980115 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 16:44:45.983362 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 16:44:46.147187 (sd-merge)[1265]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 16:44:46.147781 (sd-merge)[1265]: Merged extensions into '/usr'.
May 16 16:44:46.151793 systemd[1]: Reload requested from client PID 1217 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 16:44:46.151805 systemd[1]: Reloading...
May 16 16:44:46.351387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:44:46.396975 ldconfig[1213]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 16:44:46.432849 systemd[1]: Reloading finished in 280 ms.
May 16 16:44:46.460243 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 16:44:46.461927 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 16:44:46.463637 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 16:44:46.476113 systemd[1]: Starting ensure-sysext.service...
May 16 16:44:46.478209 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 16:44:46.496325 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 16:44:46.502166 systemd[1]: Reload requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)...
May 16 16:44:46.502186 systemd[1]: Reloading...
May 16 16:44:46.548443 zram_generator::config[1363]: No configuration found.
May 16 16:44:46.643877 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:44:46.726660 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 16:44:46.726877 systemd[1]: Reloading finished in 224 ms.
May 16 16:44:46.747218 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 16:44:46.765841 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 16:44:46.775573 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 16:44:46.778035 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 16:44:46.781208 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:44:46.781496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:44:46.787638 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:44:46.790076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:44:46.793032 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:44:46.794353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:44:46.794494 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:44:46.794611 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:44:46.802218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:44:46.802379 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:44:46.802561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:44:46.802645 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:44:46.802737 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:44:46.805787 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:44:46.806008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:44:46.807621 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 16:44:46.810614 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:44:46.810725 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:44:46.810870 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:44:46.811010 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 16 16:44:46.811293 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 16 16:44:46.811575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:44:46.811615 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 16:44:46.811790 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:44:46.812050 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 16:44:46.812917 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 16:44:46.813085 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
May 16 16:44:46.813114 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
May 16 16:44:46.813608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:44:46.813755 systemd-tmpfiles[1402]: ACLs are not supported, ignoring.
May 16 16:44:46.813829 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:44:46.813963 systemd-tmpfiles[1402]: ACLs are not supported, ignoring.
May 16 16:44:46.815892 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:44:46.816130 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:44:46.818543 systemd-tmpfiles[1402]: Detected autofs mount point /boot during canonicalization of boot.
May 16 16:44:46.818629 systemd-tmpfiles[1402]: Skipping /boot
May 16 16:44:46.828806 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 16:44:46.831031 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:44:46.831253 systemd-tmpfiles[1402]: Detected autofs mount point /boot during canonicalization of boot.
May 16 16:44:46.831268 systemd-tmpfiles[1402]: Skipping /boot
May 16 16:44:46.832984 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 16:44:46.833213 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 16:44:46.837783 systemd[1]: Finished ensure-sysext.service.
May 16 16:44:46.841893 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 16:44:46.841961 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 16:44:46.843730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 16:44:46.866916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 16:44:46.871274 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 16:44:46.874332 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 16:44:46.879207 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 16:44:46.888037 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 16:44:46.893576 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 16:44:46.896627 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 16:44:46.903144 systemd-udevd[1416]: Using default interface naming scheme 'v255'.
May 16 16:44:46.907607 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 16:44:46.916346 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 16:44:46.924125 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 16:44:46.929141 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 16:44:46.945708 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 16:44:46.951518 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 16:44:46.956787 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 16:44:46.962663 augenrules[1452]: No rules
May 16 16:44:46.965566 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 16:44:46.966000 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 16:44:46.973095 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 16:44:46.982059 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 16:44:46.986383 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 16:44:47.073552 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 16 16:44:47.106165 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 16:44:47.109120 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 16:44:47.144477 kernel: mousedev: PS/2 mouse device common for all mice
May 16 16:44:47.148673 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 16 16:44:47.147688 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 16:44:47.154439 kernel: ACPI: button: Power Button [PWRF]
May 16 16:44:47.170667 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 16:44:47.172191 systemd[1]: Reached target time-set.target - System Time Set.
May 16 16:44:47.172531 systemd-networkd[1451]: lo: Link UP
May 16 16:44:47.172536 systemd-networkd[1451]: lo: Gained carrier
May 16 16:44:47.174999 systemd-networkd[1451]: Enumeration completed
May 16 16:44:47.175107 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 16:44:47.176886 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:44:47.176895 systemd-networkd[1451]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 16:44:47.180471 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 16 16:44:47.182204 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 16 16:44:47.182398 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 16 16:44:47.178324 systemd-networkd[1451]: eth0: Link UP
May 16 16:44:47.178565 systemd-networkd[1451]: eth0: Gained carrier
May 16 16:44:47.178589 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:44:47.180754 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 16 16:44:47.182123 systemd-resolved[1421]: Positive Trust Anchors:
May 16 16:44:47.182134 systemd-resolved[1421]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 16:44:47.182166 systemd-resolved[1421]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 16:44:47.185554 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 16:44:47.186023 systemd-resolved[1421]: Defaulting to hostname 'linux'.
May 16 16:44:47.188124 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 16:44:47.189317 systemd[1]: Reached target network.target - Network.
May 16 16:44:47.190245 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 16:44:47.191476 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 16:44:47.192584 systemd-networkd[1451]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 16:44:47.192779 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 16:44:47.194070 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 16:44:47.195388 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 16 16:44:47.196744 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 16:44:47.197979 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 16:44:47.198527 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection.
May 16 16:44:47.199301 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 16:44:47.200641 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 16:44:47.200672 systemd[1]: Reached target paths.target - Path Units.
May 16 16:44:48.759280 systemd-timesyncd[1422]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 16:44:48.759322 systemd-timesyncd[1422]: Initial clock synchronization to Fri 2025-05-16 16:44:48.759204 UTC.
May 16 16:44:48.759402 systemd[1]: Reached target timers.target - Timer Units.
May 16 16:44:48.762242 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 16:44:48.762901 systemd-resolved[1421]: Clock change detected. Flushing caches.
May 16 16:44:48.765037 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 16:44:48.772880 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 16 16:44:48.775509 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 16 16:44:48.778397 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 16 16:44:48.787060 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 16:44:48.788652 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 16 16:44:48.791196 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 16:44:48.794112 systemd[1]: Reached target sockets.target - Socket Units.
May 16 16:44:48.795150 systemd[1]: Reached target basic.target - Basic System.
May 16 16:44:48.797184 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 16:44:48.797211 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 16:44:48.801240 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 16:44:48.806267 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 16:44:48.816661 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 16:44:48.819208 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 16:44:48.823580 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 16:44:48.824646 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 16:44:48.832203 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 16 16:44:48.834370 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 16:44:48.838156 jq[1525]: false
May 16 16:44:48.840130 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 16:44:48.843255 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 16:44:48.846965 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing passwd entry cache
May 16 16:44:48.846884 oslogin_cache_refresh[1527]: Refreshing passwd entry cache
May 16 16:44:48.847474 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 16:44:48.850422 extend-filesystems[1526]: Found loop3
May 16 16:44:48.851738 extend-filesystems[1526]: Found loop4
May 16 16:44:48.851738 extend-filesystems[1526]: Found loop5
May 16 16:44:48.851738 extend-filesystems[1526]: Found sr0
May 16 16:44:48.851738 extend-filesystems[1526]: Found vda
May 16 16:44:48.851738 extend-filesystems[1526]: Found vda1
May 16 16:44:48.851738 extend-filesystems[1526]: Found vda2
May 16 16:44:48.851738 extend-filesystems[1526]: Found vda3
May 16 16:44:48.851738 extend-filesystems[1526]: Found usr
May 16 16:44:48.851738 extend-filesystems[1526]: Found vda4
May 16 16:44:48.851738 extend-filesystems[1526]: Found vda6
May 16 16:44:48.851738 extend-filesystems[1526]: Found vda7
May 16 16:44:48.851738 extend-filesystems[1526]: Found vda9
May 16 16:44:48.851738 extend-filesystems[1526]: Checking size of /dev/vda9
May 16 16:44:48.879372 extend-filesystems[1526]: Resized partition /dev/vda9
May 16 16:44:48.860856 oslogin_cache_refresh[1527]: Failure getting users, quitting
May 16 16:44:48.880313 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting users, quitting
May 16 16:44:48.880313 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 16 16:44:48.880313 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing group entry cache
May 16 16:44:48.880313 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting groups, quitting
May 16 16:44:48.880313 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 16 16:44:48.855240 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 16:44:48.880464 extend-filesystems[1539]: resize2fs 1.47.2 (1-Jan-2025)
May 16 16:44:48.860873 oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 16 16:44:48.862369 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 16:44:48.860917 oslogin_cache_refresh[1527]: Refreshing group entry cache
May 16 16:44:48.862882 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 16:44:48.874501 oslogin_cache_refresh[1527]: Failure getting groups, quitting
May 16 16:44:48.865283 systemd[1]: Starting update-engine.service - Update Engine...
May 16 16:44:48.874512 oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 16 16:44:48.867948 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 16:44:48.896082 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 16:44:48.906421 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 16 16:44:48.912211 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 16:44:48.913581 jq[1540]: true
May 16 16:44:48.913974 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 16:44:48.914247 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 16:44:48.914609 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 16 16:44:48.914841 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 16 16:44:48.923079 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 16:44:48.923396 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 16:44:48.923662 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 16:44:48.937541 update_engine[1538]: I20250516 16:44:48.937433 1538 main.cc:92] Flatcar Update Engine starting
May 16 16:44:48.948311 jq[1548]: true
May 16 16:44:48.950492 extend-filesystems[1539]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 16:44:48.950492 extend-filesystems[1539]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 16:44:48.950492 extend-filesystems[1539]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 16:44:48.954096 extend-filesystems[1526]: Resized filesystem in /dev/vda9
May 16 16:44:48.952186 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 16:44:48.953293 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 16:44:48.959543 systemd[1]: motdgen.service: Deactivated successfully.
May 16 16:44:48.959820 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 16:44:48.960954 (ntainerd)[1561]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 16:44:48.975898 tar[1546]: linux-amd64/LICENSE
May 16 16:44:48.976574 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:44:48.978908 tar[1546]: linux-amd64/helm
May 16 16:44:48.993376 kernel: kvm_amd: TSC scaling supported
May 16 16:44:48.993455 kernel: kvm_amd: Nested Virtualization enabled
May 16 16:44:48.993469 kernel: kvm_amd: Nested Paging enabled
May 16 16:44:48.994316 kernel: kvm_amd: LBR virtualization supported
May 16 16:44:48.995223 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 16 16:44:48.995249 kernel: kvm_amd: Virtual GIF supported
May 16 16:44:49.000786 dbus-daemon[1522]: [system] SELinux support is enabled
May 16 16:44:49.007847 update_engine[1538]: I20250516 16:44:49.003504 1538 update_check_scheduler.cc:74] Next update check in 5m9s
May 16 16:44:49.000979 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 16:44:49.021808 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 16:44:49.023409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:44:49.060629 systemd[1]: Started update-engine.service - Update Engine.
May 16 16:44:49.062096 bash[1586]: Updated "/home/core/.ssh/authorized_keys"
May 16 16:44:49.062807 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 16:44:49.062833 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 16:44:49.066317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:44:49.067516 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 16:44:49.067535 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 16 16:44:49.070601 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 16 16:44:49.073165 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 16 16:44:49.075829 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 16 16:44:49.104076 kernel: EDAC MC: Ver: 3.0.0
May 16 16:44:49.121260 systemd-logind[1534]: Watching system buttons on /dev/input/event2 (Power Button)
May 16 16:44:49.124364 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 16 16:44:49.121285 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 16 16:44:49.121699 systemd-logind[1534]: New seat seat0.
May 16 16:44:49.122784 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 16:44:49.156086 locksmithd[1592]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 16:44:49.161109 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 16 16:44:49.163405 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 16 16:44:49.189207 systemd[1]: issuegen.service: Deactivated successfully.
May 16 16:44:49.189833 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 16 16:44:49.192816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:44:49.198644 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 16 16:44:49.209774 containerd[1561]: time="2025-05-16T16:44:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 16 16:44:49.212747 containerd[1561]: time="2025-05-16T16:44:49.212665357Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 16 16:44:49.218290 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 16 16:44:49.220935 containerd[1561]: time="2025-05-16T16:44:49.220839630Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.357µs"
May 16 16:44:49.220935 containerd[1561]: time="2025-05-16T16:44:49.220876990Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 16 16:44:49.220935 containerd[1561]: time="2025-05-16T16:44:49.220896006Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 16 16:44:49.221120 containerd[1561]: time="2025-05-16T16:44:49.221102854Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 16 16:44:49.221190 containerd[1561]: time="2025-05-16T16:44:49.221122480Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 16 16:44:49.221190 containerd[1561]: time="2025-05-16T16:44:49.221146255Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 16 16:44:49.221393 containerd[1561]: time="2025-05-16T16:44:49.221236384Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 16 16:44:49.221393 containerd[1561]: time="2025-05-16T16:44:49.221256402Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 16 16:44:49.221571 containerd[1561]: time="2025-05-16T16:44:49.221544963Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 16 16:44:49.221571 containerd[1561]: time="2025-05-16T16:44:49.221565351Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 16 16:44:49.221643 containerd[1561]: time="2025-05-16T16:44:49.221578506Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 16 16:44:49.221643 containerd[1561]: time="2025-05-16T16:44:49.221589827Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 16 16:44:49.221716 containerd[1561]: time="2025-05-16T16:44:49.221693602Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 16 16:44:49.222015 containerd[1561]: time="2025-05-16T16:44:49.221943260Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 16 16:44:49.222015 containerd[1561]: time="2025-05-16T16:44:49.221985970Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 16 16:44:49.222015 containerd[1561]: time="2025-05-16T16:44:49.222000297Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 16 16:44:49.222871 containerd[1561]: time="2025-05-16T16:44:49.222037176Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 16 16:44:49.222871 containerd[1561]: time="2025-05-16T16:44:49.222318965Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 16 16:44:49.222871 containerd[1561]: time="2025-05-16T16:44:49.222400187Z" level=info msg="metadata content store policy set" policy=shared
May 16 16:44:49.222196 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 16 16:44:49.225238 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 16 16:44:49.226735 systemd[1]: Reached target getty.target - Login Prompts.
May 16 16:44:49.320237 containerd[1561]: time="2025-05-16T16:44:49.320167951Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 16 16:44:49.320237 containerd[1561]: time="2025-05-16T16:44:49.320241469Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 16 16:44:49.320237 containerd[1561]: time="2025-05-16T16:44:49.320257479Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 16 16:44:49.320427 containerd[1561]: time="2025-05-16T16:44:49.320268941Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 16 16:44:49.320427 containerd[1561]: time="2025-05-16T16:44:49.320281685Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 16 16:44:49.320751 containerd[1561]: time="2025-05-16T16:44:49.320678639Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 16 16:44:49.320805 containerd[1561]: time="2025-05-16T16:44:49.320787644Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 16 16:44:49.320837 containerd[1561]: time="2025-05-16T16:44:49.320809695Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 16 16:44:49.320837 containerd[1561]: time="2025-05-16T16:44:49.320828751Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 16 16:44:49.320879 containerd[1561]: time="2025-05-16T16:44:49.320845733Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 16 16:44:49.320879 containerd[1561]: time="2025-05-16T16:44:49.320860380Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 16 16:44:49.320914 containerd[1561]: time="2025-05-16T16:44:49.320879075Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 16 16:44:49.321621 containerd[1561]: time="2025-05-16T16:44:49.321589909Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 16 16:44:49.321659 containerd[1561]: time="2025-05-16T16:44:49.321632048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 16 16:44:49.321659 containerd[1561]: time="2025-05-16T16:44:49.321653778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 16 16:44:49.321707 containerd[1561]: time="2025-05-16T16:44:49.321666472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 16 16:44:49.321707 containerd[1561]: time="2025-05-16T16:44:49.321680499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 16 16:44:49.321707 containerd[1561]: time="2025-05-16T16:44:49.321694415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 16 16:44:49.321763 containerd[1561]: time="2025-05-16T16:44:49.321716827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 16 16:44:49.321763 containerd[1561]: time="2025-05-16T16:44:49.321730492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 16 16:44:49.321763 containerd[1561]: time="2025-05-16T16:44:49.321745030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 16 16:44:49.321763 containerd[1561]: time="2025-05-16T16:44:49.321760148Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 16 16:44:49.321851 containerd[1561]: time="2025-05-16T16:44:49.321773814Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 16 16:44:49.321875 containerd[1561]: time="2025-05-16T16:44:49.321853483Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 16 16:44:49.321898 containerd[1561]: time="2025-05-16T16:44:49.321876176Z" level=info msg="Start snapshots syncer"
May 16 16:44:49.321922 containerd[1561]: time="2025-05-16T16:44:49.321908727Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 16 16:44:49.322266 containerd[1561]: time="2025-05-16T16:44:49.322227535Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 16 16:44:49.322369 containerd[1561]: time="2025-05-16T16:44:49.322287567Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 16 16:44:49.323141 containerd[1561]: time="2025-05-16T16:44:49.323101314Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 16 16:44:49.323381 containerd[1561]: time="2025-05-16T16:44:49.323349319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 16 16:44:49.323413 containerd[1561]: time="2025-05-16T16:44:49.323381309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 16 16:44:49.323413 containerd[1561]: time="2025-05-16T16:44:49.323393552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 16 16:44:49.323413 containerd[1561]: time="2025-05-16T16:44:49.323404282Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 16 16:44:49.323482 containerd[1561]: time="2025-05-16T16:44:49.323416525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 16 16:44:49.323482 containerd[1561]: time="2025-05-16T16:44:49.323428477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 16 16:44:49.323482 containerd[1561]: time="2025-05-16T16:44:49.323439027Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 16 16:44:49.323482 containerd[1561]: time="2025-05-16T16:44:49.323462671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 16 16:44:49.323482 containerd[1561]: time="2025-05-16T16:44:49.323473923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 16 16:44:49.323482 containerd[1561]: time="2025-05-16T16:44:49.323484823Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 16 16:44:49.323622 containerd[1561]: time="2025-05-16T16:44:49.323518576Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 16 16:44:49.323622 containerd[1561]: time="2025-05-16T16:44:49.323532432Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 16 16:44:49.323622 containerd[1561]: time="2025-05-16T16:44:49.323541880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 16 16:44:49.323622 containerd[1561]: time="2025-05-16T16:44:49.323557940Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 16 16:44:49.323622 containerd[1561]: time="2025-05-16T16:44:49.323567929Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 16 16:44:49.323622 containerd[1561]: time="2025-05-16T16:44:49.323582055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 16 16:44:49.323622 containerd[1561]: time="2025-05-16T16:44:49.323593938Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 16 16:44:49.323622 containerd[1561]: time="2025-05-16T16:44:49.323612843Z" level=info msg="runtime interface created"
May 16 16:44:49.323622 containerd[1561]: time="2025-05-16T16:44:49.323618233Z" level=info msg="created NRI interface"
May 16 16:44:49.323622 containerd[1561]: time="2025-05-16T16:44:49.323627871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 16 16:44:49.323820 containerd[1561]: time="2025-05-16T16:44:49.323638772Z" level=info msg="Connect containerd service"
May 16 16:44:49.323820 containerd[1561]: time="2025-05-16T16:44:49.323661805Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 16 16:44:49.324398 containerd[1561]: time="2025-05-16T16:44:49.324362449Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 16:44:49.410994 containerd[1561]: time="2025-05-16T16:44:49.410931658Z" level=info msg="Start subscribing containerd event"
May 16 16:44:49.411161 containerd[1561]: time="2025-05-16T16:44:49.411099874Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 16 16:44:49.411271 containerd[1561]: time="2025-05-16T16:44:49.411163804Z" level=info msg="Start recovering state"
May 16 16:44:49.411359 containerd[1561]: time="2025-05-16T16:44:49.411338511Z" level=info msg="Start event monitor"
May 16 16:44:49.411359 containerd[1561]: time="2025-05-16T16:44:49.411182960Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 16 16:44:49.412617 containerd[1561]: time="2025-05-16T16:44:49.411359340Z" level=info msg="Start cni network conf syncer for default"
May 16 16:44:49.412617 containerd[1561]: time="2025-05-16T16:44:49.411381662Z" level=info msg="Start streaming server"
May 16 16:44:49.412617 containerd[1561]: time="2025-05-16T16:44:49.411391070Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 16 16:44:49.412617 containerd[1561]: time="2025-05-16T16:44:49.411398654Z" level=info msg="runtime interface starting up..."
May 16 16:44:49.412617 containerd[1561]: time="2025-05-16T16:44:49.411404255Z" level=info msg="starting plugins..."
May 16 16:44:49.412617 containerd[1561]: time="2025-05-16T16:44:49.411425715Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 16 16:44:49.412617 containerd[1561]: time="2025-05-16T16:44:49.411998269Z" level=info msg="containerd successfully booted in 0.202935s"
May 16 16:44:49.411662 systemd[1]: Started containerd.service - containerd container runtime.
May 16 16:44:49.455009 tar[1546]: linux-amd64/README.md
May 16 16:44:49.478329 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 16 16:44:50.705271 systemd-networkd[1451]: eth0: Gained IPv6LL
May 16 16:44:50.708602 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 16 16:44:50.710754 systemd[1]: Reached target network-online.target - Network is Online.
May 16 16:44:50.713853 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 16 16:44:50.716750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:44:50.734911 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 16 16:44:50.760005 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 16 16:44:50.760342 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 16 16:44:50.762590 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 16 16:44:50.765259 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 16 16:44:51.445214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:44:51.447031 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 16:44:51.449143 systemd[1]: Startup finished in 2.877s (kernel) + 8.158s (initrd) + 5.132s (userspace) = 16.168s.
May 16 16:44:51.452743 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 16:44:51.849813 kubelet[1665]: E0516 16:44:51.849674 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 16:44:51.858797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 16:44:51.858998 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 16:44:51.859381 systemd[1]: kubelet.service: Consumed 968ms CPU time, 267.9M memory peak.
May 16 16:44:51.861291 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 16 16:44:51.862632 systemd[1]: Started sshd@0-10.0.0.104:22-10.0.0.1:56444.service - OpenSSH per-connection server daemon (10.0.0.1:56444).
May 16 16:44:51.924966 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 56444 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:44:51.926583 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:44:51.933394 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 16:44:51.934574 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 16:44:51.941449 systemd-logind[1534]: New session 1 of user core.
May 16 16:44:51.958103 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 16:44:51.961434 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 16:44:51.983390 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 16:44:51.985723 systemd-logind[1534]: New session c1 of user core.
May 16 16:44:52.137115 systemd[1683]: Queued start job for default target default.target.
May 16 16:44:52.156287 systemd[1683]: Created slice app.slice - User Application Slice.
May 16 16:44:52.156312 systemd[1683]: Reached target paths.target - Paths.
May 16 16:44:52.156351 systemd[1683]: Reached target timers.target - Timers.
May 16 16:44:52.157775 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 16:44:52.168847 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 16:44:52.168962 systemd[1683]: Reached target sockets.target - Sockets.
May 16 16:44:52.169003 systemd[1683]: Reached target basic.target - Basic System.
May 16 16:44:52.169062 systemd[1683]: Reached target default.target - Main User Target.
May 16 16:44:52.169121 systemd[1683]: Startup finished in 177ms.
May 16 16:44:52.169821 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 16:44:52.171965 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 16:44:52.240755 systemd[1]: Started sshd@1-10.0.0.104:22-10.0.0.1:56448.service - OpenSSH per-connection server daemon (10.0.0.1:56448).
May 16 16:44:52.291207 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 56448 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:44:52.292492 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:44:52.296832 systemd-logind[1534]: New session 2 of user core.
May 16 16:44:52.313178 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 16:44:52.367468 sshd[1696]: Connection closed by 10.0.0.1 port 56448
May 16 16:44:52.367728 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
May 16 16:44:52.386196 systemd[1]: sshd@1-10.0.0.104:22-10.0.0.1:56448.service: Deactivated successfully.
May 16 16:44:52.388227 systemd[1]: session-2.scope: Deactivated successfully.
May 16 16:44:52.388926 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit.
May 16 16:44:52.392694 systemd[1]: Started sshd@2-10.0.0.104:22-10.0.0.1:56450.service - OpenSSH per-connection server daemon (10.0.0.1:56450).
May 16 16:44:52.393393 systemd-logind[1534]: Removed session 2.
May 16 16:44:52.450711 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 56450 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:44:52.452410 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:44:52.456867 systemd-logind[1534]: New session 3 of user core.
May 16 16:44:52.472164 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 16:44:52.521590 sshd[1704]: Connection closed by 10.0.0.1 port 56450
May 16 16:44:52.521913 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
May 16 16:44:52.533396 systemd[1]: sshd@2-10.0.0.104:22-10.0.0.1:56450.service: Deactivated successfully.
May 16 16:44:52.534978 systemd[1]: session-3.scope: Deactivated successfully.
May 16 16:44:52.535822 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit.
May 16 16:44:52.537870 systemd-logind[1534]: Removed session 3.
May 16 16:44:52.539455 systemd[1]: Started sshd@3-10.0.0.104:22-10.0.0.1:56456.service - OpenSSH per-connection server daemon (10.0.0.1:56456).
May 16 16:44:52.597156 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 56456 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:44:52.598689 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:44:52.603331 systemd-logind[1534]: New session 4 of user core.
May 16 16:44:52.614196 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 16:44:52.668800 sshd[1712]: Connection closed by 10.0.0.1 port 56456
May 16 16:44:52.669073 sshd-session[1710]: pam_unix(sshd:session): session closed for user core
May 16 16:44:52.679460 systemd[1]: sshd@3-10.0.0.104:22-10.0.0.1:56456.service: Deactivated successfully.
May 16 16:44:52.681136 systemd[1]: session-4.scope: Deactivated successfully.
May 16 16:44:52.681831 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit.
May 16 16:44:52.684520 systemd[1]: Started sshd@4-10.0.0.104:22-10.0.0.1:56462.service - OpenSSH per-connection server daemon (10.0.0.1:56462).
May 16 16:44:52.685204 systemd-logind[1534]: Removed session 4.
May 16 16:44:52.733134 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 56462 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:44:52.734481 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:44:52.738496 systemd-logind[1534]: New session 5 of user core.
May 16 16:44:52.751177 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 16:44:52.810239 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 16:44:52.810558 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:44:52.829814 sudo[1721]: pam_unix(sudo:session): session closed for user root
May 16 16:44:52.831209 sshd[1720]: Connection closed by 10.0.0.1 port 56462
May 16 16:44:52.831600 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
May 16 16:44:52.846743 systemd[1]: sshd@4-10.0.0.104:22-10.0.0.1:56462.service: Deactivated successfully.
May 16 16:44:52.848738 systemd[1]: session-5.scope: Deactivated successfully.
May 16 16:44:52.849502 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit.
May 16 16:44:52.852601 systemd[1]: Started sshd@5-10.0.0.104:22-10.0.0.1:56472.service - OpenSSH per-connection server daemon (10.0.0.1:56472).
May 16 16:44:52.853224 systemd-logind[1534]: Removed session 5.
May 16 16:44:52.900639 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 56472 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:44:52.902130 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:44:52.906380 systemd-logind[1534]: New session 6 of user core.
May 16 16:44:52.921189 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 16:44:52.974533 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 16:44:52.974841 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:44:52.980982 sudo[1732]: pam_unix(sudo:session): session closed for user root
May 16 16:44:52.987322 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 16:44:52.987617 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:44:52.997000 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 16:44:53.041408 augenrules[1754]: No rules
May 16 16:44:53.042356 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 16:44:53.042717 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 16:44:53.043821 sudo[1731]: pam_unix(sudo:session): session closed for user root
May 16 16:44:53.045345 sshd[1730]: Connection closed by 10.0.0.1 port 56472
May 16 16:44:53.045666 sshd-session[1727]: pam_unix(sshd:session): session closed for user core
May 16 16:44:53.053930 systemd[1]: sshd@5-10.0.0.104:22-10.0.0.1:56472.service: Deactivated successfully.
May 16 16:44:53.055881 systemd[1]: session-6.scope: Deactivated successfully.
May 16 16:44:53.056723 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit.
May 16 16:44:53.059295 systemd[1]: Started sshd@6-10.0.0.104:22-10.0.0.1:56474.service - OpenSSH per-connection server daemon (10.0.0.1:56474).
May 16 16:44:53.059964 systemd-logind[1534]: Removed session 6.
May 16 16:44:53.111964 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 56474 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:44:53.113414 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:44:53.117730 systemd-logind[1534]: New session 7 of user core.
May 16 16:44:53.127225 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 16:44:53.180397 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 16:44:53.180703 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:44:53.489348 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 16:44:53.511456 (dockerd)[1786]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 16:44:53.724910 dockerd[1786]: time="2025-05-16T16:44:53.724835078Z" level=info msg="Starting up"
May 16 16:44:53.726184 dockerd[1786]: time="2025-05-16T16:44:53.726161897Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 16 16:44:54.446886 dockerd[1786]: time="2025-05-16T16:44:54.446821208Z" level=info msg="Loading containers: start."
May 16 16:44:54.457074 kernel: Initializing XFRM netlink socket
May 16 16:44:54.704595 systemd-networkd[1451]: docker0: Link UP
May 16 16:44:54.710384 dockerd[1786]: time="2025-05-16T16:44:54.710333982Z" level=info msg="Loading containers: done."
May 16 16:44:54.723589 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck326523682-merged.mount: Deactivated successfully.
May 16 16:44:54.724327 dockerd[1786]: time="2025-05-16T16:44:54.724277246Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 16:44:54.724405 dockerd[1786]: time="2025-05-16T16:44:54.724360933Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 16 16:44:54.724515 dockerd[1786]: time="2025-05-16T16:44:54.724485707Z" level=info msg="Initializing buildkit"
May 16 16:44:54.755990 dockerd[1786]: time="2025-05-16T16:44:54.755942861Z" level=info msg="Completed buildkit initialization"
May 16 16:44:54.761987 dockerd[1786]: time="2025-05-16T16:44:54.761952073Z" level=info msg="Daemon has completed initialization"
May 16 16:44:54.762091 dockerd[1786]: time="2025-05-16T16:44:54.762026954Z" level=info msg="API listen on /run/docker.sock"
May 16 16:44:54.762251 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 16:44:55.282892 containerd[1561]: time="2025-05-16T16:44:55.282844208Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 16 16:44:55.977163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217838234.mount: Deactivated successfully.
May 16 16:44:57.126321 containerd[1561]: time="2025-05-16T16:44:57.126257995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:44:57.174612 containerd[1561]: time="2025-05-16T16:44:57.174539387Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403"
May 16 16:44:57.192403 containerd[1561]: time="2025-05-16T16:44:57.192344515Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:44:57.216633 containerd[1561]: time="2025-05-16T16:44:57.216594230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:44:57.217605 containerd[1561]: time="2025-05-16T16:44:57.217536708Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.93463402s"
May 16 16:44:57.217605 containerd[1561]: time="2025-05-16T16:44:57.217596069Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\""
May 16 16:44:57.218202 containerd[1561]: time="2025-05-16T16:44:57.218171188Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 16 16:44:58.492131 containerd[1561]: time="2025-05-16T16:44:58.492066642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:44:58.492766 containerd[1561]: time="2025-05-16T16:44:58.492713114Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390"
May 16 16:44:58.493826 containerd[1561]: time="2025-05-16T16:44:58.493795114Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:44:58.496088 containerd[1561]: time="2025-05-16T16:44:58.496064601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:44:58.496990 containerd[1561]: time="2025-05-16T16:44:58.496959980Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.278752424s"
May 16 16:44:58.496990 containerd[1561]: time="2025-05-16T16:44:58.496988484Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\""
May 16 16:44:58.497438 containerd[1561]: time="2025-05-16T16:44:58.497396168Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 16 16:44:59.780969 containerd[1561]: time="2025-05-16T16:44:59.780906377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:44:59.781689 containerd[1561]: time="2025-05-16T16:44:59.781651845Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960"
May 16 16:44:59.782940 containerd[1561]: time="2025-05-16T16:44:59.782905687Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:44:59.785355 containerd[1561]: time="2025-05-16T16:44:59.785304657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:44:59.786112 containerd[1561]: time="2025-05-16T16:44:59.786083898Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.288648827s"
May 16 16:44:59.786166 containerd[1561]: time="2025-05-16T16:44:59.786115307Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\""
May 16 16:44:59.786579 containerd[1561]: time="2025-05-16T16:44:59.786546576Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 16 16:45:01.278321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32287118.mount: Deactivated successfully.
May 16 16:45:01.571295 containerd[1561]: time="2025-05-16T16:45:01.571164619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:45:01.571992 containerd[1561]: time="2025-05-16T16:45:01.571931668Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075"
May 16 16:45:01.573361 containerd[1561]: time="2025-05-16T16:45:01.573327767Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:45:01.575171 containerd[1561]: time="2025-05-16T16:45:01.575123145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:45:01.575620 containerd[1561]: time="2025-05-16T16:45:01.575587736Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.789011695s"
May 16 16:45:01.575620 containerd[1561]: time="2025-05-16T16:45:01.575617301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\""
May 16 16:45:01.576128 containerd[1561]: time="2025-05-16T16:45:01.576101189Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 16 16:45:02.109429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 16:45:02.110967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:45:02.405473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:45:02.422543 (kubelet)[2077]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 16:45:02.657581 kubelet[2077]: E0516 16:45:02.657458 2077 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 16:45:02.665144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 16:45:02.665355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 16:45:02.665788 systemd[1]: kubelet.service: Consumed 232ms CPU time, 110.9M memory peak.
May 16 16:45:02.844528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033948507.mount: Deactivated successfully.
May 16 16:45:04.086309 containerd[1561]: time="2025-05-16T16:45:04.086244320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:45:04.104351 containerd[1561]: time="2025-05-16T16:45:04.104313453Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
May 16 16:45:04.119871 containerd[1561]: time="2025-05-16T16:45:04.119810400Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:45:04.147696 containerd[1561]: time="2025-05-16T16:45:04.147650390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:45:04.148658 containerd[1561]: time="2025-05-16T16:45:04.148616032Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.57248671s"
May 16 16:45:04.148747 containerd[1561]: time="2025-05-16T16:45:04.148659663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
May 16 16:45:04.149264 containerd[1561]: time="2025-05-16T16:45:04.149222549Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 16 16:45:04.763224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819651427.mount: Deactivated successfully.
May 16 16:45:04.769319 containerd[1561]: time="2025-05-16T16:45:04.769246617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 16:45:04.770062 containerd[1561]: time="2025-05-16T16:45:04.770014888Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 16 16:45:04.771130 containerd[1561]: time="2025-05-16T16:45:04.771073344Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 16:45:04.773037 containerd[1561]: time="2025-05-16T16:45:04.772987805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 16:45:04.773667 containerd[1561]: time="2025-05-16T16:45:04.773621463Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 624.358498ms"
May 16 16:45:04.773667 containerd[1561]: time="2025-05-16T16:45:04.773650307Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 16 16:45:04.774375 containerd[1561]: time="2025-05-16T16:45:04.774325464Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 16 16:45:07.119693 containerd[1561]: time="2025-05-16T16:45:07.119617231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:45:07.123893 containerd[1561]: time="2025-05-16T16:45:07.123848157Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739"
May 16 16:45:07.125545 containerd[1561]: time="2025-05-16T16:45:07.125473075Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:45:07.128162 containerd[1561]: time="2025-05-16T16:45:07.128123476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:45:07.129124 containerd[1561]: time="2025-05-16T16:45:07.129083407Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.354717006s"
May 16 16:45:07.129124 containerd[1561]: time="2025-05-16T16:45:07.129112461Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
May 16 16:45:09.988559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:45:09.988735 systemd[1]: kubelet.service: Consumed 232ms CPU time, 110.9M memory peak.
May 16 16:45:09.991136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:45:10.016924 systemd[1]: Reload requested from client PID 2182 ('systemctl') (unit session-7.scope)...
May 16 16:45:10.016941 systemd[1]: Reloading...
May 16 16:45:10.107276 zram_generator::config[2228]: No configuration found.
May 16 16:45:10.347208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:45:10.468133 systemd[1]: Reloading finished in 450 ms.
May 16 16:45:10.536767 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 16 16:45:10.536871 systemd[1]: kubelet.service: Failed with result 'signal'.
May 16 16:45:10.537240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:45:10.537285 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.2M memory peak.
May 16 16:45:10.538986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:45:10.714142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:45:10.733447 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 16:45:10.771284 kubelet[2273]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 16:45:10.771284 kubelet[2273]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 16 16:45:10.771284 kubelet[2273]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 16:45:10.771693 kubelet[2273]: I0516 16:45:10.771320 2273 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 16:45:11.113446 kubelet[2273]: I0516 16:45:11.113319 2273 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 16 16:45:11.113446 kubelet[2273]: I0516 16:45:11.113349 2273 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 16:45:11.113637 kubelet[2273]: I0516 16:45:11.113611 2273 server.go:956] "Client rotation is on, will bootstrap in background"
May 16 16:45:11.140758 kubelet[2273]: E0516 16:45:11.140695 2273 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 16 16:45:11.142338 kubelet[2273]: I0516 16:45:11.142285 2273 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 16:45:11.148764 kubelet[2273]: I0516 16:45:11.148733 2273 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 16 16:45:11.156622 kubelet[2273]: I0516 16:45:11.156564 2273 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 16:45:11.157034 kubelet[2273]: I0516 16:45:11.156978 2273 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 16:45:11.157282 kubelet[2273]: I0516 16:45:11.157027 2273 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 16:45:11.157395 kubelet[2273]: I0516 16:45:11.157285 2273 topology_manager.go:138] "Creating topology manager with none policy"
May 16 16:45:11.157395 kubelet[2273]: I0516 16:45:11.157298 2273 container_manager_linux.go:303] "Creating device plugin manager"
May 16 16:45:11.158279 kubelet[2273]: I0516 16:45:11.158246 2273 state_mem.go:36] "Initialized new in-memory state store"
May 16 16:45:11.160851 kubelet[2273]: I0516 16:45:11.160815 2273 kubelet.go:480] "Attempting to sync node with API server"
May 16 16:45:11.160851 kubelet[2273]: I0516 16:45:11.160847 2273 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 16:45:11.160912 kubelet[2273]: I0516 16:45:11.160884 2273 kubelet.go:386] "Adding apiserver pod source"
May 16 16:45:11.162828 kubelet[2273]: I0516 16:45:11.162448 2273 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 16:45:11.169218 kubelet[2273]: I0516 16:45:11.169176 2273 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 16 16:45:11.169729 kubelet[2273]: I0516 16:45:11.169684 2273 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 16 16:45:11.171068 kubelet[2273]: W0516 16:45:11.171022 2273 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 16 16:45:11.172954 kubelet[2273]: E0516 16:45:11.172788 2273 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 16 16:45:11.173450 kubelet[2273]: E0516 16:45:11.173397 2273 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 16 16:45:11.174959 kubelet[2273]: I0516 16:45:11.174916 2273 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 16 16:45:11.175174 kubelet[2273]: I0516 16:45:11.175148 2273 server.go:1289] "Started kubelet"
May 16 16:45:11.177757 kubelet[2273]: I0516 16:45:11.176506 2273 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 16 16:45:11.178084 kubelet[2273]: I0516 16:45:11.178008 2273 server.go:317] "Adding debug handlers to kubelet server"
May 16 16:45:11.181064 kubelet[2273]: I0516 16:45:11.180919 2273 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 16:45:11.181318 kubelet[2273]: I0516 16:45:11.181293 2273 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 16:45:11.184187 kubelet[2273]: E0516 16:45:11.184145 2273 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 16:45:11.184379 kubelet[2273]: I0516 16:45:11.184344 2273 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 16:45:11.185467 kubelet[2273]: I0516 16:45:11.184665 2273 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 16:45:11.186010 kubelet[2273]: I0516 16:45:11.185974 2273 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 16 16:45:11.186255 kubelet[2273]: I0516 16:45:11.186115 2273 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 16 16:45:11.186255 kubelet[2273]: E0516 16:45:11.186142 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:45:11.186338 kubelet[2273]: I0516 16:45:11.186320 2273 reconciler.go:26] "Reconciler: start to sync state"
May 16 16:45:11.186634 kubelet[2273]: E0516 16:45:11.184708 2273 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.104:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18400fb1ab28a1b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 16:45:11.174947251 +0000 UTC m=+0.437031678,LastTimestamp:2025-05-16 16:45:11.174947251 +0000 UTC m=+0.437031678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 16 16:45:11.187369 kubelet[2273]: E0516 16:45:11.187328 2273 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 16 16:45:11.188066 kubelet[2273]: I0516 16:45:11.187440 2273 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 16:45:11.188066 kubelet[2273]: E0516 16:45:11.187442 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="200ms"
May 16 16:45:11.188874 kubelet[2273]: I0516 16:45:11.188855 2273 factory.go:223] Registration of the containerd container factory successfully
May 16 16:45:11.188874 kubelet[2273]: I0516 16:45:11.188869 2273 factory.go:223] Registration of the systemd container factory successfully
May 16 16:45:11.192994 kubelet[2273]: I0516 16:45:11.192936 2273 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 16 16:45:11.204245 kubelet[2273]: I0516 16:45:11.204206 2273 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 16 16:45:11.204245 kubelet[2273]: I0516 16:45:11.204245 2273 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 16 16:45:11.204387 kubelet[2273]: I0516 16:45:11.204261 2273 state_mem.go:36] "Initialized new in-memory state store"
May 16 16:45:11.287141 kubelet[2273]: E0516 16:45:11.287032 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:45:11.387654 kubelet[2273]: E0516 16:45:11.387591 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:45:11.389245 kubelet[2273]: E0516 16:45:11.389200 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="400ms"
May 16 16:45:11.488676 kubelet[2273]: E0516 16:45:11.488602 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:45:11.589184 kubelet[2273]: E0516 16:45:11.589103 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:45:11.689504 kubelet[2273]: E0516 16:45:11.689334 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:45:11.790021 kubelet[2273]: E0516 16:45:11.789937 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:45:11.790605 kubelet[2273]: E0516 16:45:11.790549 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="800ms"
May 16 16:45:11.891154 kubelet[2273]: E0516 16:45:11.891066 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:45:11.991843 kubelet[2273]: E0516 16:45:11.991650 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:45:12.028887 kubelet[2273]: I0516 16:45:12.028837 2273 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 16 16:45:12.028887 kubelet[2273]: I0516 16:45:12.028874 2273 status_manager.go:230] "Starting to sync pod status with apiserver"
May 16 16:45:12.028887 kubelet[2273]: I0516 16:45:12.028895 2273 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 16 16:45:12.028887 kubelet[2273]: I0516 16:45:12.028902 2273 kubelet.go:2436] "Starting kubelet main sync loop"
May 16 16:45:12.029157 kubelet[2273]: E0516 16:45:12.028949 2273 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 16:45:12.029500 kubelet[2273]: E0516 16:45:12.029417 2273 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 16 16:45:12.042470 kubelet[2273]: I0516 16:45:12.042426 2273 policy_none.go:49] "None policy: Start"
May 16 16:45:12.042470 kubelet[2273]: I0516 16:45:12.042450 2273 memory_manager.go:186] "Starting memorymanager" policy="None"
May 16 16:45:12.042470 kubelet[2273]: I0516 16:45:12.042462 2273 state_mem.go:35] "Initializing new in-memory state store"
May 16 16:45:12.054751 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 16 16:45:12.070278 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 16 16:45:12.086252 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 16 16:45:12.087811 kubelet[2273]: E0516 16:45:12.087767 2273 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 16 16:45:12.088096 kubelet[2273]: I0516 16:45:12.088073 2273 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 16:45:12.088149 kubelet[2273]: I0516 16:45:12.088090 2273 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 16:45:12.088390 kubelet[2273]: I0516 16:45:12.088359 2273 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 16:45:12.089760 kubelet[2273]: E0516 16:45:12.089716 2273 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 16 16:45:12.089760 kubelet[2273]: E0516 16:45:12.089755 2273 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 16 16:45:12.142845 systemd[1]: Created slice kubepods-burstable-podd0ccafc8ce3e8b09d3548565c66e8477.slice - libcontainer container kubepods-burstable-podd0ccafc8ce3e8b09d3548565c66e8477.slice.
May 16 16:45:12.151031 kubelet[2273]: E0516 16:45:12.150982 2273 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 16:45:12.190240 kubelet[2273]: I0516 16:45:12.190211 2273 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 16:45:12.190551 kubelet[2273]: E0516 16:45:12.190525 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
May 16 16:45:12.192717 kubelet[2273]: I0516 16:45:12.192685 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 16:45:12.192717 kubelet[2273]: I0516 16:45:12.192711 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 16:45:12.192783 kubelet[2273]: I0516 16:45:12.192728 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 16:45:12.192805 kubelet[2273]: I0516 16:45:12.192762 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0ccafc8ce3e8b09d3548565c66e8477-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d0ccafc8ce3e8b09d3548565c66e8477\") " pod="kube-system/kube-apiserver-localhost"
May 16 16:45:12.192827 kubelet[2273]: I0516 16:45:12.192803 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0ccafc8ce3e8b09d3548565c66e8477-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d0ccafc8ce3e8b09d3548565c66e8477\") " pod="kube-system/kube-apiserver-localhost"
May 16 16:45:12.192847 kubelet[2273]: I0516 16:45:12.192826 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0ccafc8ce3e8b09d3548565c66e8477-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d0ccafc8ce3e8b09d3548565c66e8477\") " pod="kube-system/kube-apiserver-localhost"
May 16 16:45:12.192871 kubelet[2273]: I0516 16:45:12.192851 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 16:45:12.192897 kubelet[2273]: I0516 16:45:12.192868 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost"
May 16 16:45:12.231870 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice - libcontainer container kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice.
May 16 16:45:12.233626 kubelet[2273]: E0516 16:45:12.233582 2273 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 16:45:12.294178 kubelet[2273]: I0516 16:45:12.294033 2273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost"
May 16 16:45:12.316125 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice - libcontainer container kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice.
May 16 16:45:12.318166 kubelet[2273]: E0516 16:45:12.318135 2273 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 16 16:45:12.373933 kubelet[2273]: E0516 16:45:12.373876 2273 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 16 16:45:12.392692 kubelet[2273]: I0516 16:45:12.392659 2273 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 16 16:45:12.393008 kubelet[2273]: E0516 16:45:12.392969 2273 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
May 16 16:45:12.399654 kubelet[2273]: E0516 16:45:12.399607 2273 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 16 16:45:12.424496 kubelet[2273]: E0516 16:45:12.424382 2273 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.104:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18400fb1ab28a1b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 16:45:11.174947251 +0000 UTC m=+0.437031678,LastTimestamp:2025-05-16 16:45:11.174947251 +0000 UTC m=+0.437031678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 16 16:45:12.452628 kubelet[2273]: E0516 16:45:12.452581 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:12.453327 containerd[1561]: time="2025-05-16T16:45:12.453273515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d0ccafc8ce3e8b09d3548565c66e8477,Namespace:kube-system,Attempt:0,}"
May 16 16:45:12.480345 containerd[1561]: time="2025-05-16T16:45:12.480288137Z" level=info msg="connecting to shim e34cfd512ae4e27f7a528a59275d810aadc113dbd1e914c1403098f939c7fae1" address="unix:///run/containerd/s/34d74b4fc7178dd2065e759ce33d7bb27defe21109a6b62801207c801b2c4bd8" namespace=k8s.io protocol=ttrpc version=3
May 16 16:45:12.507205 systemd[1]: Started
cri-containerd-e34cfd512ae4e27f7a528a59275d810aadc113dbd1e914c1403098f939c7fae1.scope - libcontainer container e34cfd512ae4e27f7a528a59275d810aadc113dbd1e914c1403098f939c7fae1. May 16 16:45:12.534328 kubelet[2273]: E0516 16:45:12.534277 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:12.534856 containerd[1561]: time="2025-05-16T16:45:12.534817068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}" May 16 16:45:12.553941 containerd[1561]: time="2025-05-16T16:45:12.553820613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d0ccafc8ce3e8b09d3548565c66e8477,Namespace:kube-system,Attempt:0,} returns sandbox id \"e34cfd512ae4e27f7a528a59275d810aadc113dbd1e914c1403098f939c7fae1\"" May 16 16:45:12.554891 kubelet[2273]: E0516 16:45:12.554866 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:12.558987 containerd[1561]: time="2025-05-16T16:45:12.558951427Z" level=info msg="CreateContainer within sandbox \"e34cfd512ae4e27f7a528a59275d810aadc113dbd1e914c1403098f939c7fae1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 16:45:12.575978 containerd[1561]: time="2025-05-16T16:45:12.575849643Z" level=info msg="Container e657f62f820d81a99c4d4b89abff9de5a860117c1a5f43ddc0c44f07f1c84812: CDI devices from CRI Config.CDIDevices: []" May 16 16:45:12.577878 containerd[1561]: time="2025-05-16T16:45:12.577827643Z" level=info msg="connecting to shim beae8905fc02504c7a53222eb17e0b60d6eb0f6857b165cb6f84ca609b5e5b5f" address="unix:///run/containerd/s/ab55ca3f4cef1a98da661050ce0063bbb43fc4ebd59ce8904d117c7e2183c7d9" namespace=k8s.io 
protocol=ttrpc version=3 May 16 16:45:12.586932 containerd[1561]: time="2025-05-16T16:45:12.586891885Z" level=info msg="CreateContainer within sandbox \"e34cfd512ae4e27f7a528a59275d810aadc113dbd1e914c1403098f939c7fae1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e657f62f820d81a99c4d4b89abff9de5a860117c1a5f43ddc0c44f07f1c84812\"" May 16 16:45:12.587518 containerd[1561]: time="2025-05-16T16:45:12.587496489Z" level=info msg="StartContainer for \"e657f62f820d81a99c4d4b89abff9de5a860117c1a5f43ddc0c44f07f1c84812\"" May 16 16:45:12.588631 containerd[1561]: time="2025-05-16T16:45:12.588606962Z" level=info msg="connecting to shim e657f62f820d81a99c4d4b89abff9de5a860117c1a5f43ddc0c44f07f1c84812" address="unix:///run/containerd/s/34d74b4fc7178dd2065e759ce33d7bb27defe21109a6b62801207c801b2c4bd8" protocol=ttrpc version=3 May 16 16:45:12.591704 kubelet[2273]: E0516 16:45:12.591668 2273 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="1.6s" May 16 16:45:12.606280 systemd[1]: Started cri-containerd-beae8905fc02504c7a53222eb17e0b60d6eb0f6857b165cb6f84ca609b5e5b5f.scope - libcontainer container beae8905fc02504c7a53222eb17e0b60d6eb0f6857b165cb6f84ca609b5e5b5f. May 16 16:45:12.611221 systemd[1]: Started cri-containerd-e657f62f820d81a99c4d4b89abff9de5a860117c1a5f43ddc0c44f07f1c84812.scope - libcontainer container e657f62f820d81a99c4d4b89abff9de5a860117c1a5f43ddc0c44f07f1c84812. 
May 16 16:45:12.619450 kubelet[2273]: E0516 16:45:12.619416 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:12.620064 containerd[1561]: time="2025-05-16T16:45:12.620009254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}" May 16 16:45:12.639518 containerd[1561]: time="2025-05-16T16:45:12.639454538Z" level=info msg="connecting to shim 19c2d06cb97816a605fdf882ec32fa4d6b2770d0091e6c1ddde4d491d5c52731" address="unix:///run/containerd/s/be45ebce3aeb9aaa208fa72f5cce1654423c444c3618ef2e48c46c908bc29937" namespace=k8s.io protocol=ttrpc version=3 May 16 16:45:12.662280 containerd[1561]: time="2025-05-16T16:45:12.662217004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"beae8905fc02504c7a53222eb17e0b60d6eb0f6857b165cb6f84ca609b5e5b5f\"" May 16 16:45:12.663338 kubelet[2273]: E0516 16:45:12.663289 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:12.670251 systemd[1]: Started cri-containerd-19c2d06cb97816a605fdf882ec32fa4d6b2770d0091e6c1ddde4d491d5c52731.scope - libcontainer container 19c2d06cb97816a605fdf882ec32fa4d6b2770d0091e6c1ddde4d491d5c52731. 
May 16 16:45:12.691868 containerd[1561]: time="2025-05-16T16:45:12.691808649Z" level=info msg="CreateContainer within sandbox \"beae8905fc02504c7a53222eb17e0b60d6eb0f6857b165cb6f84ca609b5e5b5f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 16:45:12.695610 containerd[1561]: time="2025-05-16T16:45:12.695574763Z" level=info msg="StartContainer for \"e657f62f820d81a99c4d4b89abff9de5a860117c1a5f43ddc0c44f07f1c84812\" returns successfully" May 16 16:45:12.702320 containerd[1561]: time="2025-05-16T16:45:12.702243682Z" level=info msg="Container 1e03649fc622016c8ade6eb95b95d79a36c5957acb4a5dc1e9317e2a3a7a1e1b: CDI devices from CRI Config.CDIDevices: []" May 16 16:45:12.710813 containerd[1561]: time="2025-05-16T16:45:12.710678864Z" level=info msg="CreateContainer within sandbox \"beae8905fc02504c7a53222eb17e0b60d6eb0f6857b165cb6f84ca609b5e5b5f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1e03649fc622016c8ade6eb95b95d79a36c5957acb4a5dc1e9317e2a3a7a1e1b\"" May 16 16:45:12.711563 containerd[1561]: time="2025-05-16T16:45:12.711462765Z" level=info msg="StartContainer for \"1e03649fc622016c8ade6eb95b95d79a36c5957acb4a5dc1e9317e2a3a7a1e1b\"" May 16 16:45:12.713464 containerd[1561]: time="2025-05-16T16:45:12.713414676Z" level=info msg="connecting to shim 1e03649fc622016c8ade6eb95b95d79a36c5957acb4a5dc1e9317e2a3a7a1e1b" address="unix:///run/containerd/s/ab55ca3f4cef1a98da661050ce0063bbb43fc4ebd59ce8904d117c7e2183c7d9" protocol=ttrpc version=3 May 16 16:45:12.740294 systemd[1]: Started cri-containerd-1e03649fc622016c8ade6eb95b95d79a36c5957acb4a5dc1e9317e2a3a7a1e1b.scope - libcontainer container 1e03649fc622016c8ade6eb95b95d79a36c5957acb4a5dc1e9317e2a3a7a1e1b. 
May 16 16:45:12.745254 containerd[1561]: time="2025-05-16T16:45:12.745192943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"19c2d06cb97816a605fdf882ec32fa4d6b2770d0091e6c1ddde4d491d5c52731\"" May 16 16:45:12.748978 kubelet[2273]: E0516 16:45:12.748928 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:12.795518 kubelet[2273]: I0516 16:45:12.795207 2273 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 16:45:12.877776 containerd[1561]: time="2025-05-16T16:45:12.877646307Z" level=info msg="CreateContainer within sandbox \"19c2d06cb97816a605fdf882ec32fa4d6b2770d0091e6c1ddde4d491d5c52731\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 16:45:12.976802 containerd[1561]: time="2025-05-16T16:45:12.976757872Z" level=info msg="StartContainer for \"1e03649fc622016c8ade6eb95b95d79a36c5957acb4a5dc1e9317e2a3a7a1e1b\" returns successfully" May 16 16:45:13.037677 kubelet[2273]: E0516 16:45:13.037632 2273 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:45:13.037803 kubelet[2273]: E0516 16:45:13.037793 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:13.039687 kubelet[2273]: E0516 16:45:13.039656 2273 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:45:13.039800 kubelet[2273]: E0516 16:45:13.039778 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:13.087281 containerd[1561]: time="2025-05-16T16:45:13.087207755Z" level=info msg="Container d6d12af2b388bad8c98827fc32544912b3c9ea4d4b44908c00825f52438d92b1: CDI devices from CRI Config.CDIDevices: []" May 16 16:45:13.095468 containerd[1561]: time="2025-05-16T16:45:13.095415941Z" level=info msg="CreateContainer within sandbox \"19c2d06cb97816a605fdf882ec32fa4d6b2770d0091e6c1ddde4d491d5c52731\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d6d12af2b388bad8c98827fc32544912b3c9ea4d4b44908c00825f52438d92b1\"" May 16 16:45:13.095940 containerd[1561]: time="2025-05-16T16:45:13.095902303Z" level=info msg="StartContainer for \"d6d12af2b388bad8c98827fc32544912b3c9ea4d4b44908c00825f52438d92b1\"" May 16 16:45:13.097184 containerd[1561]: time="2025-05-16T16:45:13.097124736Z" level=info msg="connecting to shim d6d12af2b388bad8c98827fc32544912b3c9ea4d4b44908c00825f52438d92b1" address="unix:///run/containerd/s/be45ebce3aeb9aaa208fa72f5cce1654423c444c3618ef2e48c46c908bc29937" protocol=ttrpc version=3 May 16 16:45:13.122187 systemd[1]: Started cri-containerd-d6d12af2b388bad8c98827fc32544912b3c9ea4d4b44908c00825f52438d92b1.scope - libcontainer container d6d12af2b388bad8c98827fc32544912b3c9ea4d4b44908c00825f52438d92b1. 
May 16 16:45:13.174070 containerd[1561]: time="2025-05-16T16:45:13.173520553Z" level=info msg="StartContainer for \"d6d12af2b388bad8c98827fc32544912b3c9ea4d4b44908c00825f52438d92b1\" returns successfully" May 16 16:45:14.046611 kubelet[2273]: E0516 16:45:14.046425 2273 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:45:14.046611 kubelet[2273]: E0516 16:45:14.046545 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:14.047336 kubelet[2273]: E0516 16:45:14.047254 2273 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:45:14.047413 kubelet[2273]: E0516 16:45:14.047401 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:14.457220 kubelet[2273]: E0516 16:45:14.457155 2273 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 16:45:14.550890 kubelet[2273]: I0516 16:45:14.550816 2273 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 16:45:14.550890 kubelet[2273]: E0516 16:45:14.550865 2273 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 16:45:14.560120 kubelet[2273]: E0516 16:45:14.560083 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:45:14.661184 kubelet[2273]: E0516 16:45:14.661115 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not 
found" May 16 16:45:14.761586 kubelet[2273]: E0516 16:45:14.761425 2273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:45:14.886826 kubelet[2273]: I0516 16:45:14.886772 2273 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 16:45:14.891716 kubelet[2273]: E0516 16:45:14.891690 2273 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 16 16:45:14.891716 kubelet[2273]: I0516 16:45:14.891712 2273 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 16:45:14.892797 kubelet[2273]: E0516 16:45:14.892776 2273 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 16 16:45:14.892797 kubelet[2273]: I0516 16:45:14.892791 2273 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 16:45:14.894286 kubelet[2273]: E0516 16:45:14.894266 2273 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 16 16:45:15.047204 kubelet[2273]: I0516 16:45:15.047095 2273 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 16:45:15.048722 kubelet[2273]: E0516 16:45:15.048690 2273 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 16 16:45:15.048855 kubelet[2273]: E0516 16:45:15.048828 2273 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:15.167488 kubelet[2273]: I0516 16:45:15.167413 2273 apiserver.go:52] "Watching apiserver" May 16 16:45:15.186404 kubelet[2273]: I0516 16:45:15.186353 2273 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 16:45:16.047629 kubelet[2273]: I0516 16:45:16.047594 2273 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 16:45:16.147280 kubelet[2273]: E0516 16:45:16.147236 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:16.578877 kubelet[2273]: I0516 16:45:16.578834 2273 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 16:45:16.624382 kubelet[2273]: E0516 16:45:16.624339 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:17.049678 kubelet[2273]: E0516 16:45:17.049646 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:17.050276 kubelet[2273]: E0516 16:45:17.049815 2273 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:18.163123 systemd[1]: Reload requested from client PID 2556 ('systemctl') (unit session-7.scope)... May 16 16:45:18.163138 systemd[1]: Reloading... May 16 16:45:18.239086 zram_generator::config[2602]: No configuration found. 
May 16 16:45:18.569295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 16:45:18.705395 systemd[1]: Reloading finished in 541 ms. May 16 16:45:18.740429 kubelet[2273]: I0516 16:45:18.740357 2273 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 16:45:18.740570 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:45:18.759617 systemd[1]: kubelet.service: Deactivated successfully. May 16 16:45:18.759938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:45:18.760000 systemd[1]: kubelet.service: Consumed 979ms CPU time, 134.5M memory peak. May 16 16:45:18.761954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:45:18.989432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:45:18.993547 (kubelet)[2644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 16:45:19.030883 kubelet[2644]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 16:45:19.030883 kubelet[2644]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 16:45:19.030883 kubelet[2644]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 16:45:19.030883 kubelet[2644]: I0516 16:45:19.030834 2644 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 16:45:19.052724 kubelet[2644]: I0516 16:45:19.052676 2644 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 16 16:45:19.052724 kubelet[2644]: I0516 16:45:19.052711 2644 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 16:45:19.052963 kubelet[2644]: I0516 16:45:19.052941 2644 server.go:956] "Client rotation is on, will bootstrap in background" May 16 16:45:19.055913 kubelet[2644]: I0516 16:45:19.055878 2644 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 16 16:45:19.058698 kubelet[2644]: I0516 16:45:19.058457 2644 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 16:45:19.062545 kubelet[2644]: I0516 16:45:19.062529 2644 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 16:45:19.068027 kubelet[2644]: I0516 16:45:19.068003 2644 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 16:45:19.068366 kubelet[2644]: I0516 16:45:19.068333 2644 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 16:45:19.068572 kubelet[2644]: I0516 16:45:19.068431 2644 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 16:45:19.068679 kubelet[2644]: I0516 16:45:19.068669 2644 topology_manager.go:138] "Creating topology manager with none policy" May 16 16:45:19.068726 
kubelet[2644]: I0516 16:45:19.068718 2644 container_manager_linux.go:303] "Creating device plugin manager" May 16 16:45:19.068812 kubelet[2644]: I0516 16:45:19.068803 2644 state_mem.go:36] "Initialized new in-memory state store" May 16 16:45:19.069036 kubelet[2644]: I0516 16:45:19.069022 2644 kubelet.go:480] "Attempting to sync node with API server" May 16 16:45:19.069119 kubelet[2644]: I0516 16:45:19.069109 2644 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 16:45:19.069176 kubelet[2644]: I0516 16:45:19.069168 2644 kubelet.go:386] "Adding apiserver pod source" May 16 16:45:19.069229 kubelet[2644]: I0516 16:45:19.069220 2644 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 16:45:19.070236 kubelet[2644]: I0516 16:45:19.070214 2644 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 16:45:19.070859 kubelet[2644]: I0516 16:45:19.070773 2644 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 16 16:45:19.076603 kubelet[2644]: I0516 16:45:19.076553 2644 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 16:45:19.077131 kubelet[2644]: I0516 16:45:19.077116 2644 server.go:1289] "Started kubelet" May 16 16:45:19.077278 kubelet[2644]: I0516 16:45:19.077256 2644 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 16 16:45:19.078230 kubelet[2644]: I0516 16:45:19.078135 2644 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 16:45:19.078341 kubelet[2644]: I0516 16:45:19.078328 2644 server.go:317] "Adding debug handlers to kubelet server" May 16 16:45:19.078493 kubelet[2644]: I0516 16:45:19.078486 2644 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 16:45:19.078568 
kubelet[2644]: I0516 16:45:19.078525 2644 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 16:45:19.078791 kubelet[2644]: I0516 16:45:19.078769 2644 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 16:45:19.081266 kubelet[2644]: I0516 16:45:19.081239 2644 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 16:45:19.081362 kubelet[2644]: I0516 16:45:19.081343 2644 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 16:45:19.081512 kubelet[2644]: I0516 16:45:19.081467 2644 reconciler.go:26] "Reconciler: start to sync state" May 16 16:45:19.084069 kubelet[2644]: I0516 16:45:19.084006 2644 factory.go:223] Registration of the systemd container factory successfully May 16 16:45:19.084156 kubelet[2644]: I0516 16:45:19.084117 2644 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 16:45:19.085337 kubelet[2644]: I0516 16:45:19.085292 2644 factory.go:223] Registration of the containerd container factory successfully May 16 16:45:19.096684 kubelet[2644]: I0516 16:45:19.096638 2644 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 16 16:45:19.098448 kubelet[2644]: I0516 16:45:19.098403 2644 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 16 16:45:19.098448 kubelet[2644]: I0516 16:45:19.098438 2644 status_manager.go:230] "Starting to sync pod status with apiserver" May 16 16:45:19.098521 kubelet[2644]: I0516 16:45:19.098489 2644 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 16:45:19.098521 kubelet[2644]: I0516 16:45:19.098503 2644 kubelet.go:2436] "Starting kubelet main sync loop" May 16 16:45:19.098913 kubelet[2644]: E0516 16:45:19.098882 2644 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 16:45:19.117722 kubelet[2644]: I0516 16:45:19.117676 2644 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 16:45:19.117722 kubelet[2644]: I0516 16:45:19.117692 2644 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 16:45:19.117722 kubelet[2644]: I0516 16:45:19.117709 2644 state_mem.go:36] "Initialized new in-memory state store" May 16 16:45:19.117898 kubelet[2644]: I0516 16:45:19.117822 2644 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 16:45:19.117898 kubelet[2644]: I0516 16:45:19.117831 2644 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 16:45:19.117898 kubelet[2644]: I0516 16:45:19.117847 2644 policy_none.go:49] "None policy: Start" May 16 16:45:19.117898 kubelet[2644]: I0516 16:45:19.117856 2644 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 16:45:19.117898 kubelet[2644]: I0516 16:45:19.117866 2644 state_mem.go:35] "Initializing new in-memory state store" May 16 16:45:19.118006 kubelet[2644]: I0516 16:45:19.117945 2644 state_mem.go:75] "Updated machine memory state" May 16 16:45:19.122447 kubelet[2644]: E0516 16:45:19.122424 2644 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 16 16:45:19.123034 kubelet[2644]: I0516 16:45:19.122780 2644 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 16:45:19.123034 kubelet[2644]: I0516 16:45:19.122796 2644 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 16:45:19.123034 kubelet[2644]: I0516 16:45:19.123018 2644 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 16:45:19.123795 kubelet[2644]: E0516 16:45:19.123759 2644 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 16:45:19.168157 sudo[2684]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 16 16:45:19.168562 sudo[2684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 16 16:45:19.200519 kubelet[2644]: I0516 16:45:19.200475 2644 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 16:45:19.201282 kubelet[2644]: I0516 16:45:19.201237 2644 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 16:45:19.201282 kubelet[2644]: I0516 16:45:19.201248 2644 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 16:45:19.208021 kubelet[2644]: E0516 16:45:19.207947 2644 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 16:45:19.211562 kubelet[2644]: E0516 16:45:19.211538 2644 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 16:45:19.228643 kubelet[2644]: I0516 16:45:19.228614 2644 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 16:45:19.235298 kubelet[2644]: I0516 16:45:19.235262 2644 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 16 16:45:19.235381 kubelet[2644]: I0516 16:45:19.235342 2644 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 16:45:19.384036 kubelet[2644]: I0516 16:45:19.383916 2644 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0ccafc8ce3e8b09d3548565c66e8477-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d0ccafc8ce3e8b09d3548565c66e8477\") " pod="kube-system/kube-apiserver-localhost" May 16 16:45:19.384036 kubelet[2644]: I0516 16:45:19.383967 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:45:19.384036 kubelet[2644]: I0516 16:45:19.383986 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:45:19.384036 kubelet[2644]: I0516 16:45:19.384003 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:45:19.384036 kubelet[2644]: I0516 16:45:19.384022 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:45:19.384562 kubelet[2644]: 
I0516 16:45:19.384038 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:45:19.384562 kubelet[2644]: I0516 16:45:19.384071 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 16 16:45:19.384562 kubelet[2644]: I0516 16:45:19.384085 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0ccafc8ce3e8b09d3548565c66e8477-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d0ccafc8ce3e8b09d3548565c66e8477\") " pod="kube-system/kube-apiserver-localhost" May 16 16:45:19.384562 kubelet[2644]: I0516 16:45:19.384101 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0ccafc8ce3e8b09d3548565c66e8477-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d0ccafc8ce3e8b09d3548565c66e8477\") " pod="kube-system/kube-apiserver-localhost" May 16 16:45:19.508975 kubelet[2644]: E0516 16:45:19.508932 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:19.512348 kubelet[2644]: E0516 16:45:19.512262 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 
16:45:19.512348 kubelet[2644]: E0516 16:45:19.512269 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:19.630557 sudo[2684]: pam_unix(sudo:session): session closed for user root May 16 16:45:20.069747 kubelet[2644]: I0516 16:45:20.069690 2644 apiserver.go:52] "Watching apiserver" May 16 16:45:20.081936 kubelet[2644]: I0516 16:45:20.081896 2644 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 16:45:20.108790 kubelet[2644]: E0516 16:45:20.108749 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:20.108853 kubelet[2644]: E0516 16:45:20.108835 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:20.109115 kubelet[2644]: E0516 16:45:20.109093 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:20.130941 kubelet[2644]: I0516 16:45:20.130858 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.130843182 podStartE2EDuration="1.130843182s" podCreationTimestamp="2025-05-16 16:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:45:20.125292971 +0000 UTC m=+1.127328537" watchObservedRunningTime="2025-05-16 16:45:20.130843182 +0000 UTC m=+1.132878758" May 16 16:45:20.139209 kubelet[2644]: I0516 16:45:20.139142 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.139065935 podStartE2EDuration="4.139065935s" podCreationTimestamp="2025-05-16 16:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:45:20.131076399 +0000 UTC m=+1.133111975" watchObservedRunningTime="2025-05-16 16:45:20.139065935 +0000 UTC m=+1.141101501" May 16 16:45:20.139385 kubelet[2644]: I0516 16:45:20.139271 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.13926603 podStartE2EDuration="4.13926603s" podCreationTimestamp="2025-05-16 16:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:45:20.1392126 +0000 UTC m=+1.141248166" watchObservedRunningTime="2025-05-16 16:45:20.13926603 +0000 UTC m=+1.141301596" May 16 16:45:21.110279 kubelet[2644]: E0516 16:45:21.110227 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:21.110916 kubelet[2644]: E0516 16:45:21.110584 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:21.268545 sudo[1766]: pam_unix(sudo:session): session closed for user root May 16 16:45:21.269955 sshd[1765]: Connection closed by 10.0.0.1 port 56474 May 16 16:45:21.270389 sshd-session[1763]: pam_unix(sshd:session): session closed for user core May 16 16:45:21.275437 systemd[1]: sshd@6-10.0.0.104:22-10.0.0.1:56474.service: Deactivated successfully. May 16 16:45:21.277886 systemd[1]: session-7.scope: Deactivated successfully. May 16 16:45:21.278181 systemd[1]: session-7.scope: Consumed 4.965s CPU time, 260.8M memory peak. 
May 16 16:45:21.279706 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. May 16 16:45:21.281365 systemd-logind[1534]: Removed session 7. May 16 16:45:24.543285 kubelet[2644]: E0516 16:45:24.543248 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:24.772674 kubelet[2644]: I0516 16:45:24.772632 2644 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 16:45:24.773233 containerd[1561]: time="2025-05-16T16:45:24.773190068Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 16:45:24.773657 kubelet[2644]: I0516 16:45:24.773331 2644 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 16:45:25.115839 kubelet[2644]: E0516 16:45:25.115761 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:25.956763 systemd[1]: Created slice kubepods-besteffort-podaa1074e9_e940_4210_88c9_fe7dafcd40f9.slice - libcontainer container kubepods-besteffort-podaa1074e9_e940_4210_88c9_fe7dafcd40f9.slice. May 16 16:45:25.981067 systemd[1]: Created slice kubepods-burstable-pod99e30828_ab61_432c_b51c_aa75e8dccc1d.slice - libcontainer container kubepods-burstable-pod99e30828_ab61_432c_b51c_aa75e8dccc1d.slice. May 16 16:45:25.995936 systemd[1]: Created slice kubepods-besteffort-pod301231b7_54b3_4138_801b_5ba28862e91f.slice - libcontainer container kubepods-besteffort-pod301231b7_54b3_4138_801b_5ba28862e91f.slice. 
May 16 16:45:26.025077 kubelet[2644]: I0516 16:45:26.024989 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa1074e9-e940-4210-88c9-fe7dafcd40f9-lib-modules\") pod \"kube-proxy-fnmsv\" (UID: \"aa1074e9-e940-4210-88c9-fe7dafcd40f9\") " pod="kube-system/kube-proxy-fnmsv" May 16 16:45:26.025077 kubelet[2644]: I0516 16:45:26.025082 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99e30828-ab61-432c-b51c-aa75e8dccc1d-cilium-config-path\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025549 kubelet[2644]: I0516 16:45:26.025111 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99e30828-ab61-432c-b51c-aa75e8dccc1d-hubble-tls\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025549 kubelet[2644]: I0516 16:45:26.025136 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-hostproc\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025549 kubelet[2644]: I0516 16:45:26.025157 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-cilium-cgroup\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025549 kubelet[2644]: I0516 16:45:26.025177 2644 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-lib-modules\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025549 kubelet[2644]: I0516 16:45:26.025198 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa1074e9-e940-4210-88c9-fe7dafcd40f9-kube-proxy\") pod \"kube-proxy-fnmsv\" (UID: \"aa1074e9-e940-4210-88c9-fe7dafcd40f9\") " pod="kube-system/kube-proxy-fnmsv" May 16 16:45:26.025549 kubelet[2644]: I0516 16:45:26.025219 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5jb5\" (UniqueName: \"kubernetes.io/projected/aa1074e9-e940-4210-88c9-fe7dafcd40f9-kube-api-access-x5jb5\") pod \"kube-proxy-fnmsv\" (UID: \"aa1074e9-e940-4210-88c9-fe7dafcd40f9\") " pod="kube-system/kube-proxy-fnmsv" May 16 16:45:26.025682 kubelet[2644]: I0516 16:45:26.025239 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-bpf-maps\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025682 kubelet[2644]: I0516 16:45:26.025287 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-cni-path\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025682 kubelet[2644]: I0516 16:45:26.025335 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmqmx\" (UniqueName: 
\"kubernetes.io/projected/99e30828-ab61-432c-b51c-aa75e8dccc1d-kube-api-access-gmqmx\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025682 kubelet[2644]: I0516 16:45:26.025376 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99e30828-ab61-432c-b51c-aa75e8dccc1d-clustermesh-secrets\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025682 kubelet[2644]: I0516 16:45:26.025424 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-xtables-lock\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025799 kubelet[2644]: I0516 16:45:26.025448 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/301231b7-54b3-4138-801b-5ba28862e91f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pzntn\" (UID: \"301231b7-54b3-4138-801b-5ba28862e91f\") " pod="kube-system/cilium-operator-6c4d7847fc-pzntn" May 16 16:45:26.025799 kubelet[2644]: I0516 16:45:26.025467 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2vj5\" (UniqueName: \"kubernetes.io/projected/301231b7-54b3-4138-801b-5ba28862e91f-kube-api-access-p2vj5\") pod \"cilium-operator-6c4d7847fc-pzntn\" (UID: \"301231b7-54b3-4138-801b-5ba28862e91f\") " pod="kube-system/cilium-operator-6c4d7847fc-pzntn" May 16 16:45:26.025799 kubelet[2644]: I0516 16:45:26.025483 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/aa1074e9-e940-4210-88c9-fe7dafcd40f9-xtables-lock\") pod \"kube-proxy-fnmsv\" (UID: \"aa1074e9-e940-4210-88c9-fe7dafcd40f9\") " pod="kube-system/kube-proxy-fnmsv" May 16 16:45:26.025799 kubelet[2644]: I0516 16:45:26.025502 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-cilium-run\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025799 kubelet[2644]: I0516 16:45:26.025528 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-etc-cni-netd\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025914 kubelet[2644]: I0516 16:45:26.025542 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-host-proc-sys-net\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.025914 kubelet[2644]: I0516 16:45:26.025567 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-host-proc-sys-kernel\") pod \"cilium-7dbrn\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " pod="kube-system/cilium-7dbrn" May 16 16:45:26.117760 kubelet[2644]: E0516 16:45:26.117727 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:26.277042 kubelet[2644]: E0516 16:45:26.276901 2644 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:26.277914 containerd[1561]: time="2025-05-16T16:45:26.277719238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fnmsv,Uid:aa1074e9-e940-4210-88c9-fe7dafcd40f9,Namespace:kube-system,Attempt:0,}" May 16 16:45:26.286160 kubelet[2644]: E0516 16:45:26.286136 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:26.286722 containerd[1561]: time="2025-05-16T16:45:26.286676549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7dbrn,Uid:99e30828-ab61-432c-b51c-aa75e8dccc1d,Namespace:kube-system,Attempt:0,}" May 16 16:45:26.299906 kubelet[2644]: E0516 16:45:26.299872 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:26.300274 containerd[1561]: time="2025-05-16T16:45:26.300236418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pzntn,Uid:301231b7-54b3-4138-801b-5ba28862e91f,Namespace:kube-system,Attempt:0,}" May 16 16:45:26.346217 kubelet[2644]: E0516 16:45:26.346177 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:26.589926 kubelet[2644]: E0516 16:45:26.589779 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:27.119709 kubelet[2644]: E0516 16:45:27.119581 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:27.120134 kubelet[2644]: E0516 16:45:27.119740 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:28.121062 kubelet[2644]: E0516 16:45:28.121010 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:28.356159 containerd[1561]: time="2025-05-16T16:45:28.356069558Z" level=info msg="connecting to shim 5e46003e5b3b309a1647504dc70cb52705a4b8fefd8b6ffce21540bc1b82463a" address="unix:///run/containerd/s/394ead0eae1e61103aec2e1d1356f1761b15c09b6d6b8802a6aca229b73c6757" namespace=k8s.io protocol=ttrpc version=3 May 16 16:45:28.372995 containerd[1561]: time="2025-05-16T16:45:28.372827110Z" level=info msg="connecting to shim c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5" address="unix:///run/containerd/s/7f4b004e32512b4454134ef70e6d1e1fe833b3375dee43d37c50dbb70f3c18c3" namespace=k8s.io protocol=ttrpc version=3 May 16 16:45:28.410197 systemd[1]: Started cri-containerd-5e46003e5b3b309a1647504dc70cb52705a4b8fefd8b6ffce21540bc1b82463a.scope - libcontainer container 5e46003e5b3b309a1647504dc70cb52705a4b8fefd8b6ffce21540bc1b82463a. May 16 16:45:28.413727 systemd[1]: Started cri-containerd-c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5.scope - libcontainer container c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5. 
May 16 16:45:28.448576 containerd[1561]: time="2025-05-16T16:45:28.448512868Z" level=info msg="connecting to shim 93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366" address="unix:///run/containerd/s/8b750024435bcc0f1970c27c74842b7e3f4cf7c6002fb4690218ef0bca65f16a" namespace=k8s.io protocol=ttrpc version=3 May 16 16:45:28.471471 containerd[1561]: time="2025-05-16T16:45:28.471418225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fnmsv,Uid:aa1074e9-e940-4210-88c9-fe7dafcd40f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e46003e5b3b309a1647504dc70cb52705a4b8fefd8b6ffce21540bc1b82463a\"" May 16 16:45:28.472608 kubelet[2644]: E0516 16:45:28.472581 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:28.488181 systemd[1]: Started cri-containerd-93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366.scope - libcontainer container 93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366. 
May 16 16:45:28.667773 containerd[1561]: time="2025-05-16T16:45:28.667739824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pzntn,Uid:301231b7-54b3-4138-801b-5ba28862e91f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5\"" May 16 16:45:28.668311 kubelet[2644]: E0516 16:45:28.668288 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:28.669137 containerd[1561]: time="2025-05-16T16:45:28.669073660Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 16:45:28.740924 containerd[1561]: time="2025-05-16T16:45:28.740868691Z" level=info msg="CreateContainer within sandbox \"5e46003e5b3b309a1647504dc70cb52705a4b8fefd8b6ffce21540bc1b82463a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 16:45:28.785741 containerd[1561]: time="2025-05-16T16:45:28.785688906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7dbrn,Uid:99e30828-ab61-432c-b51c-aa75e8dccc1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\"" May 16 16:45:28.786374 kubelet[2644]: E0516 16:45:28.786350 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:29.148947 containerd[1561]: time="2025-05-16T16:45:29.148890669Z" level=info msg="Container 4dcbb7e02eb73819696109e1e8ce4ceba7f68f763330f379648e8dda5ef89596: CDI devices from CRI Config.CDIDevices: []" May 16 16:45:29.159334 containerd[1561]: time="2025-05-16T16:45:29.159277066Z" level=info msg="CreateContainer within sandbox 
\"5e46003e5b3b309a1647504dc70cb52705a4b8fefd8b6ffce21540bc1b82463a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4dcbb7e02eb73819696109e1e8ce4ceba7f68f763330f379648e8dda5ef89596\"" May 16 16:45:29.160170 containerd[1561]: time="2025-05-16T16:45:29.160097450Z" level=info msg="StartContainer for \"4dcbb7e02eb73819696109e1e8ce4ceba7f68f763330f379648e8dda5ef89596\"" May 16 16:45:29.162015 containerd[1561]: time="2025-05-16T16:45:29.161973728Z" level=info msg="connecting to shim 4dcbb7e02eb73819696109e1e8ce4ceba7f68f763330f379648e8dda5ef89596" address="unix:///run/containerd/s/394ead0eae1e61103aec2e1d1356f1761b15c09b6d6b8802a6aca229b73c6757" protocol=ttrpc version=3 May 16 16:45:29.189274 systemd[1]: Started cri-containerd-4dcbb7e02eb73819696109e1e8ce4ceba7f68f763330f379648e8dda5ef89596.scope - libcontainer container 4dcbb7e02eb73819696109e1e8ce4ceba7f68f763330f379648e8dda5ef89596. May 16 16:45:29.321226 containerd[1561]: time="2025-05-16T16:45:29.321175472Z" level=info msg="StartContainer for \"4dcbb7e02eb73819696109e1e8ce4ceba7f68f763330f379648e8dda5ef89596\" returns successfully" May 16 16:45:30.129818 kubelet[2644]: E0516 16:45:30.129741 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:30.139325 kubelet[2644]: I0516 16:45:30.139171 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fnmsv" podStartSLOduration=5.139149966 podStartE2EDuration="5.139149966s" podCreationTimestamp="2025-05-16 16:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:45:30.138813655 +0000 UTC m=+11.140849241" watchObservedRunningTime="2025-05-16 16:45:30.139149966 +0000 UTC m=+11.141185532" May 16 16:45:30.355696 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1340258046.mount: Deactivated successfully. May 16 16:45:30.656824 containerd[1561]: time="2025-05-16T16:45:30.656760644Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:45:30.657477 containerd[1561]: time="2025-05-16T16:45:30.657428476Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 16 16:45:30.658640 containerd[1561]: time="2025-05-16T16:45:30.658601230Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:45:30.659677 containerd[1561]: time="2025-05-16T16:45:30.659632104Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.990531442s" May 16 16:45:30.659677 containerd[1561]: time="2025-05-16T16:45:30.659666990Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 16 16:45:30.660603 containerd[1561]: time="2025-05-16T16:45:30.660522991Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 16:45:30.664438 containerd[1561]: time="2025-05-16T16:45:30.664388746Z" 
level=info msg="CreateContainer within sandbox \"c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 16:45:30.674021 containerd[1561]: time="2025-05-16T16:45:30.673970258Z" level=info msg="Container 478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400: CDI devices from CRI Config.CDIDevices: []" May 16 16:45:30.680321 containerd[1561]: time="2025-05-16T16:45:30.680267164Z" level=info msg="CreateContainer within sandbox \"c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\"" May 16 16:45:30.680808 containerd[1561]: time="2025-05-16T16:45:30.680775212Z" level=info msg="StartContainer for \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\"" May 16 16:45:30.681602 containerd[1561]: time="2025-05-16T16:45:30.681569505Z" level=info msg="connecting to shim 478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400" address="unix:///run/containerd/s/7f4b004e32512b4454134ef70e6d1e1fe833b3375dee43d37c50dbb70f3c18c3" protocol=ttrpc version=3 May 16 16:45:30.705225 systemd[1]: Started cri-containerd-478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400.scope - libcontainer container 478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400. 
May 16 16:45:30.734487 containerd[1561]: time="2025-05-16T16:45:30.734408905Z" level=info msg="StartContainer for \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\" returns successfully" May 16 16:45:31.132687 kubelet[2644]: E0516 16:45:31.132513 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:31.132687 kubelet[2644]: E0516 16:45:31.132571 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:32.134548 kubelet[2644]: E0516 16:45:32.134513 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:45:34.282079 update_engine[1538]: I20250516 16:45:34.281950 1538 update_attempter.cc:509] Updating boot flags... May 16 16:45:41.143717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052943222.mount: Deactivated successfully. 
May 16 16:45:45.236298 containerd[1561]: time="2025-05-16T16:45:45.236237441Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:45:45.270014 containerd[1561]: time="2025-05-16T16:45:45.269924444Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 16 16:45:45.299073 containerd[1561]: time="2025-05-16T16:45:45.298984074Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:45:45.300577 containerd[1561]: time="2025-05-16T16:45:45.300535111Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.639978666s"
May 16 16:45:45.300577 containerd[1561]: time="2025-05-16T16:45:45.300570799Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 16 16:45:45.348688 containerd[1561]: time="2025-05-16T16:45:45.348631532Z" level=info msg="CreateContainer within sandbox \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 16:45:45.514135 containerd[1561]: time="2025-05-16T16:45:45.513990018Z" level=info msg="Container 1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51: CDI devices from CRI Config.CDIDevices: []"
May 16 16:45:45.517854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654432256.mount: Deactivated successfully.
May 16 16:45:45.928062 containerd[1561]: time="2025-05-16T16:45:45.927984140Z" level=info msg="CreateContainer within sandbox \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\""
May 16 16:45:45.928486 containerd[1561]: time="2025-05-16T16:45:45.928376470Z" level=info msg="StartContainer for \"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\""
May 16 16:45:45.929184 containerd[1561]: time="2025-05-16T16:45:45.929165039Z" level=info msg="connecting to shim 1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51" address="unix:///run/containerd/s/8b750024435bcc0f1970c27c74842b7e3f4cf7c6002fb4690218ef0bca65f16a" protocol=ttrpc version=3
May 16 16:45:45.949192 systemd[1]: Started cri-containerd-1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51.scope - libcontainer container 1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51.
May 16 16:45:46.166826 systemd[1]: cri-containerd-1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51.scope: Deactivated successfully.
May 16 16:45:46.167425 systemd[1]: cri-containerd-1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51.scope: Consumed 26ms CPU time, 7M memory peak, 4K read from disk, 3.2M written to disk.
May 16 16:45:46.686354 containerd[1561]: time="2025-05-16T16:45:46.686164992Z" level=info msg="StartContainer for \"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\" returns successfully"
May 16 16:45:46.714130 containerd[1561]: time="2025-05-16T16:45:46.714074023Z" level=info msg="received exit event container_id:\"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\" id:\"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\" pid:3148 exited_at:{seconds:1747413946 nanos:169472683}"
May 16 16:45:46.725418 containerd[1561]: time="2025-05-16T16:45:46.725358417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\" id:\"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\" pid:3148 exited_at:{seconds:1747413946 nanos:169472683}"
May 16 16:45:46.754422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51-rootfs.mount: Deactivated successfully.
May 16 16:45:47.780843 kubelet[2644]: E0516 16:45:47.780790 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:47.883835 kubelet[2644]: I0516 16:45:47.883767 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pzntn" podStartSLOduration=20.892144532 podStartE2EDuration="22.883751434s" podCreationTimestamp="2025-05-16 16:45:25 +0000 UTC" firstStartedPulling="2025-05-16 16:45:28.668763218 +0000 UTC m=+9.670798784" lastFinishedPulling="2025-05-16 16:45:30.66037012 +0000 UTC m=+11.662405686" observedRunningTime="2025-05-16 16:45:31.175983652 +0000 UTC m=+12.178019228" watchObservedRunningTime="2025-05-16 16:45:47.883751434 +0000 UTC m=+28.885787000"
May 16 16:45:48.691501 kubelet[2644]: E0516 16:45:48.691466 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:48.986186 containerd[1561]: time="2025-05-16T16:45:48.986036947Z" level=info msg="CreateContainer within sandbox \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 16:45:49.083074 containerd[1561]: time="2025-05-16T16:45:49.082923790Z" level=info msg="Container d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b: CDI devices from CRI Config.CDIDevices: []"
May 16 16:45:49.206693 containerd[1561]: time="2025-05-16T16:45:49.206565740Z" level=info msg="CreateContainer within sandbox \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\""
May 16 16:45:49.208658 containerd[1561]: time="2025-05-16T16:45:49.208455472Z" level=info msg="StartContainer for \"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\""
May 16 16:45:49.210523 containerd[1561]: time="2025-05-16T16:45:49.210272074Z" level=info msg="connecting to shim d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b" address="unix:///run/containerd/s/8b750024435bcc0f1970c27c74842b7e3f4cf7c6002fb4690218ef0bca65f16a" protocol=ttrpc version=3
May 16 16:45:49.243195 systemd[1]: Started cri-containerd-d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b.scope - libcontainer container d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b.
May 16 16:45:49.274739 containerd[1561]: time="2025-05-16T16:45:49.274693264Z" level=info msg="StartContainer for \"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\" returns successfully"
May 16 16:45:49.289476 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 16:45:49.290150 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 16:45:49.290429 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 16 16:45:49.292705 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 16:45:49.294759 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 16:45:49.296037 containerd[1561]: time="2025-05-16T16:45:49.295996957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\" id:\"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\" pid:3193 exited_at:{seconds:1747413949 nanos:295728190}"
May 16 16:45:49.296106 systemd[1]: cri-containerd-d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b.scope: Deactivated successfully.
May 16 16:45:49.296231 containerd[1561]: time="2025-05-16T16:45:49.296174802Z" level=info msg="received exit event container_id:\"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\" id:\"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\" pid:3193 exited_at:{seconds:1747413949 nanos:295728190}"
May 16 16:45:49.323017 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 16:45:49.479462 systemd[1]: Started sshd@7-10.0.0.104:22-10.0.0.1:38936.service - OpenSSH per-connection server daemon (10.0.0.1:38936).
May 16 16:45:49.523795 sshd[3230]: Accepted publickey for core from 10.0.0.1 port 38936 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:45:49.525721 sshd-session[3230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:45:49.530621 systemd-logind[1534]: New session 8 of user core.
May 16 16:45:49.541157 systemd[1]: Started session-8.scope - Session 8 of User core.
May 16 16:45:49.669159 sshd[3232]: Connection closed by 10.0.0.1 port 38936
May 16 16:45:49.669492 sshd-session[3230]: pam_unix(sshd:session): session closed for user core
May 16 16:45:49.674109 systemd[1]: sshd@7-10.0.0.104:22-10.0.0.1:38936.service: Deactivated successfully.
May 16 16:45:49.676570 systemd[1]: session-8.scope: Deactivated successfully.
May 16 16:45:49.677574 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit.
May 16 16:45:49.679251 systemd-logind[1534]: Removed session 8.
May 16 16:45:49.695420 kubelet[2644]: E0516 16:45:49.695253 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:49.703832 containerd[1561]: time="2025-05-16T16:45:49.703770139Z" level=info msg="CreateContainer within sandbox \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 16:45:49.715993 containerd[1561]: time="2025-05-16T16:45:49.715920153Z" level=info msg="Container 96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71: CDI devices from CRI Config.CDIDevices: []"
May 16 16:45:49.725957 containerd[1561]: time="2025-05-16T16:45:49.725899989Z" level=info msg="CreateContainer within sandbox \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\""
May 16 16:45:49.726412 containerd[1561]: time="2025-05-16T16:45:49.726386555Z" level=info msg="StartContainer for \"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\""
May 16 16:45:49.727981 containerd[1561]: time="2025-05-16T16:45:49.727953929Z" level=info msg="connecting to shim 96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71" address="unix:///run/containerd/s/8b750024435bcc0f1970c27c74842b7e3f4cf7c6002fb4690218ef0bca65f16a" protocol=ttrpc version=3
May 16 16:45:49.763239 systemd[1]: Started cri-containerd-96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71.scope - libcontainer container 96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71.
May 16 16:45:49.805579 systemd[1]: cri-containerd-96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71.scope: Deactivated successfully.
May 16 16:45:49.805803 containerd[1561]: time="2025-05-16T16:45:49.805749089Z" level=info msg="StartContainer for \"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\" returns successfully"
May 16 16:45:49.808128 containerd[1561]: time="2025-05-16T16:45:49.808094378Z" level=info msg="received exit event container_id:\"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\" id:\"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\" pid:3258 exited_at:{seconds:1747413949 nanos:807883281}"
May 16 16:45:49.808341 containerd[1561]: time="2025-05-16T16:45:49.808274919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\" id:\"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\" pid:3258 exited_at:{seconds:1747413949 nanos:807883281}"
May 16 16:45:50.083892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b-rootfs.mount: Deactivated successfully.
May 16 16:45:50.700457 kubelet[2644]: E0516 16:45:50.700424 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:50.706304 containerd[1561]: time="2025-05-16T16:45:50.706210140Z" level=info msg="CreateContainer within sandbox \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 16:45:50.745252 containerd[1561]: time="2025-05-16T16:45:50.745198736Z" level=info msg="Container bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890: CDI devices from CRI Config.CDIDevices: []"
May 16 16:45:50.755934 containerd[1561]: time="2025-05-16T16:45:50.755887893Z" level=info msg="CreateContainer within sandbox \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\""
May 16 16:45:50.756641 containerd[1561]: time="2025-05-16T16:45:50.756490938Z" level=info msg="StartContainer for \"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\""
May 16 16:45:50.757530 containerd[1561]: time="2025-05-16T16:45:50.757500290Z" level=info msg="connecting to shim bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890" address="unix:///run/containerd/s/8b750024435bcc0f1970c27c74842b7e3f4cf7c6002fb4690218ef0bca65f16a" protocol=ttrpc version=3
May 16 16:45:50.780180 systemd[1]: Started cri-containerd-bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890.scope - libcontainer container bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890.
May 16 16:45:50.809128 systemd[1]: cri-containerd-bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890.scope: Deactivated successfully.
May 16 16:45:50.809687 containerd[1561]: time="2025-05-16T16:45:50.809644287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\" id:\"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\" pid:3297 exited_at:{seconds:1747413950 nanos:809399937}"
May 16 16:45:50.815325 containerd[1561]: time="2025-05-16T16:45:50.815296264Z" level=info msg="received exit event container_id:\"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\" id:\"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\" pid:3297 exited_at:{seconds:1747413950 nanos:809399937}"
May 16 16:45:50.822997 containerd[1561]: time="2025-05-16T16:45:50.822951324Z" level=info msg="StartContainer for \"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\" returns successfully"
May 16 16:45:50.836337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890-rootfs.mount: Deactivated successfully.
May 16 16:45:51.705452 kubelet[2644]: E0516 16:45:51.705415 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:51.710548 containerd[1561]: time="2025-05-16T16:45:51.710478137Z" level=info msg="CreateContainer within sandbox \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 16:45:51.723213 containerd[1561]: time="2025-05-16T16:45:51.723152406Z" level=info msg="Container 76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651: CDI devices from CRI Config.CDIDevices: []"
May 16 16:45:51.727791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668780869.mount: Deactivated successfully.
May 16 16:45:51.729977 containerd[1561]: time="2025-05-16T16:45:51.729937495Z" level=info msg="CreateContainer within sandbox \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\""
May 16 16:45:51.731333 containerd[1561]: time="2025-05-16T16:45:51.730472072Z" level=info msg="StartContainer for \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\""
May 16 16:45:51.731576 containerd[1561]: time="2025-05-16T16:45:51.731552136Z" level=info msg="connecting to shim 76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651" address="unix:///run/containerd/s/8b750024435bcc0f1970c27c74842b7e3f4cf7c6002fb4690218ef0bca65f16a" protocol=ttrpc version=3
May 16 16:45:51.754262 systemd[1]: Started cri-containerd-76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651.scope - libcontainer container 76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651.
May 16 16:45:51.799095 containerd[1561]: time="2025-05-16T16:45:51.799030646Z" level=info msg="StartContainer for \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" returns successfully"
May 16 16:45:51.875487 containerd[1561]: time="2025-05-16T16:45:51.875292848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" id:\"9223fc14aa5c85467fe90ae248332cb39b94d9e5dfa4e3bc93ac2c71e8809d61\" pid:3366 exited_at:{seconds:1747413951 nanos:874861807}"
May 16 16:45:51.951991 kubelet[2644]: I0516 16:45:51.951955 2644 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 16 16:45:51.992220 systemd[1]: Created slice kubepods-burstable-pod2ba2357c_5050_4817_b717_cac189494fc4.slice - libcontainer container kubepods-burstable-pod2ba2357c_5050_4817_b717_cac189494fc4.slice.
May 16 16:45:52.001004 systemd[1]: Created slice kubepods-burstable-pod366a46d4_7e1f_4843_880c_f786317ecab5.slice - libcontainer container kubepods-burstable-pod366a46d4_7e1f_4843_880c_f786317ecab5.slice.
May 16 16:45:52.107157 kubelet[2644]: I0516 16:45:52.107091 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ba2357c-5050-4817-b717-cac189494fc4-config-volume\") pod \"coredns-674b8bbfcf-bd2j6\" (UID: \"2ba2357c-5050-4817-b717-cac189494fc4\") " pod="kube-system/coredns-674b8bbfcf-bd2j6"
May 16 16:45:52.107157 kubelet[2644]: I0516 16:45:52.107148 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk8pr\" (UniqueName: \"kubernetes.io/projected/366a46d4-7e1f-4843-880c-f786317ecab5-kube-api-access-gk8pr\") pod \"coredns-674b8bbfcf-x42r6\" (UID: \"366a46d4-7e1f-4843-880c-f786317ecab5\") " pod="kube-system/coredns-674b8bbfcf-x42r6"
May 16 16:45:52.107301 kubelet[2644]: I0516 16:45:52.107171 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxqkn\" (UniqueName: \"kubernetes.io/projected/2ba2357c-5050-4817-b717-cac189494fc4-kube-api-access-xxqkn\") pod \"coredns-674b8bbfcf-bd2j6\" (UID: \"2ba2357c-5050-4817-b717-cac189494fc4\") " pod="kube-system/coredns-674b8bbfcf-bd2j6"
May 16 16:45:52.107301 kubelet[2644]: I0516 16:45:52.107195 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/366a46d4-7e1f-4843-880c-f786317ecab5-config-volume\") pod \"coredns-674b8bbfcf-x42r6\" (UID: \"366a46d4-7e1f-4843-880c-f786317ecab5\") " pod="kube-system/coredns-674b8bbfcf-x42r6"
May 16 16:45:52.297306 kubelet[2644]: E0516 16:45:52.297153 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:52.297891 containerd[1561]: time="2025-05-16T16:45:52.297757564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bd2j6,Uid:2ba2357c-5050-4817-b717-cac189494fc4,Namespace:kube-system,Attempt:0,}"
May 16 16:45:52.304071 kubelet[2644]: E0516 16:45:52.304018 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:52.304814 containerd[1561]: time="2025-05-16T16:45:52.304761962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x42r6,Uid:366a46d4-7e1f-4843-880c-f786317ecab5,Namespace:kube-system,Attempt:0,}"
May 16 16:45:52.711867 kubelet[2644]: E0516 16:45:52.711835 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:52.726062 kubelet[2644]: I0516 16:45:52.725132 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7dbrn" podStartSLOduration=11.210772304 podStartE2EDuration="27.725118368s" podCreationTimestamp="2025-05-16 16:45:25 +0000 UTC" firstStartedPulling="2025-05-16 16:45:28.78695367 +0000 UTC m=+9.788989236" lastFinishedPulling="2025-05-16 16:45:45.301299734 +0000 UTC m=+26.303335300" observedRunningTime="2025-05-16 16:45:52.72469486 +0000 UTC m=+33.726730426" watchObservedRunningTime="2025-05-16 16:45:52.725118368 +0000 UTC m=+33.727153934"
May 16 16:45:53.999353 systemd-networkd[1451]: cilium_host: Link UP
May 16 16:45:53.999882 systemd-networkd[1451]: cilium_net: Link UP
May 16 16:45:54.000623 systemd-networkd[1451]: cilium_net: Gained carrier
May 16 16:45:54.001029 systemd-networkd[1451]: cilium_host: Gained carrier
May 16 16:45:54.107532 systemd-networkd[1451]: cilium_vxlan: Link UP
May 16 16:45:54.107541 systemd-networkd[1451]: cilium_vxlan: Gained carrier
May 16 16:45:54.209217 systemd-networkd[1451]: cilium_host: Gained IPv6LL
May 16 16:45:54.287621 kubelet[2644]: E0516 16:45:54.287437 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:54.333086 kernel: NET: Registered PF_ALG protocol family
May 16 16:45:54.688938 systemd[1]: Started sshd@8-10.0.0.104:22-10.0.0.1:45790.service - OpenSSH per-connection server daemon (10.0.0.1:45790).
May 16 16:45:54.766067 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 45790 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:45:54.767729 sshd-session[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:45:54.772726 systemd-logind[1534]: New session 9 of user core.
May 16 16:45:54.779178 systemd[1]: Started session-9.scope - Session 9 of User core.
May 16 16:45:54.893394 sshd[3701]: Connection closed by 10.0.0.1 port 45790
May 16 16:45:54.893693 sshd-session[3677]: pam_unix(sshd:session): session closed for user core
May 16 16:45:54.898133 systemd[1]: sshd@8-10.0.0.104:22-10.0.0.1:45790.service: Deactivated successfully.
May 16 16:45:54.900241 systemd[1]: session-9.scope: Deactivated successfully.
May 16 16:45:54.901251 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit.
May 16 16:45:54.902910 systemd-logind[1534]: Removed session 9.
May 16 16:45:54.983967 systemd-networkd[1451]: lxc_health: Link UP
May 16 16:45:54.986343 systemd-networkd[1451]: lxc_health: Gained carrier
May 16 16:45:55.025202 systemd-networkd[1451]: cilium_net: Gained IPv6LL
May 16 16:45:55.335088 kernel: eth0: renamed from tmpc77d4
May 16 16:45:55.334673 systemd-networkd[1451]: lxc4b78a6790342: Link UP
May 16 16:45:55.336281 systemd-networkd[1451]: lxc4b78a6790342: Gained carrier
May 16 16:45:55.346128 systemd-networkd[1451]: lxc088a33512c21: Link UP
May 16 16:45:55.357072 kernel: eth0: renamed from tmp9588f
May 16 16:45:55.356455 systemd-networkd[1451]: lxc088a33512c21: Gained carrier
May 16 16:45:55.665261 systemd-networkd[1451]: cilium_vxlan: Gained IPv6LL
May 16 16:45:56.289075 kubelet[2644]: E0516 16:45:56.288693 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:56.369288 systemd-networkd[1451]: lxc4b78a6790342: Gained IPv6LL
May 16 16:45:56.716787 kubelet[2644]: E0516 16:45:56.716746 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:56.946230 systemd-networkd[1451]: lxc_health: Gained IPv6LL
May 16 16:45:57.073337 systemd-networkd[1451]: lxc088a33512c21: Gained IPv6LL
May 16 16:45:57.718412 kubelet[2644]: E0516 16:45:57.718357 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:58.739491 containerd[1561]: time="2025-05-16T16:45:58.739435692Z" level=info msg="connecting to shim c77d4f568c6799dc3e7717a3f796a5ad6509246e9afb0b35c2538d65fbcb5648" address="unix:///run/containerd/s/6a1cabcfcbd83b12efd6783322c0dc40a2e43ea86f2cf6b22416ff2f485656d7" namespace=k8s.io protocol=ttrpc version=3
May 16 16:45:58.764367 containerd[1561]: time="2025-05-16T16:45:58.764298738Z" level=info msg="connecting to shim 9588f38b9044e541cc152e0f8f2117cdba6016b160248eb7bd1ae590e2337c25" address="unix:///run/containerd/s/5a295a07a0396ad5bbaa2990a16e60d27f305e831f7125ddc2171ba0bb38fbe8" namespace=k8s.io protocol=ttrpc version=3
May 16 16:45:58.766287 systemd[1]: Started cri-containerd-c77d4f568c6799dc3e7717a3f796a5ad6509246e9afb0b35c2538d65fbcb5648.scope - libcontainer container c77d4f568c6799dc3e7717a3f796a5ad6509246e9afb0b35c2538d65fbcb5648.
May 16 16:45:58.783859 systemd-resolved[1421]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 16:45:58.793201 systemd[1]: Started cri-containerd-9588f38b9044e541cc152e0f8f2117cdba6016b160248eb7bd1ae590e2337c25.scope - libcontainer container 9588f38b9044e541cc152e0f8f2117cdba6016b160248eb7bd1ae590e2337c25.
May 16 16:45:58.807647 systemd-resolved[1421]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 16:45:58.819534 containerd[1561]: time="2025-05-16T16:45:58.819487963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bd2j6,Uid:2ba2357c-5050-4817-b717-cac189494fc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c77d4f568c6799dc3e7717a3f796a5ad6509246e9afb0b35c2538d65fbcb5648\""
May 16 16:45:58.820294 kubelet[2644]: E0516 16:45:58.820267 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:58.826103 containerd[1561]: time="2025-05-16T16:45:58.826065402Z" level=info msg="CreateContainer within sandbox \"c77d4f568c6799dc3e7717a3f796a5ad6509246e9afb0b35c2538d65fbcb5648\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 16:45:58.841934 containerd[1561]: time="2025-05-16T16:45:58.841363932Z" level=info msg="Container 0a74acc516b94fbc3a31030a45e83b75848f20323a4a2690a93cc796872de4c9: CDI devices from CRI Config.CDIDevices: []"
May 16 16:45:58.847898 containerd[1561]: time="2025-05-16T16:45:58.847833939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x42r6,Uid:366a46d4-7e1f-4843-880c-f786317ecab5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9588f38b9044e541cc152e0f8f2117cdba6016b160248eb7bd1ae590e2337c25\""
May 16 16:45:58.848556 kubelet[2644]: E0516 16:45:58.848532 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:58.853351 containerd[1561]: time="2025-05-16T16:45:58.853308493Z" level=info msg="CreateContainer within sandbox \"c77d4f568c6799dc3e7717a3f796a5ad6509246e9afb0b35c2538d65fbcb5648\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a74acc516b94fbc3a31030a45e83b75848f20323a4a2690a93cc796872de4c9\""
May 16 16:45:58.853491 containerd[1561]: time="2025-05-16T16:45:58.853391590Z" level=info msg="CreateContainer within sandbox \"9588f38b9044e541cc152e0f8f2117cdba6016b160248eb7bd1ae590e2337c25\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 16:45:58.854081 containerd[1561]: time="2025-05-16T16:45:58.854031453Z" level=info msg="StartContainer for \"0a74acc516b94fbc3a31030a45e83b75848f20323a4a2690a93cc796872de4c9\""
May 16 16:45:58.855216 containerd[1561]: time="2025-05-16T16:45:58.855155727Z" level=info msg="connecting to shim 0a74acc516b94fbc3a31030a45e83b75848f20323a4a2690a93cc796872de4c9" address="unix:///run/containerd/s/6a1cabcfcbd83b12efd6783322c0dc40a2e43ea86f2cf6b22416ff2f485656d7" protocol=ttrpc version=3
May 16 16:45:58.863812 containerd[1561]: time="2025-05-16T16:45:58.863754488Z" level=info msg="Container 6b03eade77f3c7816b5eb16668cda2d292266427203393a063e2882824ea6932: CDI devices from CRI Config.CDIDevices: []"
May 16 16:45:58.870667 containerd[1561]: time="2025-05-16T16:45:58.870620299Z" level=info msg="CreateContainer within sandbox \"9588f38b9044e541cc152e0f8f2117cdba6016b160248eb7bd1ae590e2337c25\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b03eade77f3c7816b5eb16668cda2d292266427203393a063e2882824ea6932\""
May 16 16:45:58.871629 containerd[1561]: time="2025-05-16T16:45:58.871532404Z" level=info msg="StartContainer for \"6b03eade77f3c7816b5eb16668cda2d292266427203393a063e2882824ea6932\""
May 16 16:45:58.872472 containerd[1561]: time="2025-05-16T16:45:58.872445310Z" level=info msg="connecting to shim 6b03eade77f3c7816b5eb16668cda2d292266427203393a063e2882824ea6932" address="unix:///run/containerd/s/5a295a07a0396ad5bbaa2990a16e60d27f305e831f7125ddc2171ba0bb38fbe8" protocol=ttrpc version=3
May 16 16:45:58.875228 systemd[1]: Started cri-containerd-0a74acc516b94fbc3a31030a45e83b75848f20323a4a2690a93cc796872de4c9.scope - libcontainer container 0a74acc516b94fbc3a31030a45e83b75848f20323a4a2690a93cc796872de4c9.
May 16 16:45:58.896242 systemd[1]: Started cri-containerd-6b03eade77f3c7816b5eb16668cda2d292266427203393a063e2882824ea6932.scope - libcontainer container 6b03eade77f3c7816b5eb16668cda2d292266427203393a063e2882824ea6932.
May 16 16:45:58.909976 containerd[1561]: time="2025-05-16T16:45:58.909824183Z" level=info msg="StartContainer for \"0a74acc516b94fbc3a31030a45e83b75848f20323a4a2690a93cc796872de4c9\" returns successfully"
May 16 16:45:58.931410 containerd[1561]: time="2025-05-16T16:45:58.931365401Z" level=info msg="StartContainer for \"6b03eade77f3c7816b5eb16668cda2d292266427203393a063e2882824ea6932\" returns successfully"
May 16 16:45:59.723572 kubelet[2644]: E0516 16:45:59.723282 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:59.725799 kubelet[2644]: E0516 16:45:59.725583 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:45:59.731310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2772853180.mount: Deactivated successfully.
May 16 16:45:59.910038 systemd[1]: Started sshd@9-10.0.0.104:22-10.0.0.1:45800.service - OpenSSH per-connection server daemon (10.0.0.1:45800).
May 16 16:45:59.940353 kubelet[2644]: I0516 16:45:59.940015 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bd2j6" podStartSLOduration=34.939998648 podStartE2EDuration="34.939998648s" podCreationTimestamp="2025-05-16 16:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:45:59.939681571 +0000 UTC m=+40.941717157" watchObservedRunningTime="2025-05-16 16:45:59.939998648 +0000 UTC m=+40.942034214"
May 16 16:45:59.940353 kubelet[2644]: I0516 16:45:59.940224 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x42r6" podStartSLOduration=34.940220976 podStartE2EDuration="34.940220976s" podCreationTimestamp="2025-05-16 16:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:45:59.816696441 +0000 UTC m=+40.818732007" watchObservedRunningTime="2025-05-16 16:45:59.940220976 +0000 UTC m=+40.942256542"
May 16 16:45:59.977323 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 45800 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:45:59.979076 sshd-session[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:45:59.983499 systemd-logind[1534]: New session 10 of user core.
May 16 16:45:59.994179 systemd[1]: Started session-10.scope - Session 10 of User core.
May 16 16:46:00.114604 sshd[4035]: Connection closed by 10.0.0.1 port 45800
May 16 16:46:00.114925 sshd-session[4033]: pam_unix(sshd:session): session closed for user core
May 16 16:46:00.119000 systemd[1]: sshd@9-10.0.0.104:22-10.0.0.1:45800.service: Deactivated successfully.
May 16 16:46:00.122189 systemd[1]: session-10.scope: Deactivated successfully.
May 16 16:46:00.123183 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit.
May 16 16:46:00.124528 systemd-logind[1534]: Removed session 10.
May 16 16:46:00.727201 kubelet[2644]: E0516 16:46:00.727165 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:00.727432 kubelet[2644]: E0516 16:46:00.727401 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:01.729596 kubelet[2644]: E0516 16:46:01.729531 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:01.730015 kubelet[2644]: E0516 16:46:01.729631 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:05.130991 systemd[1]: Started sshd@10-10.0.0.104:22-10.0.0.1:35836.service - OpenSSH per-connection server daemon (10.0.0.1:35836).
May 16 16:46:05.168682 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 35836 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:46:05.170088 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:46:05.174552 systemd-logind[1534]: New session 11 of user core.
May 16 16:46:05.190200 systemd[1]: Started session-11.scope - Session 11 of User core.
May 16 16:46:05.307282 sshd[4057]: Connection closed by 10.0.0.1 port 35836
May 16 16:46:05.307578 sshd-session[4055]: pam_unix(sshd:session): session closed for user core
May 16 16:46:05.311811 systemd[1]: sshd@10-10.0.0.104:22-10.0.0.1:35836.service: Deactivated successfully.
May 16 16:46:05.313774 systemd[1]: session-11.scope: Deactivated successfully.
May 16 16:46:05.314721 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. May 16 16:46:05.316197 systemd-logind[1534]: Removed session 11. May 16 16:46:10.321930 systemd[1]: Started sshd@11-10.0.0.104:22-10.0.0.1:35838.service - OpenSSH per-connection server daemon (10.0.0.1:35838). May 16 16:46:10.423228 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 35838 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:10.425220 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:10.430075 systemd-logind[1534]: New session 12 of user core. May 16 16:46:10.443255 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 16:46:10.564763 sshd[4073]: Connection closed by 10.0.0.1 port 35838 May 16 16:46:10.565184 sshd-session[4071]: pam_unix(sshd:session): session closed for user core May 16 16:46:10.578155 systemd[1]: sshd@11-10.0.0.104:22-10.0.0.1:35838.service: Deactivated successfully. May 16 16:46:10.580344 systemd[1]: session-12.scope: Deactivated successfully. May 16 16:46:10.581106 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit. May 16 16:46:10.584606 systemd[1]: Started sshd@12-10.0.0.104:22-10.0.0.1:35854.service - OpenSSH per-connection server daemon (10.0.0.1:35854). May 16 16:46:10.586170 systemd-logind[1534]: Removed session 12. May 16 16:46:10.632966 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 35854 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:10.634460 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:10.639574 systemd-logind[1534]: New session 13 of user core. May 16 16:46:10.654327 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 16 16:46:10.831835 sshd[4090]: Connection closed by 10.0.0.1 port 35854 May 16 16:46:10.833668 sshd-session[4088]: pam_unix(sshd:session): session closed for user core May 16 16:46:10.842998 systemd[1]: sshd@12-10.0.0.104:22-10.0.0.1:35854.service: Deactivated successfully. May 16 16:46:10.845168 systemd[1]: session-13.scope: Deactivated successfully. May 16 16:46:10.846826 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit. May 16 16:46:10.851537 systemd[1]: Started sshd@13-10.0.0.104:22-10.0.0.1:35864.service - OpenSSH per-connection server daemon (10.0.0.1:35864). May 16 16:46:10.852586 systemd-logind[1534]: Removed session 13. May 16 16:46:10.905620 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 35864 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:10.907192 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:10.912259 systemd-logind[1534]: New session 14 of user core. May 16 16:46:10.924181 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 16:46:11.050146 sshd[4103]: Connection closed by 10.0.0.1 port 35864 May 16 16:46:11.050447 sshd-session[4101]: pam_unix(sshd:session): session closed for user core May 16 16:46:11.055008 systemd[1]: sshd@13-10.0.0.104:22-10.0.0.1:35864.service: Deactivated successfully. May 16 16:46:11.057250 systemd[1]: session-14.scope: Deactivated successfully. May 16 16:46:11.058090 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit. May 16 16:46:11.059339 systemd-logind[1534]: Removed session 14. May 16 16:46:16.062677 systemd[1]: Started sshd@14-10.0.0.104:22-10.0.0.1:56654.service - OpenSSH per-connection server daemon (10.0.0.1:56654). 
May 16 16:46:16.112034 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 56654 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:16.113538 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:16.117979 systemd-logind[1534]: New session 15 of user core. May 16 16:46:16.128184 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 16:46:16.236493 sshd[4118]: Connection closed by 10.0.0.1 port 56654 May 16 16:46:16.236793 sshd-session[4116]: pam_unix(sshd:session): session closed for user core May 16 16:46:16.241494 systemd[1]: sshd@14-10.0.0.104:22-10.0.0.1:56654.service: Deactivated successfully. May 16 16:46:16.243545 systemd[1]: session-15.scope: Deactivated successfully. May 16 16:46:16.244335 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit. May 16 16:46:16.245461 systemd-logind[1534]: Removed session 15. May 16 16:46:21.254919 systemd[1]: Started sshd@15-10.0.0.104:22-10.0.0.1:56670.service - OpenSSH per-connection server daemon (10.0.0.1:56670). May 16 16:46:21.313690 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 56670 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:21.315398 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:21.320073 systemd-logind[1534]: New session 16 of user core. May 16 16:46:21.332345 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 16:46:21.453153 sshd[4136]: Connection closed by 10.0.0.1 port 56670 May 16 16:46:21.453719 sshd-session[4134]: pam_unix(sshd:session): session closed for user core May 16 16:46:21.471112 systemd[1]: sshd@15-10.0.0.104:22-10.0.0.1:56670.service: Deactivated successfully. May 16 16:46:21.473188 systemd[1]: session-16.scope: Deactivated successfully. May 16 16:46:21.474088 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit. 
May 16 16:46:21.477038 systemd[1]: Started sshd@16-10.0.0.104:22-10.0.0.1:56684.service - OpenSSH per-connection server daemon (10.0.0.1:56684). May 16 16:46:21.477977 systemd-logind[1534]: Removed session 16. May 16 16:46:21.539163 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 56684 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:21.541030 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:21.545914 systemd-logind[1534]: New session 17 of user core. May 16 16:46:21.559310 systemd[1]: Started session-17.scope - Session 17 of User core. May 16 16:46:21.745182 sshd[4152]: Connection closed by 10.0.0.1 port 56684 May 16 16:46:21.745624 sshd-session[4150]: pam_unix(sshd:session): session closed for user core May 16 16:46:21.755950 systemd[1]: sshd@16-10.0.0.104:22-10.0.0.1:56684.service: Deactivated successfully. May 16 16:46:21.757815 systemd[1]: session-17.scope: Deactivated successfully. May 16 16:46:21.758640 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit. May 16 16:46:21.761587 systemd[1]: Started sshd@17-10.0.0.104:22-10.0.0.1:56694.service - OpenSSH per-connection server daemon (10.0.0.1:56694). May 16 16:46:21.762797 systemd-logind[1534]: Removed session 17. May 16 16:46:21.814916 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 56694 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:21.816579 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:21.821181 systemd-logind[1534]: New session 18 of user core. May 16 16:46:21.828164 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 16 16:46:22.659395 sshd[4166]: Connection closed by 10.0.0.1 port 56694 May 16 16:46:22.659846 sshd-session[4164]: pam_unix(sshd:session): session closed for user core May 16 16:46:22.674424 systemd[1]: sshd@17-10.0.0.104:22-10.0.0.1:56694.service: Deactivated successfully. May 16 16:46:22.677653 systemd[1]: session-18.scope: Deactivated successfully. May 16 16:46:22.679310 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit. May 16 16:46:22.681888 systemd-logind[1534]: Removed session 18. May 16 16:46:22.683462 systemd[1]: Started sshd@18-10.0.0.104:22-10.0.0.1:56700.service - OpenSSH per-connection server daemon (10.0.0.1:56700). May 16 16:46:22.734663 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 56700 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:22.736444 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:22.741134 systemd-logind[1534]: New session 19 of user core. May 16 16:46:22.755261 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 16:46:22.982040 sshd[4188]: Connection closed by 10.0.0.1 port 56700 May 16 16:46:22.982646 sshd-session[4186]: pam_unix(sshd:session): session closed for user core May 16 16:46:22.992732 systemd[1]: sshd@18-10.0.0.104:22-10.0.0.1:56700.service: Deactivated successfully. May 16 16:46:22.995297 systemd[1]: session-19.scope: Deactivated successfully. May 16 16:46:22.996335 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit. May 16 16:46:23.000890 systemd[1]: Started sshd@19-10.0.0.104:22-10.0.0.1:56714.service - OpenSSH per-connection server daemon (10.0.0.1:56714). May 16 16:46:23.001692 systemd-logind[1534]: Removed session 19. 
May 16 16:46:23.054550 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 56714 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:23.056372 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:23.061437 systemd-logind[1534]: New session 20 of user core. May 16 16:46:23.073315 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 16:46:23.181244 sshd[4202]: Connection closed by 10.0.0.1 port 56714 May 16 16:46:23.181612 sshd-session[4200]: pam_unix(sshd:session): session closed for user core May 16 16:46:23.186603 systemd[1]: sshd@19-10.0.0.104:22-10.0.0.1:56714.service: Deactivated successfully. May 16 16:46:23.189220 systemd[1]: session-20.scope: Deactivated successfully. May 16 16:46:23.190031 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit. May 16 16:46:23.191504 systemd-logind[1534]: Removed session 20. May 16 16:46:28.204304 systemd[1]: Started sshd@20-10.0.0.104:22-10.0.0.1:55834.service - OpenSSH per-connection server daemon (10.0.0.1:55834). May 16 16:46:28.264820 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 55834 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:28.266681 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:28.271523 systemd-logind[1534]: New session 21 of user core. May 16 16:46:28.281279 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 16:46:28.395766 sshd[4219]: Connection closed by 10.0.0.1 port 55834 May 16 16:46:28.396157 sshd-session[4217]: pam_unix(sshd:session): session closed for user core May 16 16:46:28.401057 systemd[1]: sshd@20-10.0.0.104:22-10.0.0.1:55834.service: Deactivated successfully. May 16 16:46:28.403532 systemd[1]: session-21.scope: Deactivated successfully. May 16 16:46:28.404515 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit. 
May 16 16:46:28.406008 systemd-logind[1534]: Removed session 21. May 16 16:46:33.108325 kubelet[2644]: E0516 16:46:33.108279 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:46:33.407977 systemd[1]: Started sshd@21-10.0.0.104:22-10.0.0.1:55842.service - OpenSSH per-connection server daemon (10.0.0.1:55842). May 16 16:46:33.452595 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 55842 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:33.453948 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:33.457930 systemd-logind[1534]: New session 22 of user core. May 16 16:46:33.468154 systemd[1]: Started session-22.scope - Session 22 of User core. May 16 16:46:33.582079 sshd[4237]: Connection closed by 10.0.0.1 port 55842 May 16 16:46:33.580118 sshd-session[4235]: pam_unix(sshd:session): session closed for user core May 16 16:46:33.586308 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit. May 16 16:46:33.587649 systemd[1]: sshd@21-10.0.0.104:22-10.0.0.1:55842.service: Deactivated successfully. May 16 16:46:33.594116 systemd[1]: session-22.scope: Deactivated successfully. May 16 16:46:33.597817 systemd-logind[1534]: Removed session 22. May 16 16:46:37.099952 kubelet[2644]: E0516 16:46:37.099627 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:46:38.592493 systemd[1]: Started sshd@22-10.0.0.104:22-10.0.0.1:36648.service - OpenSSH per-connection server daemon (10.0.0.1:36648). 
May 16 16:46:38.645459 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 36648 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:38.647008 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:38.651556 systemd-logind[1534]: New session 23 of user core. May 16 16:46:38.661238 systemd[1]: Started session-23.scope - Session 23 of User core. May 16 16:46:38.769770 sshd[4253]: Connection closed by 10.0.0.1 port 36648 May 16 16:46:38.770176 sshd-session[4251]: pam_unix(sshd:session): session closed for user core May 16 16:46:38.786008 systemd[1]: sshd@22-10.0.0.104:22-10.0.0.1:36648.service: Deactivated successfully. May 16 16:46:38.788201 systemd[1]: session-23.scope: Deactivated successfully. May 16 16:46:38.789077 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit. May 16 16:46:38.792515 systemd[1]: Started sshd@23-10.0.0.104:22-10.0.0.1:36662.service - OpenSSH per-connection server daemon (10.0.0.1:36662). May 16 16:46:38.793337 systemd-logind[1534]: Removed session 23. May 16 16:46:38.853367 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 36662 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:38.855011 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:38.859997 systemd-logind[1534]: New session 24 of user core. May 16 16:46:38.871288 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 16 16:46:39.102070 kubelet[2644]: E0516 16:46:39.101253 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:46:40.197980 containerd[1561]: time="2025-05-16T16:46:40.197867161Z" level=info msg="StopContainer for \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\" with timeout 30 (s)" May 16 16:46:40.198776 containerd[1561]: time="2025-05-16T16:46:40.198742140Z" level=info msg="Stop container \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\" with signal terminated" May 16 16:46:40.210636 systemd[1]: cri-containerd-478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400.scope: Deactivated successfully. May 16 16:46:40.211783 containerd[1561]: time="2025-05-16T16:46:40.211726150Z" level=info msg="received exit event container_id:\"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\" id:\"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\" pid:3064 exited_at:{seconds:1747414000 nanos:211407151}" May 16 16:46:40.211914 containerd[1561]: time="2025-05-16T16:46:40.211776937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\" id:\"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\" pid:3064 exited_at:{seconds:1747414000 nanos:211407151}" May 16 16:46:40.234770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400-rootfs.mount: Deactivated successfully. 
May 16 16:46:40.235608 containerd[1561]: time="2025-05-16T16:46:40.235557680Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 16:46:40.237139 containerd[1561]: time="2025-05-16T16:46:40.237028457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" id:\"0baad715f24139404ad61b141f233c1bd4e9c6153457fea5366967ffd1491c6f\" pid:4297 exited_at:{seconds:1747414000 nanos:236767860}" May 16 16:46:40.239204 containerd[1561]: time="2025-05-16T16:46:40.239169093Z" level=info msg="StopContainer for \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" with timeout 2 (s)" May 16 16:46:40.239439 containerd[1561]: time="2025-05-16T16:46:40.239418709Z" level=info msg="Stop container \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" with signal terminated" May 16 16:46:40.247111 systemd-networkd[1451]: lxc_health: Link DOWN May 16 16:46:40.247485 containerd[1561]: time="2025-05-16T16:46:40.247129485Z" level=info msg="StopContainer for \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\" returns successfully" May 16 16:46:40.247121 systemd-networkd[1451]: lxc_health: Lost carrier May 16 16:46:40.248036 containerd[1561]: time="2025-05-16T16:46:40.247688241Z" level=info msg="StopPodSandbox for \"c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5\"" May 16 16:46:40.248036 containerd[1561]: time="2025-05-16T16:46:40.247753916Z" level=info msg="Container to stop \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:46:40.259910 systemd[1]: cri-containerd-c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5.scope: 
Deactivated successfully. May 16 16:46:40.261495 containerd[1561]: time="2025-05-16T16:46:40.261462309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5\" id:\"c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5\" pid:2806 exit_status:137 exited_at:{seconds:1747414000 nanos:261181383}" May 16 16:46:40.264420 systemd[1]: cri-containerd-76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651.scope: Deactivated successfully. May 16 16:46:40.264740 systemd[1]: cri-containerd-76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651.scope: Consumed 6.594s CPU time, 126.1M memory peak, 152K read from disk, 13.3M written to disk. May 16 16:46:40.265756 containerd[1561]: time="2025-05-16T16:46:40.265717369Z" level=info msg="received exit event container_id:\"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" id:\"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" pid:3334 exited_at:{seconds:1747414000 nanos:265448565}" May 16 16:46:40.286616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651-rootfs.mount: Deactivated successfully. May 16 16:46:40.292685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5-rootfs.mount: Deactivated successfully. 
May 16 16:46:40.302013 containerd[1561]: time="2025-05-16T16:46:40.301977239Z" level=info msg="shim disconnected" id=c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5 namespace=k8s.io May 16 16:46:40.302013 containerd[1561]: time="2025-05-16T16:46:40.302005232Z" level=warning msg="cleaning up after shim disconnected" id=c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5 namespace=k8s.io May 16 16:46:40.307934 containerd[1561]: time="2025-05-16T16:46:40.302012836Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 16:46:40.308002 containerd[1561]: time="2025-05-16T16:46:40.302126433Z" level=info msg="StopContainer for \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" returns successfully" May 16 16:46:40.308522 containerd[1561]: time="2025-05-16T16:46:40.308492543Z" level=info msg="StopPodSandbox for \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\"" May 16 16:46:40.308579 containerd[1561]: time="2025-05-16T16:46:40.308564400Z" level=info msg="Container to stop \"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:46:40.308579 containerd[1561]: time="2025-05-16T16:46:40.308576173Z" level=info msg="Container to stop \"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:46:40.308626 containerd[1561]: time="2025-05-16T16:46:40.308584849Z" level=info msg="Container to stop \"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:46:40.308626 containerd[1561]: time="2025-05-16T16:46:40.308594016Z" level=info msg="Container to stop \"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:46:40.308626 
containerd[1561]: time="2025-05-16T16:46:40.308602333Z" level=info msg="Container to stop \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:46:40.314929 systemd[1]: cri-containerd-93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366.scope: Deactivated successfully. May 16 16:46:40.329290 containerd[1561]: time="2025-05-16T16:46:40.329236071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" id:\"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" pid:3334 exited_at:{seconds:1747414000 nanos:265448565}" May 16 16:46:40.329290 containerd[1561]: time="2025-05-16T16:46:40.329287118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" id:\"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" pid:2858 exit_status:137 exited_at:{seconds:1747414000 nanos:316601298}" May 16 16:46:40.332628 containerd[1561]: time="2025-05-16T16:46:40.332585132Z" level=info msg="received exit event sandbox_id:\"c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5\" exit_status:137 exited_at:{seconds:1747414000 nanos:261181383}" May 16 16:46:40.333857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5-shm.mount: Deactivated successfully. 
May 16 16:46:40.339724 containerd[1561]: time="2025-05-16T16:46:40.339688178Z" level=info msg="TearDown network for sandbox \"c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5\" successfully" May 16 16:46:40.339724 containerd[1561]: time="2025-05-16T16:46:40.339722293Z" level=info msg="StopPodSandbox for \"c3b0d8e18c5ace477d5d2b8d5719563596c9e6caa558e619bfefe847314fa3a5\" returns successfully" May 16 16:46:40.339828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366-rootfs.mount: Deactivated successfully. May 16 16:46:40.434769 containerd[1561]: time="2025-05-16T16:46:40.434717968Z" level=info msg="received exit event sandbox_id:\"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" exit_status:137 exited_at:{seconds:1747414000 nanos:316601298}" May 16 16:46:40.435093 containerd[1561]: time="2025-05-16T16:46:40.434997812Z" level=info msg="shim disconnected" id=93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366 namespace=k8s.io May 16 16:46:40.435093 containerd[1561]: time="2025-05-16T16:46:40.435022599Z" level=warning msg="cleaning up after shim disconnected" id=93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366 namespace=k8s.io May 16 16:46:40.435093 containerd[1561]: time="2025-05-16T16:46:40.435026747Z" level=info msg="TearDown network for sandbox \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" successfully" May 16 16:46:40.435172 containerd[1561]: time="2025-05-16T16:46:40.435095688Z" level=info msg="StopPodSandbox for \"93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366\" returns successfully" May 16 16:46:40.435308 containerd[1561]: time="2025-05-16T16:46:40.435030294Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 16:46:40.494725 kubelet[2644]: I0516 16:46:40.493849 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/301231b7-54b3-4138-801b-5ba28862e91f-cilium-config-path\") pod \"301231b7-54b3-4138-801b-5ba28862e91f\" (UID: \"301231b7-54b3-4138-801b-5ba28862e91f\") " May 16 16:46:40.494725 kubelet[2644]: I0516 16:46:40.493923 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2vj5\" (UniqueName: \"kubernetes.io/projected/301231b7-54b3-4138-801b-5ba28862e91f-kube-api-access-p2vj5\") pod \"301231b7-54b3-4138-801b-5ba28862e91f\" (UID: \"301231b7-54b3-4138-801b-5ba28862e91f\") " May 16 16:46:40.497418 kubelet[2644]: I0516 16:46:40.497373 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/301231b7-54b3-4138-801b-5ba28862e91f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "301231b7-54b3-4138-801b-5ba28862e91f" (UID: "301231b7-54b3-4138-801b-5ba28862e91f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 16:46:40.497643 kubelet[2644]: I0516 16:46:40.497601 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301231b7-54b3-4138-801b-5ba28862e91f-kube-api-access-p2vj5" (OuterVolumeSpecName: "kube-api-access-p2vj5") pod "301231b7-54b3-4138-801b-5ba28862e91f" (UID: "301231b7-54b3-4138-801b-5ba28862e91f"). InnerVolumeSpecName "kube-api-access-p2vj5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 16:46:40.594385 kubelet[2644]: I0516 16:46:40.594318 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-host-proc-sys-net\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.594385 kubelet[2644]: I0516 16:46:40.594371 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-lib-modules\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.594385 kubelet[2644]: I0516 16:46:40.594397 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99e30828-ab61-432c-b51c-aa75e8dccc1d-clustermesh-secrets\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.594609 kubelet[2644]: I0516 16:46:40.594475 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:46:40.594609 kubelet[2644]: I0516 16:46:40.594478 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:46:40.594663 kubelet[2644]: I0516 16:46:40.594626 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-hostproc" (OuterVolumeSpecName: "hostproc") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:46:40.594859 kubelet[2644]: I0516 16:46:40.594417 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-hostproc\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.594909 kubelet[2644]: I0516 16:46:40.594869 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99e30828-ab61-432c-b51c-aa75e8dccc1d-cilium-config-path\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.594909 kubelet[2644]: I0516 16:46:40.594886 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-cilium-cgroup\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.594909 kubelet[2644]: I0516 16:46:40.594904 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmqmx\" (UniqueName: \"kubernetes.io/projected/99e30828-ab61-432c-b51c-aa75e8dccc1d-kube-api-access-gmqmx\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.594988 kubelet[2644]: I0516 16:46:40.594920 2644 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99e30828-ab61-432c-b51c-aa75e8dccc1d-hubble-tls\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.594988 kubelet[2644]: I0516 16:46:40.594933 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-host-proc-sys-kernel\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.594988 kubelet[2644]: I0516 16:46:40.594949 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-etc-cni-netd\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.594988 kubelet[2644]: I0516 16:46:40.594963 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-cni-path\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.594988 kubelet[2644]: I0516 16:46:40.594976 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-bpf-maps\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.594988 kubelet[2644]: I0516 16:46:40.594991 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-xtables-lock\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 
16:46:40.595153 kubelet[2644]: I0516 16:46:40.595003 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-cilium-run\") pod \"99e30828-ab61-432c-b51c-aa75e8dccc1d\" (UID: \"99e30828-ab61-432c-b51c-aa75e8dccc1d\") " May 16 16:46:40.595153 kubelet[2644]: I0516 16:46:40.595033 2644 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.595153 kubelet[2644]: I0516 16:46:40.595062 2644 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.595153 kubelet[2644]: I0516 16:46:40.595072 2644 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/301231b7-54b3-4138-801b-5ba28862e91f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.595153 kubelet[2644]: I0516 16:46:40.595079 2644 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.595153 kubelet[2644]: I0516 16:46:40.595087 2644 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p2vj5\" (UniqueName: \"kubernetes.io/projected/301231b7-54b3-4138-801b-5ba28862e91f-kube-api-access-p2vj5\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.595153 kubelet[2644]: I0516 16:46:40.595109 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: 
"99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:46:40.595327 kubelet[2644]: I0516 16:46:40.595135 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:46:40.595800 kubelet[2644]: I0516 16:46:40.595386 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:46:40.597935 kubelet[2644]: I0516 16:46:40.597784 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e30828-ab61-432c-b51c-aa75e8dccc1d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 16:46:40.597935 kubelet[2644]: I0516 16:46:40.597832 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:46:40.597935 kubelet[2644]: I0516 16:46:40.597849 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-cni-path" (OuterVolumeSpecName: "cni-path") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:46:40.597935 kubelet[2644]: I0516 16:46:40.597864 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:46:40.597935 kubelet[2644]: I0516 16:46:40.597878 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 16:46:40.598272 kubelet[2644]: I0516 16:46:40.598243 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99e30828-ab61-432c-b51c-aa75e8dccc1d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 16:46:40.598334 kubelet[2644]: I0516 16:46:40.598279 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99e30828-ab61-432c-b51c-aa75e8dccc1d-kube-api-access-gmqmx" (OuterVolumeSpecName: "kube-api-access-gmqmx") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "kube-api-access-gmqmx". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 16:46:40.598334 kubelet[2644]: I0516 16:46:40.598320 2644 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99e30828-ab61-432c-b51c-aa75e8dccc1d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "99e30828-ab61-432c-b51c-aa75e8dccc1d" (UID: "99e30828-ab61-432c-b51c-aa75e8dccc1d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 16:46:40.695685 kubelet[2644]: I0516 16:46:40.695595 2644 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99e30828-ab61-432c-b51c-aa75e8dccc1d-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.695685 kubelet[2644]: I0516 16:46:40.695648 2644 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.695685 kubelet[2644]: I0516 16:46:40.695663 2644 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.695685 kubelet[2644]: I0516 16:46:40.695675 2644 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-cni-path\") on node \"localhost\" 
DevicePath \"\"" May 16 16:46:40.695685 kubelet[2644]: I0516 16:46:40.695687 2644 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.695685 kubelet[2644]: I0516 16:46:40.695697 2644 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.695685 kubelet[2644]: I0516 16:46:40.695707 2644 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.695685 kubelet[2644]: I0516 16:46:40.695718 2644 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99e30828-ab61-432c-b51c-aa75e8dccc1d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.696011 kubelet[2644]: I0516 16:46:40.695729 2644 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99e30828-ab61-432c-b51c-aa75e8dccc1d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.696011 kubelet[2644]: I0516 16:46:40.695739 2644 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99e30828-ab61-432c-b51c-aa75e8dccc1d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.696011 kubelet[2644]: I0516 16:46:40.695751 2644 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gmqmx\" (UniqueName: \"kubernetes.io/projected/99e30828-ab61-432c-b51c-aa75e8dccc1d-kube-api-access-gmqmx\") on node \"localhost\" DevicePath \"\"" May 16 16:46:40.807610 kubelet[2644]: I0516 16:46:40.806119 2644 scope.go:117] 
"RemoveContainer" containerID="76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651" May 16 16:46:40.809642 containerd[1561]: time="2025-05-16T16:46:40.809581204Z" level=info msg="RemoveContainer for \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\"" May 16 16:46:40.813954 systemd[1]: Removed slice kubepods-burstable-pod99e30828_ab61_432c_b51c_aa75e8dccc1d.slice - libcontainer container kubepods-burstable-pod99e30828_ab61_432c_b51c_aa75e8dccc1d.slice. May 16 16:46:40.814073 systemd[1]: kubepods-burstable-pod99e30828_ab61_432c_b51c_aa75e8dccc1d.slice: Consumed 6.700s CPU time, 126.4M memory peak, 160K read from disk, 16.6M written to disk. May 16 16:46:40.815717 systemd[1]: Removed slice kubepods-besteffort-pod301231b7_54b3_4138_801b_5ba28862e91f.slice - libcontainer container kubepods-besteffort-pod301231b7_54b3_4138_801b_5ba28862e91f.slice. May 16 16:46:40.818699 containerd[1561]: time="2025-05-16T16:46:40.818647567Z" level=info msg="RemoveContainer for \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" returns successfully" May 16 16:46:40.818962 kubelet[2644]: I0516 16:46:40.818935 2644 scope.go:117] "RemoveContainer" containerID="bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890" May 16 16:46:40.822196 containerd[1561]: time="2025-05-16T16:46:40.821536069Z" level=info msg="RemoveContainer for \"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\"" May 16 16:46:40.827881 containerd[1561]: time="2025-05-16T16:46:40.827828788Z" level=info msg="RemoveContainer for \"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\" returns successfully" May 16 16:46:40.828161 kubelet[2644]: I0516 16:46:40.828123 2644 scope.go:117] "RemoveContainer" containerID="96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71" May 16 16:46:40.830991 containerd[1561]: time="2025-05-16T16:46:40.830892835Z" level=info msg="RemoveContainer for 
\"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\"" May 16 16:46:40.836837 containerd[1561]: time="2025-05-16T16:46:40.836785853Z" level=info msg="RemoveContainer for \"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\" returns successfully" May 16 16:46:40.837126 kubelet[2644]: I0516 16:46:40.837068 2644 scope.go:117] "RemoveContainer" containerID="d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b" May 16 16:46:40.841842 containerd[1561]: time="2025-05-16T16:46:40.841796735Z" level=info msg="RemoveContainer for \"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\"" May 16 16:46:40.846141 containerd[1561]: time="2025-05-16T16:46:40.846100028Z" level=info msg="RemoveContainer for \"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\" returns successfully" May 16 16:46:40.846346 kubelet[2644]: I0516 16:46:40.846316 2644 scope.go:117] "RemoveContainer" containerID="1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51" May 16 16:46:40.847967 containerd[1561]: time="2025-05-16T16:46:40.847928477Z" level=info msg="RemoveContainer for \"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\"" May 16 16:46:40.851535 containerd[1561]: time="2025-05-16T16:46:40.851502989Z" level=info msg="RemoveContainer for \"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\" returns successfully" May 16 16:46:40.851682 kubelet[2644]: I0516 16:46:40.851655 2644 scope.go:117] "RemoveContainer" containerID="76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651" May 16 16:46:40.851850 containerd[1561]: time="2025-05-16T16:46:40.851803953Z" level=error msg="ContainerStatus for \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\": not found" May 16 16:46:40.852017 kubelet[2644]: E0516 
16:46:40.851984 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\": not found" containerID="76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651" May 16 16:46:40.852085 kubelet[2644]: I0516 16:46:40.852025 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651"} err="failed to get container status \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\": rpc error: code = NotFound desc = an error occurred when try to find container \"76b954cc740dd6adee55f395502559f294c1cbef6e236b551b2dfe8bc1d66651\": not found" May 16 16:46:40.852085 kubelet[2644]: I0516 16:46:40.852083 2644 scope.go:117] "RemoveContainer" containerID="bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890" May 16 16:46:40.852300 containerd[1561]: time="2025-05-16T16:46:40.852275193Z" level=error msg="ContainerStatus for \"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\": not found" May 16 16:46:40.852405 kubelet[2644]: E0516 16:46:40.852381 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\": not found" containerID="bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890" May 16 16:46:40.852449 kubelet[2644]: I0516 16:46:40.852412 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890"} err="failed to get container status 
\"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\": rpc error: code = NotFound desc = an error occurred when try to find container \"bfb9703b3ac0508cc24c09c9c63f102f0626749907a9467f48f2742afe0de890\": not found" May 16 16:46:40.852449 kubelet[2644]: I0516 16:46:40.852430 2644 scope.go:117] "RemoveContainer" containerID="96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71" May 16 16:46:40.852631 containerd[1561]: time="2025-05-16T16:46:40.852579514Z" level=error msg="ContainerStatus for \"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\": not found" May 16 16:46:40.852688 kubelet[2644]: E0516 16:46:40.852657 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\": not found" containerID="96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71" May 16 16:46:40.852721 kubelet[2644]: I0516 16:46:40.852688 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71"} err="failed to get container status \"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\": rpc error: code = NotFound desc = an error occurred when try to find container \"96313a7d9dac18eadb0de63a49a48fd17616598d68cf454ad6e32ada03db0f71\": not found" May 16 16:46:40.852721 kubelet[2644]: I0516 16:46:40.852701 2644 scope.go:117] "RemoveContainer" containerID="d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b" May 16 16:46:40.852871 containerd[1561]: time="2025-05-16T16:46:40.852833589Z" level=error msg="ContainerStatus for \"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\": not found" May 16 16:46:40.852964 kubelet[2644]: E0516 16:46:40.852942 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\": not found" containerID="d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b" May 16 16:46:40.852990 kubelet[2644]: I0516 16:46:40.852966 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b"} err="failed to get container status \"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d985a16d82da41489989b662b879b1051a10416e00b622b74e62274907e5263b\": not found" May 16 16:46:40.852990 kubelet[2644]: I0516 16:46:40.852981 2644 scope.go:117] "RemoveContainer" containerID="1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51" May 16 16:46:40.853227 containerd[1561]: time="2025-05-16T16:46:40.853178978Z" level=error msg="ContainerStatus for \"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\": not found" May 16 16:46:40.853355 kubelet[2644]: E0516 16:46:40.853334 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\": not found" containerID="1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51" May 16 16:46:40.853390 kubelet[2644]: I0516 16:46:40.853355 2644 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51"} err="failed to get container status \"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fec7510b556c557901c47cf80cdc5d9f7ea11f0230f3b93ed00ef3012079d51\": not found" May 16 16:46:40.853390 kubelet[2644]: I0516 16:46:40.853369 2644 scope.go:117] "RemoveContainer" containerID="478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400" May 16 16:46:40.854580 containerd[1561]: time="2025-05-16T16:46:40.854554262Z" level=info msg="RemoveContainer for \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\"" May 16 16:46:40.857760 containerd[1561]: time="2025-05-16T16:46:40.857721547Z" level=info msg="RemoveContainer for \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\" returns successfully" May 16 16:46:40.857911 kubelet[2644]: I0516 16:46:40.857880 2644 scope.go:117] "RemoveContainer" containerID="478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400" May 16 16:46:40.858121 containerd[1561]: time="2025-05-16T16:46:40.858071215Z" level=error msg="ContainerStatus for \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\": not found" May 16 16:46:40.858225 kubelet[2644]: E0516 16:46:40.858205 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\": not found" containerID="478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400" May 16 16:46:40.858265 kubelet[2644]: I0516 16:46:40.858226 2644 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400"} err="failed to get container status \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\": rpc error: code = NotFound desc = an error occurred when try to find container \"478e11407d4b21aacb3a13382bb73a5683bfc0263c97c948064de04cefde1400\": not found" May 16 16:46:41.101546 kubelet[2644]: I0516 16:46:41.101393 2644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301231b7-54b3-4138-801b-5ba28862e91f" path="/var/lib/kubelet/pods/301231b7-54b3-4138-801b-5ba28862e91f/volumes" May 16 16:46:41.102038 kubelet[2644]: I0516 16:46:41.102009 2644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99e30828-ab61-432c-b51c-aa75e8dccc1d" path="/var/lib/kubelet/pods/99e30828-ab61-432c-b51c-aa75e8dccc1d/volumes" May 16 16:46:41.234122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93b48cb5180bd7a2293cb72bae64f1d8fe31b51cea3f647b86668c6496b77366-shm.mount: Deactivated successfully. May 16 16:46:41.234266 systemd[1]: var-lib-kubelet-pods-99e30828\x2dab61\x2d432c\x2db51c\x2daa75e8dccc1d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgmqmx.mount: Deactivated successfully. May 16 16:46:41.234354 systemd[1]: var-lib-kubelet-pods-301231b7\x2d54b3\x2d4138\x2d801b\x2d5ba28862e91f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp2vj5.mount: Deactivated successfully. May 16 16:46:41.234436 systemd[1]: var-lib-kubelet-pods-99e30828\x2dab61\x2d432c\x2db51c\x2daa75e8dccc1d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 16:46:41.234516 systemd[1]: var-lib-kubelet-pods-99e30828\x2dab61\x2d432c\x2db51c\x2daa75e8dccc1d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 16 16:46:42.164721 sshd[4269]: Connection closed by 10.0.0.1 port 36662 May 16 16:46:42.165304 sshd-session[4267]: pam_unix(sshd:session): session closed for user core May 16 16:46:42.178628 systemd[1]: sshd@23-10.0.0.104:22-10.0.0.1:36662.service: Deactivated successfully. May 16 16:46:42.180804 systemd[1]: session-24.scope: Deactivated successfully. May 16 16:46:42.181820 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit. May 16 16:46:42.185812 systemd[1]: Started sshd@24-10.0.0.104:22-10.0.0.1:36672.service - OpenSSH per-connection server daemon (10.0.0.1:36672). May 16 16:46:42.186385 systemd-logind[1534]: Removed session 24. May 16 16:46:42.231281 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 36672 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:42.232841 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:42.237299 systemd-logind[1534]: New session 25 of user core. May 16 16:46:42.249150 systemd[1]: Started session-25.scope - Session 25 of User core. May 16 16:46:42.662187 sshd[4425]: Connection closed by 10.0.0.1 port 36672 May 16 16:46:42.662672 sshd-session[4423]: pam_unix(sshd:session): session closed for user core May 16 16:46:42.675150 systemd[1]: sshd@24-10.0.0.104:22-10.0.0.1:36672.service: Deactivated successfully. May 16 16:46:42.679392 systemd[1]: session-25.scope: Deactivated successfully. May 16 16:46:42.681594 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit. May 16 16:46:42.685350 systemd[1]: Started sshd@25-10.0.0.104:22-10.0.0.1:36686.service - OpenSSH per-connection server daemon (10.0.0.1:36686). May 16 16:46:42.691110 systemd-logind[1534]: Removed session 25. May 16 16:46:42.702867 systemd[1]: Created slice kubepods-burstable-pod7d2315f8_0c0e_4023_9e80_52273dddd7b5.slice - libcontainer container kubepods-burstable-pod7d2315f8_0c0e_4023_9e80_52273dddd7b5.slice. 
May 16 16:46:42.732309 sshd[4438]: Accepted publickey for core from 10.0.0.1 port 36686 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:46:42.734023 sshd-session[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:46:42.739008 systemd-logind[1534]: New session 26 of user core. May 16 16:46:42.754284 systemd[1]: Started session-26.scope - Session 26 of User core. May 16 16:46:42.806671 sshd[4440]: Connection closed by 10.0.0.1 port 36686 May 16 16:46:42.807089 sshd-session[4438]: pam_unix(sshd:session): session closed for user core May 16 16:46:42.807581 kubelet[2644]: I0516 16:46:42.807499 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7d2315f8-0c0e-4023-9e80-52273dddd7b5-hostproc\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf" May 16 16:46:42.807581 kubelet[2644]: I0516 16:46:42.807563 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7d2315f8-0c0e-4023-9e80-52273dddd7b5-cilium-ipsec-secrets\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf" May 16 16:46:42.808035 kubelet[2644]: I0516 16:46:42.807872 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d2315f8-0c0e-4023-9e80-52273dddd7b5-lib-modules\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf" May 16 16:46:42.808035 kubelet[2644]: I0516 16:46:42.807902 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/7d2315f8-0c0e-4023-9e80-52273dddd7b5-host-proc-sys-kernel\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf" May 16 16:46:42.808035 kubelet[2644]: I0516 16:46:42.807918 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7d2315f8-0c0e-4023-9e80-52273dddd7b5-cni-path\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf" May 16 16:46:42.808672 kubelet[2644]: I0516 16:46:42.808637 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d2315f8-0c0e-4023-9e80-52273dddd7b5-etc-cni-netd\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf" May 16 16:46:42.808672 kubelet[2644]: I0516 16:46:42.808664 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7d2315f8-0c0e-4023-9e80-52273dddd7b5-bpf-maps\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf" May 16 16:46:42.808761 kubelet[2644]: I0516 16:46:42.808678 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz2p6\" (UniqueName: \"kubernetes.io/projected/7d2315f8-0c0e-4023-9e80-52273dddd7b5-kube-api-access-bz2p6\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf" May 16 16:46:42.808761 kubelet[2644]: I0516 16:46:42.808695 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d2315f8-0c0e-4023-9e80-52273dddd7b5-cilium-config-path\") pod \"cilium-skdzf\" (UID: 
\"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf"
May 16 16:46:42.808761 kubelet[2644]: I0516 16:46:42.808710 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7d2315f8-0c0e-4023-9e80-52273dddd7b5-hubble-tls\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf"
May 16 16:46:42.808761 kubelet[2644]: I0516 16:46:42.808730 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7d2315f8-0c0e-4023-9e80-52273dddd7b5-cilium-run\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf"
May 16 16:46:42.808761 kubelet[2644]: I0516 16:46:42.808745 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d2315f8-0c0e-4023-9e80-52273dddd7b5-xtables-lock\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf"
May 16 16:46:42.808888 kubelet[2644]: I0516 16:46:42.808764 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7d2315f8-0c0e-4023-9e80-52273dddd7b5-host-proc-sys-net\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf"
May 16 16:46:42.808888 kubelet[2644]: I0516 16:46:42.808781 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7d2315f8-0c0e-4023-9e80-52273dddd7b5-cilium-cgroup\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf"
May 16 16:46:42.808888 kubelet[2644]: I0516 16:46:42.808807 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7d2315f8-0c0e-4023-9e80-52273dddd7b5-clustermesh-secrets\") pod \"cilium-skdzf\" (UID: \"7d2315f8-0c0e-4023-9e80-52273dddd7b5\") " pod="kube-system/cilium-skdzf"
May 16 16:46:42.818230 systemd[1]: sshd@25-10.0.0.104:22-10.0.0.1:36686.service: Deactivated successfully.
May 16 16:46:42.820322 systemd[1]: session-26.scope: Deactivated successfully.
May 16 16:46:42.821272 systemd-logind[1534]: Session 26 logged out. Waiting for processes to exit.
May 16 16:46:42.824858 systemd[1]: Started sshd@26-10.0.0.104:22-10.0.0.1:36702.service - OpenSSH per-connection server daemon (10.0.0.1:36702).
May 16 16:46:42.825698 systemd-logind[1534]: Removed session 26.
May 16 16:46:42.877830 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 36702 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:46:42.879536 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:46:42.885089 systemd-logind[1534]: New session 27 of user core.
May 16 16:46:42.894201 systemd[1]: Started session-27.scope - Session 27 of User core.
May 16 16:46:43.014320 kubelet[2644]: E0516 16:46:43.014184 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:43.018214 containerd[1561]: time="2025-05-16T16:46:43.018162838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-skdzf,Uid:7d2315f8-0c0e-4023-9e80-52273dddd7b5,Namespace:kube-system,Attempt:0,}"
May 16 16:46:43.035536 containerd[1561]: time="2025-05-16T16:46:43.035471388Z" level=info msg="connecting to shim 4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c" address="unix:///run/containerd/s/57ebeef7328aa147c48cd45876eca3e6a997bf23f5e2d609336ff2759dc084c1" namespace=k8s.io protocol=ttrpc version=3
May 16 16:46:43.071395 systemd[1]: Started cri-containerd-4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c.scope - libcontainer container 4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c.
May 16 16:46:43.101565 containerd[1561]: time="2025-05-16T16:46:43.101509484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-skdzf,Uid:7d2315f8-0c0e-4023-9e80-52273dddd7b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c\""
May 16 16:46:43.102182 kubelet[2644]: E0516 16:46:43.102158 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:43.108949 containerd[1561]: time="2025-05-16T16:46:43.108879891Z" level=info msg="CreateContainer within sandbox \"4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 16:46:43.116175 containerd[1561]: time="2025-05-16T16:46:43.116110442Z" level=info msg="Container 61ec59bc6d1316944438e77ab4403ac701d2c153cf77ced571ba346588b8ceb7: CDI devices from CRI Config.CDIDevices: []"
May 16 16:46:43.122560 containerd[1561]: time="2025-05-16T16:46:43.122507285Z" level=info msg="CreateContainer within sandbox \"4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"61ec59bc6d1316944438e77ab4403ac701d2c153cf77ced571ba346588b8ceb7\""
May 16 16:46:43.123033 containerd[1561]: time="2025-05-16T16:46:43.123010253Z" level=info msg="StartContainer for \"61ec59bc6d1316944438e77ab4403ac701d2c153cf77ced571ba346588b8ceb7\""
May 16 16:46:43.124698 containerd[1561]: time="2025-05-16T16:46:43.124662742Z" level=info msg="connecting to shim 61ec59bc6d1316944438e77ab4403ac701d2c153cf77ced571ba346588b8ceb7" address="unix:///run/containerd/s/57ebeef7328aa147c48cd45876eca3e6a997bf23f5e2d609336ff2759dc084c1" protocol=ttrpc version=3
May 16 16:46:43.153405 systemd[1]: Started cri-containerd-61ec59bc6d1316944438e77ab4403ac701d2c153cf77ced571ba346588b8ceb7.scope - libcontainer container 61ec59bc6d1316944438e77ab4403ac701d2c153cf77ced571ba346588b8ceb7.
May 16 16:46:43.189880 containerd[1561]: time="2025-05-16T16:46:43.189819309Z" level=info msg="StartContainer for \"61ec59bc6d1316944438e77ab4403ac701d2c153cf77ced571ba346588b8ceb7\" returns successfully"
May 16 16:46:43.199270 systemd[1]: cri-containerd-61ec59bc6d1316944438e77ab4403ac701d2c153cf77ced571ba346588b8ceb7.scope: Deactivated successfully.
May 16 16:46:43.200862 containerd[1561]: time="2025-05-16T16:46:43.200655652Z" level=info msg="received exit event container_id:\"61ec59bc6d1316944438e77ab4403ac701d2c153cf77ced571ba346588b8ceb7\" id:\"61ec59bc6d1316944438e77ab4403ac701d2c153cf77ced571ba346588b8ceb7\" pid:4519 exited_at:{seconds:1747414003 nanos:200328418}"
May 16 16:46:43.200862 containerd[1561]: time="2025-05-16T16:46:43.200749772Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61ec59bc6d1316944438e77ab4403ac701d2c153cf77ced571ba346588b8ceb7\" id:\"61ec59bc6d1316944438e77ab4403ac701d2c153cf77ced571ba346588b8ceb7\" pid:4519 exited_at:{seconds:1747414003 nanos:200328418}"
May 16 16:46:43.818316 kubelet[2644]: E0516 16:46:43.818281 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:43.823853 containerd[1561]: time="2025-05-16T16:46:43.823806821Z" level=info msg="CreateContainer within sandbox \"4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 16:46:43.831652 containerd[1561]: time="2025-05-16T16:46:43.831489785Z" level=info msg="Container 93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265: CDI devices from CRI Config.CDIDevices: []"
May 16 16:46:43.838735 containerd[1561]: time="2025-05-16T16:46:43.838695368Z" level=info msg="CreateContainer within sandbox \"4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265\""
May 16 16:46:43.839463 containerd[1561]: time="2025-05-16T16:46:43.839249114Z" level=info msg="StartContainer for \"93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265\""
May 16 16:46:43.842802 containerd[1561]: time="2025-05-16T16:46:43.842732774Z" level=info msg="connecting to shim 93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265" address="unix:///run/containerd/s/57ebeef7328aa147c48cd45876eca3e6a997bf23f5e2d609336ff2759dc084c1" protocol=ttrpc version=3
May 16 16:46:43.869332 systemd[1]: Started cri-containerd-93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265.scope - libcontainer container 93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265.
May 16 16:46:43.902747 containerd[1561]: time="2025-05-16T16:46:43.902691883Z" level=info msg="StartContainer for \"93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265\" returns successfully"
May 16 16:46:43.910230 systemd[1]: cri-containerd-93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265.scope: Deactivated successfully.
May 16 16:46:43.910759 containerd[1561]: time="2025-05-16T16:46:43.910517248Z" level=info msg="received exit event container_id:\"93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265\" id:\"93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265\" pid:4564 exited_at:{seconds:1747414003 nanos:910287791}"
May 16 16:46:43.910759 containerd[1561]: time="2025-05-16T16:46:43.910612028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265\" id:\"93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265\" pid:4564 exited_at:{seconds:1747414003 nanos:910287791}"
May 16 16:46:43.934021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93f447cebd73fdc72f5c1233011942c89b8ce4329878eaba32e246664f0b1265-rootfs.mount: Deactivated successfully.
May 16 16:46:44.690529 kubelet[2644]: E0516 16:46:44.690457 2644 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 16:46:44.822622 kubelet[2644]: E0516 16:46:44.822582 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:44.829820 containerd[1561]: time="2025-05-16T16:46:44.829768617Z" level=info msg="CreateContainer within sandbox \"4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 16:46:44.849486 containerd[1561]: time="2025-05-16T16:46:44.849438601Z" level=info msg="Container 70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca: CDI devices from CRI Config.CDIDevices: []"
May 16 16:46:44.849841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731322913.mount: Deactivated successfully.
May 16 16:46:44.862775 containerd[1561]: time="2025-05-16T16:46:44.862712090Z" level=info msg="CreateContainer within sandbox \"4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca\""
May 16 16:46:44.863365 containerd[1561]: time="2025-05-16T16:46:44.863336239Z" level=info msg="StartContainer for \"70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca\""
May 16 16:46:44.864691 containerd[1561]: time="2025-05-16T16:46:44.864664650Z" level=info msg="connecting to shim 70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca" address="unix:///run/containerd/s/57ebeef7328aa147c48cd45876eca3e6a997bf23f5e2d609336ff2759dc084c1" protocol=ttrpc version=3
May 16 16:46:44.886200 systemd[1]: Started cri-containerd-70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca.scope - libcontainer container 70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca.
May 16 16:46:44.930137 systemd[1]: cri-containerd-70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca.scope: Deactivated successfully.
May 16 16:46:44.930707 containerd[1561]: time="2025-05-16T16:46:44.930675632Z" level=info msg="StartContainer for \"70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca\" returns successfully"
May 16 16:46:44.931899 containerd[1561]: time="2025-05-16T16:46:44.931863957Z" level=info msg="received exit event container_id:\"70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca\" id:\"70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca\" pid:4610 exited_at:{seconds:1747414004 nanos:930747740}"
May 16 16:46:44.933887 containerd[1561]: time="2025-05-16T16:46:44.933845231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca\" id:\"70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca\" pid:4610 exited_at:{seconds:1747414004 nanos:930747740}"
May 16 16:46:44.957208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70d896c451d4c2e89ce55395ebb911a8c50ad2f3bd6a231d6fdcfed4e2a755ca-rootfs.mount: Deactivated successfully.
May 16 16:46:45.827140 kubelet[2644]: E0516 16:46:45.827086 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:45.832511 containerd[1561]: time="2025-05-16T16:46:45.832457662Z" level=info msg="CreateContainer within sandbox \"4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 16:46:45.841989 containerd[1561]: time="2025-05-16T16:46:45.841926385Z" level=info msg="Container e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b: CDI devices from CRI Config.CDIDevices: []"
May 16 16:46:45.851819 containerd[1561]: time="2025-05-16T16:46:45.851762737Z" level=info msg="CreateContainer within sandbox \"4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b\""
May 16 16:46:45.852395 containerd[1561]: time="2025-05-16T16:46:45.852368400Z" level=info msg="StartContainer for \"e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b\""
May 16 16:46:45.853421 containerd[1561]: time="2025-05-16T16:46:45.853372753Z" level=info msg="connecting to shim e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b" address="unix:///run/containerd/s/57ebeef7328aa147c48cd45876eca3e6a997bf23f5e2d609336ff2759dc084c1" protocol=ttrpc version=3
May 16 16:46:45.876333 systemd[1]: Started cri-containerd-e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b.scope - libcontainer container e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b.
May 16 16:46:45.910741 systemd[1]: cri-containerd-e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b.scope: Deactivated successfully.
May 16 16:46:45.912202 containerd[1561]: time="2025-05-16T16:46:45.912115264Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b\" id:\"e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b\" pid:4648 exited_at:{seconds:1747414005 nanos:911476537}"
May 16 16:46:45.912508 containerd[1561]: time="2025-05-16T16:46:45.912468707Z" level=info msg="received exit event container_id:\"e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b\" id:\"e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b\" pid:4648 exited_at:{seconds:1747414005 nanos:911476537}"
May 16 16:46:45.920678 containerd[1561]: time="2025-05-16T16:46:45.920633957Z" level=info msg="StartContainer for \"e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b\" returns successfully"
May 16 16:46:45.936505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0bb533612fe6f974ca09a225004b19f235248b3b4a6bb36806dc08df0bfad2b-rootfs.mount: Deactivated successfully.
May 16 16:46:46.099549 kubelet[2644]: E0516 16:46:46.099387 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:46.835469 kubelet[2644]: E0516 16:46:46.835426 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:46.920823 containerd[1561]: time="2025-05-16T16:46:46.920753087Z" level=info msg="CreateContainer within sandbox \"4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 16:46:46.932923 containerd[1561]: time="2025-05-16T16:46:46.932854145Z" level=info msg="Container 61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691: CDI devices from CRI Config.CDIDevices: []"
May 16 16:46:46.936877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199309482.mount: Deactivated successfully.
May 16 16:46:46.941374 containerd[1561]: time="2025-05-16T16:46:46.941324379Z" level=info msg="CreateContainer within sandbox \"4a0fcc057294f7ed25d3c42fa28efe79aabb774fc70662d2c4cc2da536479b2c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691\""
May 16 16:46:46.941988 containerd[1561]: time="2025-05-16T16:46:46.941950131Z" level=info msg="StartContainer for \"61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691\""
May 16 16:46:46.943274 containerd[1561]: time="2025-05-16T16:46:46.943246999Z" level=info msg="connecting to shim 61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691" address="unix:///run/containerd/s/57ebeef7328aa147c48cd45876eca3e6a997bf23f5e2d609336ff2759dc084c1" protocol=ttrpc version=3
May 16 16:46:46.975359 systemd[1]: Started cri-containerd-61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691.scope - libcontainer container 61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691.
May 16 16:46:47.017135 containerd[1561]: time="2025-05-16T16:46:47.017091556Z" level=info msg="StartContainer for \"61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691\" returns successfully"
May 16 16:46:47.097943 containerd[1561]: time="2025-05-16T16:46:47.097750268Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691\" id:\"e8538d41eb172a05c79901aff91278d43f3fa952206c51414ddaf9ac16e3eb89\" pid:4716 exited_at:{seconds:1747414007 nanos:96526148}"
May 16 16:46:47.455094 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 16 16:46:47.842438 kubelet[2644]: E0516 16:46:47.842274 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:47.857970 kubelet[2644]: I0516 16:46:47.857894 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-skdzf" podStartSLOduration=5.8578771 podStartE2EDuration="5.8578771s" podCreationTimestamp="2025-05-16 16:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:46:47.85698356 +0000 UTC m=+88.859019146" watchObservedRunningTime="2025-05-16 16:46:47.8578771 +0000 UTC m=+88.859912666"
May 16 16:46:49.014973 kubelet[2644]: E0516 16:46:49.014935 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:49.184570 containerd[1561]: time="2025-05-16T16:46:49.184524961Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691\" id:\"005e000950d5e443421c3d2d99fe50a20e2ed939627b590437ef38cdf3096138\" pid:4869 exit_status:1 exited_at:{seconds:1747414009 nanos:183819520}"
May 16 16:46:50.617981 systemd-networkd[1451]: lxc_health: Link UP
May 16 16:46:50.633271 systemd-networkd[1451]: lxc_health: Gained carrier
May 16 16:46:51.015756 kubelet[2644]: E0516 16:46:51.015709 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:51.306701 containerd[1561]: time="2025-05-16T16:46:51.306559032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691\" id:\"ba3e337e259dc76bee7d45cb54b1d3d1f2be37449ea3002eb1856d7e4c445191\" pid:5243 exited_at:{seconds:1747414011 nanos:305734215}"
May 16 16:46:51.850307 kubelet[2644]: E0516 16:46:51.850260 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:52.561254 systemd-networkd[1451]: lxc_health: Gained IPv6LL
May 16 16:46:52.852703 kubelet[2644]: E0516 16:46:52.852181 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:53.396642 containerd[1561]: time="2025-05-16T16:46:53.396581227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691\" id:\"5f50d7dcfe31dcc0059cc17ee1e4d55c231b215a631b75b25c138c262d9486ff\" pid:5281 exited_at:{seconds:1747414013 nanos:396204592}"
May 16 16:46:55.099579 kubelet[2644]: E0516 16:46:55.099534 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:46:55.479215 containerd[1561]: time="2025-05-16T16:46:55.479152650Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691\" id:\"46862be14e2ab17fff22adf4d24fff664047e291e3cbdfcb66586393db339c47\" pid:5312 exited_at:{seconds:1747414015 nanos:478604059}"
May 16 16:46:57.591675 containerd[1561]: time="2025-05-16T16:46:57.591621109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61ebe4c88247155dadf2ce38d530460369da88db2a131333afa449fb13e8c691\" id:\"98dcbd0c1d0b17a5900ed4242da1313bae6688403b2263d3612de937eefe0f7b\" pid:5337 exited_at:{seconds:1747414017 nanos:591303957}"
May 16 16:46:57.608271 sshd[4450]: Connection closed by 10.0.0.1 port 36702
May 16 16:46:57.629761 sshd-session[4447]: pam_unix(sshd:session): session closed for user core
May 16 16:46:57.634130 systemd[1]: sshd@26-10.0.0.104:22-10.0.0.1:36702.service: Deactivated successfully.
May 16 16:46:57.636568 systemd[1]: session-27.scope: Deactivated successfully.
May 16 16:46:57.637544 systemd-logind[1534]: Session 27 logged out. Waiting for processes to exit.
May 16 16:46:57.639210 systemd-logind[1534]: Removed session 27.