Sep 9 05:32:49.830579 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 03:39:34 -00 2025
Sep 9 05:32:49.830599 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:32:49.830610 kernel: BIOS-provided physical RAM map:
Sep 9 05:32:49.830617 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 05:32:49.830623 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 05:32:49.830630 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 05:32:49.830637 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 05:32:49.830644 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 05:32:49.830650 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 9 05:32:49.830658 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 9 05:32:49.830665 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 9 05:32:49.830671 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 9 05:32:49.830677 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 9 05:32:49.830684 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 9 05:32:49.830692 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 9 05:32:49.830701 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 05:32:49.830708 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 9 05:32:49.830714 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 9 05:32:49.830721 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 9 05:32:49.830728 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 9 05:32:49.830735 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 9 05:32:49.830741 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 05:32:49.830748 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 05:32:49.830755 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 05:32:49.830761 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 9 05:32:49.830770 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 05:32:49.830777 kernel: NX (Execute Disable) protection: active
Sep 9 05:32:49.830784 kernel: APIC: Static calls initialized
Sep 9 05:32:49.830791 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 9 05:32:49.830798 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 9 05:32:49.830804 kernel: extended physical RAM map:
Sep 9 05:32:49.830811 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 05:32:49.830818 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 05:32:49.830825 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 05:32:49.830832 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 05:32:49.830839 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 05:32:49.830848 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 9 05:32:49.830854 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 9 05:32:49.830861 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 9 05:32:49.830868 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 9 05:32:49.830878 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 9 05:32:49.830885 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 9 05:32:49.830894 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 9 05:32:49.830901 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 9 05:32:49.830908 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 9 05:32:49.830915 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 9 05:32:49.830922 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 9 05:32:49.830929 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 05:32:49.830936 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 9 05:32:49.830943 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 9 05:32:49.830950 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 9 05:32:49.830957 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 9 05:32:49.830966 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 9 05:32:49.830973 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 05:32:49.830981 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 05:32:49.830988 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 05:32:49.830995 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 9 05:32:49.831001 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 05:32:49.831008 kernel: efi: EFI v2.7 by EDK II
Sep 9 05:32:49.831016 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 9 05:32:49.831023 kernel: random: crng init done
Sep 9 05:32:49.831030 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 9 05:32:49.831037 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 9 05:32:49.831046 kernel: secureboot: Secure boot disabled
Sep 9 05:32:49.831053 kernel: SMBIOS 2.8 present.
Sep 9 05:32:49.831060 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 9 05:32:49.831067 kernel: DMI: Memory slots populated: 1/1
Sep 9 05:32:49.831074 kernel: Hypervisor detected: KVM
Sep 9 05:32:49.831081 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 05:32:49.831088 kernel: kvm-clock: using sched offset of 3605225093 cycles
Sep 9 05:32:49.831095 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 05:32:49.831102 kernel: tsc: Detected 2794.750 MHz processor
Sep 9 05:32:49.831127 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 05:32:49.831134 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 05:32:49.831151 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 9 05:32:49.831158 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 9 05:32:49.831165 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 05:32:49.831172 kernel: Using GB pages for direct mapping
Sep 9 05:32:49.831180 kernel: ACPI: Early table checksum verification disabled
Sep 9 05:32:49.831187 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 9 05:32:49.831195 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 05:32:49.831202 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:32:49.831210 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:32:49.831219 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 9 05:32:49.831226 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:32:49.831233 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:32:49.831241 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:32:49.831256 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:32:49.831264 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 9 05:32:49.831271 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 9 05:32:49.831285 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 9 05:32:49.831293 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 9 05:32:49.831310 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 9 05:32:49.831317 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 9 05:32:49.831325 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 9 05:32:49.831332 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 9 05:32:49.831339 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 9 05:32:49.831346 kernel: No NUMA configuration found
Sep 9 05:32:49.831353 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 9 05:32:49.831360 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 9 05:32:49.831368 kernel: Zone ranges:
Sep 9 05:32:49.831381 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 05:32:49.831389 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 9 05:32:49.831396 kernel: Normal empty
Sep 9 05:32:49.831403 kernel: Device empty
Sep 9 05:32:49.831410 kernel: Movable zone start for each node
Sep 9 05:32:49.831417 kernel: Early memory node ranges
Sep 9 05:32:49.831424 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 9 05:32:49.831432 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 9 05:32:49.831439 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 9 05:32:49.831446 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 9 05:32:49.831455 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 9 05:32:49.831462 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 9 05:32:49.831469 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 9 05:32:49.831477 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 9 05:32:49.831484 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 9 05:32:49.831491 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 05:32:49.831498 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 9 05:32:49.831514 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 9 05:32:49.831521 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 05:32:49.831529 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 9 05:32:49.831536 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 9 05:32:49.831544 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 9 05:32:49.831553 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 9 05:32:49.831560 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 9 05:32:49.831568 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 05:32:49.831576 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 05:32:49.831583 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 05:32:49.831593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 05:32:49.831600 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 05:32:49.831608 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 05:32:49.831615 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 05:32:49.831623 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 05:32:49.831630 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 05:32:49.831637 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 05:32:49.831645 kernel: TSC deadline timer available
Sep 9 05:32:49.831652 kernel: CPU topo: Max. logical packages: 1
Sep 9 05:32:49.831662 kernel: CPU topo: Max. logical dies: 1
Sep 9 05:32:49.831669 kernel: CPU topo: Max. dies per package: 1
Sep 9 05:32:49.831676 kernel: CPU topo: Max. threads per core: 1
Sep 9 05:32:49.831684 kernel: CPU topo: Num. cores per package: 4
Sep 9 05:32:49.831691 kernel: CPU topo: Num. threads per package: 4
Sep 9 05:32:49.831698 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 9 05:32:49.831706 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 05:32:49.831713 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 05:32:49.831721 kernel: kvm-guest: setup PV sched yield
Sep 9 05:32:49.831730 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 9 05:32:49.831738 kernel: Booting paravirtualized kernel on KVM
Sep 9 05:32:49.831745 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 05:32:49.831753 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 05:32:49.831761 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 9 05:32:49.831768 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 9 05:32:49.831776 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 05:32:49.831783 kernel: kvm-guest: PV spinlocks enabled
Sep 9 05:32:49.831790 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 05:32:49.831801 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:32:49.831809 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 05:32:49.831817 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 05:32:49.831824 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 05:32:49.831832 kernel: Fallback order for Node 0: 0
Sep 9 05:32:49.831839 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 9 05:32:49.831846 kernel: Policy zone: DMA32
Sep 9 05:32:49.831854 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 05:32:49.831864 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 05:32:49.831871 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 9 05:32:49.831879 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 05:32:49.831886 kernel: Dynamic Preempt: voluntary
Sep 9 05:32:49.831893 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 05:32:49.831901 kernel: rcu: RCU event tracing is enabled.
Sep 9 05:32:49.831909 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 05:32:49.831917 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 05:32:49.831924 kernel: Rude variant of Tasks RCU enabled.
Sep 9 05:32:49.831932 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 05:32:49.831941 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 05:32:49.831949 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 05:32:49.831957 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:32:49.831964 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:32:49.831972 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:32:49.831979 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 05:32:49.831987 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 05:32:49.831994 kernel: Console: colour dummy device 80x25
Sep 9 05:32:49.832002 kernel: printk: legacy console [ttyS0] enabled
Sep 9 05:32:49.832011 kernel: ACPI: Core revision 20240827
Sep 9 05:32:49.832019 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 05:32:49.832026 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 05:32:49.832034 kernel: x2apic enabled
Sep 9 05:32:49.832041 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 05:32:49.832049 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 05:32:49.832056 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 05:32:49.832064 kernel: kvm-guest: setup PV IPIs
Sep 9 05:32:49.832071 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 05:32:49.832081 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 9 05:32:49.832088 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 9 05:32:49.832096 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 05:32:49.832103 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 05:32:49.832130 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 05:32:49.832144 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 05:32:49.832151 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 05:32:49.832159 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 05:32:49.832169 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 05:32:49.832176 kernel: active return thunk: retbleed_return_thunk
Sep 9 05:32:49.832184 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 05:32:49.832191 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 05:32:49.832207 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 05:32:49.832222 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 05:32:49.832231 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 05:32:49.832239 kernel: active return thunk: srso_return_thunk
Sep 9 05:32:49.832246 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 05:32:49.832256 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 05:32:49.832264 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 05:32:49.832271 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 05:32:49.832279 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 05:32:49.832286 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 05:32:49.832298 kernel: Freeing SMP alternatives memory: 32K
Sep 9 05:32:49.832305 kernel: pid_max: default: 32768 minimum: 301
Sep 9 05:32:49.832313 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 05:32:49.832320 kernel: landlock: Up and running.
Sep 9 05:32:49.832329 kernel: SELinux: Initializing.
Sep 9 05:32:49.832337 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:32:49.832345 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:32:49.832352 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 05:32:49.832360 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 05:32:49.832367 kernel: ... version: 0
Sep 9 05:32:49.832374 kernel: ... bit width: 48
Sep 9 05:32:49.832382 kernel: ... generic registers: 6
Sep 9 05:32:49.832389 kernel: ... value mask: 0000ffffffffffff
Sep 9 05:32:49.832398 kernel: ... max period: 00007fffffffffff
Sep 9 05:32:49.832405 kernel: ... fixed-purpose events: 0
Sep 9 05:32:49.832413 kernel: ... event mask: 000000000000003f
Sep 9 05:32:49.832420 kernel: signal: max sigframe size: 1776
Sep 9 05:32:49.832428 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 05:32:49.832435 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 05:32:49.832443 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 05:32:49.832450 kernel: smp: Bringing up secondary CPUs ...
Sep 9 05:32:49.832458 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 05:32:49.832467 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 05:32:49.832474 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 05:32:49.832482 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 9 05:32:49.832489 kernel: Memory: 2422672K/2565800K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54076K init, 2892K bss, 137196K reserved, 0K cma-reserved)
Sep 9 05:32:49.832497 kernel: devtmpfs: initialized
Sep 9 05:32:49.832504 kernel: x86/mm: Memory block size: 128MB
Sep 9 05:32:49.832512 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 9 05:32:49.832519 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 9 05:32:49.832527 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 9 05:32:49.832536 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 9 05:32:49.832544 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 9 05:32:49.832551 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 9 05:32:49.832559 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 05:32:49.832566 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 05:32:49.832574 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 05:32:49.832581 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 05:32:49.832589 kernel: audit: initializing netlink subsys (disabled)
Sep 9 05:32:49.832596 kernel: audit: type=2000 audit(1757395968.140:1): state=initialized audit_enabled=0 res=1
Sep 9 05:32:49.832605 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 05:32:49.832613 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 05:32:49.832620 kernel: cpuidle: using governor menu
Sep 9 05:32:49.832628 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 05:32:49.832635 kernel: dca service started, version 1.12.1
Sep 9 05:32:49.832642 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 9 05:32:49.832650 kernel: PCI: Using configuration type 1 for base access
Sep 9 05:32:49.832658 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 05:32:49.832672 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 05:32:49.832679 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 05:32:49.832687 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 05:32:49.832694 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 05:32:49.832702 kernel: ACPI: Added _OSI(Module Device)
Sep 9 05:32:49.832709 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 05:32:49.832717 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 05:32:49.832724 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 05:32:49.832731 kernel: ACPI: Interpreter enabled
Sep 9 05:32:49.832740 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 05:32:49.832748 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 05:32:49.832755 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 05:32:49.832763 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 05:32:49.832770 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 05:32:49.832777 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 05:32:49.832958 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 05:32:49.833090 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 05:32:49.833240 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 05:32:49.833251 kernel: PCI host bridge to bus 0000:00
Sep 9 05:32:49.833384 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 05:32:49.833499 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 05:32:49.833604 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 05:32:49.833706 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 9 05:32:49.833809 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 9 05:32:49.833916 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 9 05:32:49.834020 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 05:32:49.834192 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 05:32:49.834326 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 9 05:32:49.834442 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 9 05:32:49.834555 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 9 05:32:49.834672 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 9 05:32:49.834785 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 05:32:49.834910 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 05:32:49.835026 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 9 05:32:49.835182 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 9 05:32:49.835300 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 9 05:32:49.835425 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 05:32:49.835545 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 9 05:32:49.835659 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 9 05:32:49.835772 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 9 05:32:49.835895 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 05:32:49.836009 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 9 05:32:49.836153 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 9 05:32:49.836272 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 9 05:32:49.836390 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 9 05:32:49.836516 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 05:32:49.836630 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 05:32:49.836753 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 05:32:49.836868 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 9 05:32:49.836982 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 9 05:32:49.837127 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 05:32:49.837273 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 9 05:32:49.837283 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 05:32:49.837291 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 05:32:49.837299 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 05:32:49.837306 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 05:32:49.837314 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 05:32:49.837322 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 05:32:49.837329 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 05:32:49.837340 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 05:32:49.837347 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 05:32:49.837355 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 05:32:49.837363 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 05:32:49.837370 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 05:32:49.837378 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 05:32:49.837385 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 05:32:49.837393 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 05:32:49.837400 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 05:32:49.837409 kernel: iommu: Default domain type: Translated
Sep 9 05:32:49.837417 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 05:32:49.837424 kernel: efivars: Registered efivars operations
Sep 9 05:32:49.837432 kernel: PCI: Using ACPI for IRQ routing
Sep 9 05:32:49.837440 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 05:32:49.837448 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 9 05:32:49.837455 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 9 05:32:49.837463 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 9 05:32:49.837470 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 9 05:32:49.837479 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 9 05:32:49.837487 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 9 05:32:49.837495 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 9 05:32:49.837503 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 9 05:32:49.837618 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 05:32:49.837733 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 05:32:49.837846 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 05:32:49.837856 kernel: vgaarb: loaded
Sep 9 05:32:49.837866 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 05:32:49.837874 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 05:32:49.837881 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 05:32:49.837889 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 05:32:49.837897 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 05:32:49.837905 kernel: pnp: PnP ACPI init
Sep 9 05:32:49.838049 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 9 05:32:49.838063 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 05:32:49.838072 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 05:32:49.838081 kernel: NET: Registered PF_INET protocol family
Sep 9 05:32:49.838089 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 05:32:49.838096 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 05:32:49.838104 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 05:32:49.838130 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 05:32:49.838144 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 05:32:49.838152 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 05:32:49.838160 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:32:49.838170 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:32:49.838178 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 05:32:49.838186 kernel: NET: Registered PF_XDP protocol family
Sep 9 05:32:49.838305 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 9 05:32:49.838421 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 9 05:32:49.838528 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 05:32:49.838632 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 05:32:49.838737 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 05:32:49.838847 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 9 05:32:49.838956 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 9 05:32:49.839060 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 9 05:32:49.839070 kernel: PCI: CLS 0 bytes, default 64
Sep 9 05:32:49.839079 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 9 05:32:49.839087 kernel: Initialise system trusted keyrings
Sep 9 05:32:49.839101 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 05:32:49.839126 kernel: Key type asymmetric registered
Sep 9 05:32:49.839155 kernel: Asymmetric key parser 'x509' registered
Sep 9 05:32:49.839163 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 05:32:49.839171 kernel: io scheduler mq-deadline registered
Sep 9 05:32:49.839179 kernel: io scheduler kyber registered
Sep 9 05:32:49.839187 kernel: io scheduler bfq registered
Sep 9 05:32:49.839195 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 05:32:49.839206 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 05:32:49.839214 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 05:32:49.839222 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 05:32:49.839230 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 05:32:49.839238 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 05:32:49.839246 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 05:32:49.839254 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 05:32:49.839262 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 05:32:49.839388 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 05:32:49.839526 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 05:32:49.839647 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T05:32:49 UTC (1757395969)
Sep 9 05:32:49.839755 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 9 05:32:49.839766 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 9 05:32:49.839773 kernel: efifb: probing for efifb
Sep 9 05:32:49.839782 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 9 05:32:49.839790 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 9 05:32:49.839798 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 9 05:32:49.839809 kernel: efifb: scrolling: redraw
Sep 9 05:32:49.839817 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 9 05:32:49.839824 kernel: Console: switching to colour frame buffer device 160x50
Sep 9 05:32:49.839833 kernel: fb0: EFI VGA frame buffer device
Sep 9 05:32:49.839841 kernel: pstore: Using crash dump compression: deflate
Sep 9 05:32:49.839848 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 9 05:32:49.839856 kernel: NET: Registered PF_INET6 protocol family
Sep 9 05:32:49.839864 kernel: Segment Routing with IPv6
Sep 9 05:32:49.839874 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 05:32:49.839884 kernel: NET: Registered PF_PACKET protocol family
Sep 9 05:32:49.839892 kernel: Key type dns_resolver registered
Sep 9 05:32:49.839899 kernel: IPI shorthand broadcast: enabled
Sep 9 05:32:49.839907 kernel: sched_clock: Marking stable (2713001828, 153044769)->(2881042253, -14995656)
Sep 9 05:32:49.839915 kernel: registered taskstats version 1
Sep 9 05:32:49.839923 kernel: Loading compiled-in X.509 certificates
Sep 9 05:32:49.839931 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 884b9ad6a330f59ae6e6488b20a5491e41ff24a3'
Sep 9 05:32:49.839939 kernel: Demotion targets for Node 0: null
Sep 9 05:32:49.839946 kernel: Key type .fscrypt registered
Sep 9 05:32:49.839956 kernel: Key type fscrypt-provisioning registered
Sep 9 05:32:49.839964 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 05:32:49.839971 kernel: ima: Allocated hash algorithm: sha1
Sep 9 05:32:49.839979 kernel: ima: No architecture policies found
Sep 9 05:32:49.839987 kernel: clk: Disabling unused clocks
Sep 9 05:32:49.839995 kernel: Warning: unable to open an initial console.
Sep 9 05:32:49.840003 kernel: Freeing unused kernel image (initmem) memory: 54076K
Sep 9 05:32:49.840011 kernel: Write protecting the kernel read-only data: 24576k
Sep 9 05:32:49.840018 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 9 05:32:49.840028 kernel: Run /init as init process
Sep 9 05:32:49.840036 kernel: with arguments:
Sep 9 05:32:49.840044 kernel: /init
Sep 9 05:32:49.840052 kernel: with environment:
Sep 9 05:32:49.840059 kernel: HOME=/
Sep 9 05:32:49.840067 kernel: TERM=linux
Sep 9 05:32:49.840075 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 05:32:49.840084 systemd[1]: Successfully made /usr/ read-only.
Sep 9 05:32:49.840097 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:32:49.840106 systemd[1]: Detected virtualization kvm.
Sep 9 05:32:49.840172 systemd[1]: Detected architecture x86-64.
Sep 9 05:32:49.840181 systemd[1]: Running in initrd.
Sep 9 05:32:49.840189 systemd[1]: No hostname configured, using default hostname.
Sep 9 05:32:49.840198 systemd[1]: Hostname set to .
Sep 9 05:32:49.840206 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 05:32:49.840215 systemd[1]: Queued start job for default target initrd.target.
Sep 9 05:32:49.840226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:32:49.840234 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:32:49.840243 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 05:32:49.840252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:32:49.840260 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 05:32:49.840270 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 05:32:49.840282 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 05:32:49.840290 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 05:32:49.840298 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:32:49.840307 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:32:49.840315 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:32:49.840323 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:32:49.840332 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:32:49.840340 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:32:49.840348 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:32:49.840358 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:32:49.840367 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 05:32:49.840375 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 05:32:49.840383 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:32:49.840392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:32:49.840400 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:32:49.840408 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:32:49.840417 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 05:32:49.840425 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:32:49.840436 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 05:32:49.840444 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 05:32:49.840453 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 05:32:49.840461 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:32:49.840469 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:32:49.840478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:32:49.840486 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 05:32:49.840497 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:32:49.840505 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 05:32:49.840514 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 05:32:49.840543 systemd-journald[218]: Collecting audit messages is disabled.
Sep 9 05:32:49.840566 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 05:32:49.840575 systemd-journald[218]: Journal started
Sep 9 05:32:49.840594 systemd-journald[218]: Runtime Journal (/run/log/journal/82aabe2f0c864acb8677e469439d1977) is 6M, max 48.4M, 42.4M free.
Sep 9 05:32:49.830176 systemd-modules-load[221]: Inserted module 'overlay'
Sep 9 05:32:49.843322 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:32:49.847022 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:32:49.852258 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:32:49.857129 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 05:32:49.859622 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 9 05:32:49.860698 kernel: Bridge firewalling registered
Sep 9 05:32:49.861460 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:32:49.863220 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:32:49.867566 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 05:32:49.868860 systemd-tmpfiles[238]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 05:32:49.873421 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:32:49.874058 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:32:49.884321 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:32:49.894246 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:32:49.896201 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:32:49.907700 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:32:49.911726 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 05:32:49.944901 systemd-resolved[254]: Positive Trust Anchors:
Sep 9 05:32:49.944918 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:32:49.944947 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:32:49.947501 systemd-resolved[254]: Defaulting to hostname 'linux'.
Sep 9 05:32:49.957448 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:32:49.948539 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 05:32:49.956646 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:32:50.064190 kernel: SCSI subsystem initialized
Sep 9 05:32:50.073158 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 05:32:50.083155 kernel: iscsi: registered transport (tcp)
Sep 9 05:32:50.104401 kernel: iscsi: registered transport (qla4xxx)
Sep 9 05:32:50.104492 kernel: QLogic iSCSI HBA Driver
Sep 9 05:32:50.125489 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:32:50.153270 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:32:50.155050 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:32:50.220915 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:32:50.222522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 05:32:50.283148 kernel: raid6: avx2x4 gen() 22272 MB/s
Sep 9 05:32:50.300141 kernel: raid6: avx2x2 gen() 31261 MB/s
Sep 9 05:32:50.317173 kernel: raid6: avx2x1 gen() 25845 MB/s
Sep 9 05:32:50.317193 kernel: raid6: using algorithm avx2x2 gen() 31261 MB/s
Sep 9 05:32:50.335186 kernel: raid6: .... xor() 19880 MB/s, rmw enabled
Sep 9 05:32:50.335208 kernel: raid6: using avx2x2 recovery algorithm
Sep 9 05:32:50.355153 kernel: xor: automatically using best checksumming function avx
Sep 9 05:32:50.516157 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 05:32:50.525434 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:32:50.527085 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:32:50.557920 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 9 05:32:50.563339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:32:50.564355 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 05:32:50.590246 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Sep 9 05:32:50.618799 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:32:50.621259 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:32:50.687279 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:32:50.689344 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 05:32:50.726145 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 9 05:32:50.731210 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 05:32:50.738175 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 05:32:50.744346 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 05:32:50.744417 kernel: GPT:9289727 != 19775487
Sep 9 05:32:50.744429 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 05:32:50.744453 kernel: GPT:9289727 != 19775487
Sep 9 05:32:50.744464 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 05:32:50.744474 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:32:50.751148 kernel: libata version 3.00 loaded.
Sep 9 05:32:50.758266 kernel: ahci 0000:00:1f.2: version 3.0
Sep 9 05:32:50.758461 kernel: AES CTR mode by8 optimization enabled
Sep 9 05:32:50.758473 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 9 05:32:50.762739 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 9 05:32:50.762902 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 9 05:32:50.763043 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 9 05:32:50.766158 kernel: scsi host0: ahci
Sep 9 05:32:50.766210 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 9 05:32:50.772516 kernel: scsi host1: ahci
Sep 9 05:32:50.774391 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:32:50.774517 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:32:50.778931 kernel: scsi host2: ahci
Sep 9 05:32:50.779169 kernel: scsi host3: ahci
Sep 9 05:32:50.780992 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:32:50.791211 kernel: scsi host4: ahci
Sep 9 05:32:50.791398 kernel: scsi host5: ahci
Sep 9 05:32:50.791540 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Sep 9 05:32:50.791551 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Sep 9 05:32:50.791568 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Sep 9 05:32:50.791578 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Sep 9 05:32:50.791588 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Sep 9 05:32:50.791599 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Sep 9 05:32:50.789358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:32:50.823622 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 05:32:50.838947 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 05:32:50.839029 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 05:32:50.849586 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 05:32:50.860171 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 05:32:50.862988 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 05:32:50.863058 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:32:50.863105 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:32:50.868438 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:32:50.871376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:32:50.872673 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:32:50.889388 disk-uuid[634]: Primary Header is updated.
Sep 9 05:32:50.889388 disk-uuid[634]: Secondary Entries is updated.
Sep 9 05:32:50.889388 disk-uuid[634]: Secondary Header is updated.
Sep 9 05:32:50.894142 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:32:50.896624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:32:50.901655 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:32:51.100165 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 9 05:32:51.100225 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 9 05:32:51.100237 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 9 05:32:51.100255 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 9 05:32:51.101145 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 9 05:32:51.102141 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 9 05:32:51.103147 kernel: ata3.00: LPM support broken, forcing max_power
Sep 9 05:32:51.103160 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 9 05:32:51.103552 kernel: ata3.00: applying bridge limits
Sep 9 05:32:51.104646 kernel: ata3.00: LPM support broken, forcing max_power
Sep 9 05:32:51.104657 kernel: ata3.00: configured for UDMA/100
Sep 9 05:32:51.107145 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 9 05:32:51.163660 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 9 05:32:51.164030 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 05:32:51.194138 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 9 05:32:51.586128 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:32:51.586779 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 05:32:51.589375 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:32:51.591541 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:32:51.594343 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 05:32:51.635004 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 05:32:51.898203 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:32:51.899604 disk-uuid[637]: The operation has completed successfully.
Sep 9 05:32:51.929486 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 05:32:51.929608 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 05:32:51.961859 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 05:32:51.993542 sh[667]: Success
Sep 9 05:32:52.013074 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 05:32:52.013139 kernel: device-mapper: uevent: version 1.0.3
Sep 9 05:32:52.013152 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 05:32:52.022141 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 9 05:32:52.048290 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 05:32:52.052037 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 05:32:52.067431 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 05:32:52.073142 kernel: BTRFS: device fsid 9ca60a92-6b53-4529-adc0-1f4392d2ad56 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (679)
Sep 9 05:32:52.073192 kernel: BTRFS info (device dm-0): first mount of filesystem 9ca60a92-6b53-4529-adc0-1f4392d2ad56
Sep 9 05:32:52.074904 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 9 05:32:52.079329 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 05:32:52.079350 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 05:32:52.080412 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 05:32:52.081670 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 05:32:52.083593 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 05:32:52.084268 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 05:32:52.086429 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 05:32:52.109094 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710)
Sep 9 05:32:52.109137 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:32:52.110130 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 05:32:52.112723 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 05:32:52.112746 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 05:32:52.118140 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:32:52.118635 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 05:32:52.119812 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 05:32:52.208033 ignition[751]: Ignition 2.22.0 Sep 9 05:32:52.208046 ignition[751]: Stage: fetch-offline Sep 9 05:32:52.208077 ignition[751]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:32:52.208094 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:32:52.208191 ignition[751]: parsed url from cmdline: "" Sep 9 05:32:52.208195 ignition[751]: no config URL provided Sep 9 05:32:52.208200 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 05:32:52.208209 ignition[751]: no config at "/usr/lib/ignition/user.ign" Sep 9 05:32:52.208231 ignition[751]: op(1): [started] loading QEMU firmware config module Sep 9 05:32:52.208236 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 05:32:52.217268 ignition[751]: op(1): [finished] loading QEMU firmware config module Sep 9 05:32:52.224460 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:32:52.227328 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:32:52.259187 ignition[751]: parsing config with SHA512: feb88da459b4c87f71cd1085b6d2b5e2ba958899ea84feb44700bbce4edf7d4ee04a13ae497de5e0f6d95303517eee334b08fe59ba2252d60fffea921a29d0a0 Sep 9 05:32:52.264969 unknown[751]: fetched base config from "system" Sep 9 05:32:52.265203 unknown[751]: fetched user config from "qemu" Sep 9 05:32:52.267567 ignition[751]: fetch-offline: fetch-offline passed Sep 9 05:32:52.267680 ignition[751]: Ignition finished successfully Sep 9 05:32:52.270970 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:32:52.279868 systemd-networkd[857]: lo: Link UP Sep 9 05:32:52.279877 systemd-networkd[857]: lo: Gained carrier Sep 9 05:32:52.281333 systemd-networkd[857]: Enumeration completed Sep 9 05:32:52.281508 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:32:52.281672 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:32:52.281677 systemd-networkd[857]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:32:52.282920 systemd-networkd[857]: eth0: Link UP Sep 9 05:32:52.283049 systemd-networkd[857]: eth0: Gained carrier Sep 9 05:32:52.283058 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:32:52.283974 systemd[1]: Reached target network.target - Network. Sep 9 05:32:52.284389 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 05:32:52.285201 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 05:32:52.303150 systemd-networkd[857]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 05:32:52.319203 ignition[861]: Ignition 2.22.0 Sep 9 05:32:52.319217 ignition[861]: Stage: kargs Sep 9 05:32:52.319350 ignition[861]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:32:52.319360 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:32:52.320027 ignition[861]: kargs: kargs passed Sep 9 05:32:52.320069 ignition[861]: Ignition finished successfully Sep 9 05:32:52.324848 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 05:32:52.326785 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
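In the fetch-offline stage above, Ignition finds no config URL on the command line, loads the qemu_fw_cfg module to read the config from the hypervisor, and then logs the SHA512 of the config it parsed. A minimal sketch for reproducing such a digest from a local copy of a config (the filename here is hypothetical; the hash is over the exact bytes Ignition received, so any whitespace difference changes it):

```python
import hashlib
from pathlib import Path

CONFIG = Path("user.ign")  # hypothetical local copy of the Ignition config

digest = hashlib.sha512(CONFIG.read_bytes()).hexdigest()
print("sha512:", digest)
# Compare against the value Ignition printed after "parsing config with SHA512:".
```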
Sep 9 05:32:52.366729 ignition[870]: Ignition 2.22.0 Sep 9 05:32:52.366740 ignition[870]: Stage: disks Sep 9 05:32:52.366866 ignition[870]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:32:52.366875 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:32:52.367728 ignition[870]: disks: disks passed Sep 9 05:32:52.367770 ignition[870]: Ignition finished successfully Sep 9 05:32:52.371307 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 05:32:52.372720 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 05:32:52.375614 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 05:32:52.376791 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:32:52.376847 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:32:52.379786 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:32:52.383398 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 05:32:52.406543 systemd-fsck[881]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 05:32:52.414854 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 05:32:52.418014 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 05:32:52.523133 kernel: EXT4-fs (vda9): mounted filesystem d2d7815e-fa16-4396-ab9d-ac540c1d8856 r/w with ordered data mode. Quota mode: none. Sep 9 05:32:52.523446 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 05:32:52.524869 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 05:32:52.527391 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:32:52.528919 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 05:32:52.530021 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 05:32:52.530059 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 05:32:52.530090 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:32:52.547003 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 05:32:52.548284 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 05:32:52.553888 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (889) Sep 9 05:32:52.553910 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:32:52.553921 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:32:52.556762 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:32:52.556804 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:32:52.559099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 05:32:52.585635 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 05:32:52.589386 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory Sep 9 05:32:52.594154 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 05:32:52.598486 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 05:32:52.682350 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
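After the disks stage, systemd-fsck checks /dev/disk/by-label/ROOT and the ext4 filesystem on vda9 is mounted read-write at /sysroot. The label behind that by-label symlink lives in the ext4 superblock; as an aside, a small Python sketch that reads it directly (assumes read access to the device or a copied image; offsets follow the ext2/3/4 superblock layout, with the superblock 1024 bytes into the device, the 0xEF53 magic at offset 0x38, and the 16-byte volume name at offset 0x78):

```python
import struct

DEVICE = "/dev/vda9"  # assumes read access; use a copied image otherwise

with open(DEVICE, "rb") as f:
    f.seek(1024)      # the ext4 superblock starts 1024 bytes into the device
    sb = f.read(1024)

magic = struct.unpack_from("<H", sb, 0x38)[0]
if magic != 0xEF53:
    raise SystemExit("not an ext2/3/4 filesystem")

label = sb[0x78:0x88].split(b"\x00", 1)[0].decode()
print("filesystem label:", label)  # expected to print ROOT on this disk
```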
Sep 9 05:32:52.685450 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 05:32:52.687897 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 05:32:52.711145 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:32:52.722695 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 05:32:52.735393 ignition[1003]: INFO : Ignition 2.22.0 Sep 9 05:32:52.735393 ignition[1003]: INFO : Stage: mount Sep 9 05:32:52.737900 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:32:52.737900 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:32:52.737900 ignition[1003]: INFO : mount: mount passed Sep 9 05:32:52.737900 ignition[1003]: INFO : Ignition finished successfully Sep 9 05:32:52.741624 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 05:32:52.744821 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 05:32:53.073246 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 05:32:53.074910 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:32:53.100395 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Sep 9 05:32:53.100428 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:32:53.100440 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:32:53.104142 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:32:53.104190 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:32:53.105727 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 05:32:53.134985 ignition[1032]: INFO : Ignition 2.22.0 Sep 9 05:32:53.134985 ignition[1032]: INFO : Stage: files Sep 9 05:32:53.136728 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:32:53.136728 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:32:53.139193 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping Sep 9 05:32:53.141158 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 05:32:53.141158 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 05:32:53.144073 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 05:32:53.144073 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 05:32:53.147096 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 05:32:53.147096 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 05:32:53.147096 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 9 05:32:53.144327 unknown[1032]: wrote ssh authorized keys file for user: core Sep 9 05:32:53.189697 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 05:32:53.298271 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 05:32:53.300312 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/bin/cilium.tar.gz" Sep 9 05:32:53.300312 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 9 05:32:53.396228 systemd-networkd[857]: eth0: Gained IPv6LL Sep 9 05:32:53.500273 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 05:32:53.625266 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 05:32:53.627268 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 05:32:53.627268 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 05:32:53.627268 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:32:53.627268 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:32:53.627268 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:32:53.627268 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:32:53.627268 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:32:53.627268 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:32:53.641339 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:32:53.641339 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:32:53.641339 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:32:53.641339 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:32:53.641339 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:32:53.641339 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 9 05:32:54.136548 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 05:32:54.786699 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:32:54.786699 ignition[1032]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 05:32:54.790820 ignition[1032]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:32:54.793287 ignition[1032]: INFO : files: op(c): op(d): 
[finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:32:54.793287 ignition[1032]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 05:32:54.793287 ignition[1032]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 9 05:32:54.797969 ignition[1032]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 05:32:54.797969 ignition[1032]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 05:32:54.797969 ignition[1032]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 9 05:32:54.797969 ignition[1032]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 05:32:54.812129 ignition[1032]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 05:32:54.818431 ignition[1032]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 05:32:54.820067 ignition[1032]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 05:32:54.820067 ignition[1032]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 9 05:32:54.820067 ignition[1032]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 05:32:54.820067 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:32:54.820067 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:32:54.820067 ignition[1032]: INFO : files: files passed Sep 9 05:32:54.820067 ignition[1032]: INFO : Ignition finished successfully Sep 9 05:32:54.827049 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 05:32:54.830199 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 05:32:54.832004 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 05:32:54.853001 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 05:32:54.853166 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 05:32:54.856317 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 05:32:54.858929 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:32:54.858929 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:32:54.861936 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:32:54.863762 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:32:54.865207 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 05:32:54.868840 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 05:32:54.914199 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 05:32:54.914323 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
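The files stage above finishes by writing /sysroot/etc/.ignition-result.json before ignition-quench records completion. A trivial sketch for inspecting that result file from the booted system; the log does not show the file's fields, so nothing about its schema is assumed here:

```python
import json
from pathlib import Path

# Written as /sysroot/etc/.ignition-result.json from the initrd, visible at /etc after switch-root.
result_path = Path("/etc/.ignition-result.json")

result = json.loads(result_path.read_text())
print(json.dumps(result, indent=2, sort_keys=True))
```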
Sep 9 05:32:54.915733 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 05:32:54.917556 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 05:32:54.919461 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 05:32:54.920283 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 05:32:54.949952 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:32:54.951306 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 05:32:54.979948 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:32:54.980102 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:32:54.982243 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 05:32:54.984300 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 05:32:54.984408 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:32:54.988901 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 05:32:54.989043 systemd[1]: Stopped target basic.target - Basic System. Sep 9 05:32:54.990852 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 05:32:54.991206 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:32:54.991657 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 05:32:54.991979 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:32:54.992468 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 05:32:54.992784 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:32:54.993140 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 05:32:54.993601 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 05:32:54.993915 systemd[1]: Stopped target swap.target - Swaps. Sep 9 05:32:54.994381 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 05:32:54.994482 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 05:32:55.008982 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:32:55.009487 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:32:55.009767 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 05:32:55.016043 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:32:55.018449 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 05:32:55.018557 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 05:32:55.021293 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 05:32:55.021406 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:32:55.022462 systemd[1]: Stopped target paths.target - Path Units. Sep 9 05:32:55.022697 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 05:32:55.029201 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:32:55.029353 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 05:32:55.031803 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 9 05:32:55.032147 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 05:32:55.032228 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 05:32:55.032622 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 05:32:55.032698 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 05:32:55.036572 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 05:32:55.036678 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:32:55.038308 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 05:32:55.038408 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 05:32:55.040888 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 05:32:55.042553 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 05:32:55.044711 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 05:32:55.044823 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:32:55.046047 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 05:32:55.046168 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:32:55.054005 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 05:32:55.062346 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 05:32:55.077972 ignition[1087]: INFO : Ignition 2.22.0 Sep 9 05:32:55.077972 ignition[1087]: INFO : Stage: umount Sep 9 05:32:55.079640 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:32:55.079640 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:32:55.079640 ignition[1087]: INFO : umount: umount passed Sep 9 05:32:55.079640 ignition[1087]: INFO : Ignition finished successfully Sep 9 05:32:55.083432 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 05:32:55.083577 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 05:32:55.086574 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 05:32:55.087057 systemd[1]: Stopped target network.target - Network. Sep 9 05:32:55.087596 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 05:32:55.087644 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 05:32:55.087939 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 05:32:55.087978 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 05:32:55.088592 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 05:32:55.088640 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 05:32:55.088905 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 05:32:55.088944 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 05:32:55.089493 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 05:32:55.089798 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 05:32:55.107434 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 05:32:55.107574 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 05:32:55.110896 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 05:32:55.111230 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 9 05:32:55.111275 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:32:55.116289 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:32:55.116565 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 05:32:55.116681 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 05:32:55.121803 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 05:32:55.122658 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 05:32:55.124148 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 05:32:55.124196 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:32:55.126540 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 05:32:55.127416 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 05:32:55.127466 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:32:55.129297 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 05:32:55.129346 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:32:55.133221 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 05:32:55.133267 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 05:32:55.136267 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:32:55.139085 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 05:32:55.147160 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 05:32:55.147282 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 05:32:55.167872 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 05:32:55.168066 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:32:55.169218 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 05:32:55.169268 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 05:32:55.171223 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 05:32:55.171261 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:32:55.173092 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 05:32:55.173151 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:32:55.175471 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 05:32:55.175518 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 05:32:55.179342 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 05:32:55.179392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 05:32:55.184080 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 05:32:55.187278 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 05:32:55.187334 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:32:55.190819 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 05:32:55.190870 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 9 05:32:55.194239 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 05:32:55.194297 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:32:55.197779 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 05:32:55.197829 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:32:55.198054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:32:55.198092 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:32:55.213989 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 05:32:55.214172 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 05:32:55.271070 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 05:32:55.271243 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 05:32:55.272388 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 05:32:55.273778 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 05:32:55.273836 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 05:32:55.276429 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 05:32:55.295700 systemd[1]: Switching root. Sep 9 05:32:55.326499 systemd-journald[218]: Journal stopped Sep 9 05:32:56.492945 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Sep 9 05:32:56.493021 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 05:32:56.493035 kernel: SELinux: policy capability open_perms=1 Sep 9 05:32:56.493046 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 05:32:56.493058 kernel: SELinux: policy capability always_check_network=0 Sep 9 05:32:56.493075 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 05:32:56.493086 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 05:32:56.493098 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 05:32:56.493123 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 05:32:56.493134 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 05:32:56.493146 kernel: audit: type=1403 audit(1757395975.771:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 05:32:56.493158 systemd[1]: Successfully loaded SELinux policy in 62.827ms. Sep 9 05:32:56.493181 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.228ms. Sep 9 05:32:56.493196 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 05:32:56.493209 systemd[1]: Detected virtualization kvm. Sep 9 05:32:56.493221 systemd[1]: Detected architecture x86-64. Sep 9 05:32:56.493237 systemd[1]: Detected first boot. Sep 9 05:32:56.493250 systemd[1]: Initializing machine ID from VM UUID. Sep 9 05:32:56.493262 zram_generator::config[1132]: No configuration found. 
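The journal line above prints systemd 256.8's compile-time feature string, where a leading + marks a feature built in and a leading - one compiled out. A small Python sketch that splits such a string into the two groups (the string below is an abbreviated copy of the one in the log):

```python
def split_features(feature_string: str):
    """Group a systemd feature string into enabled (+) and disabled (-) sets."""
    enabled, disabled = set(), set()
    for token in feature_string.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
    return enabled, disabled

features = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT"
enabled, disabled = split_features(features)
print("enabled: ", sorted(enabled))
print("disabled:", sorted(disabled))
```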
Sep 9 05:32:56.493275 kernel: Guest personality initialized and is inactive Sep 9 05:32:56.493287 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 05:32:56.493299 kernel: Initialized host personality Sep 9 05:32:56.493312 kernel: NET: Registered PF_VSOCK protocol family Sep 9 05:32:56.493323 systemd[1]: Populated /etc with preset unit settings. Sep 9 05:32:56.493336 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 05:32:56.493352 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 05:32:56.493363 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 05:32:56.493375 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 05:32:56.493387 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 05:32:56.493399 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 05:32:56.493413 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 05:32:56.493425 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 05:32:56.493437 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 05:32:56.493449 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 05:32:56.493461 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 05:32:56.493473 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 05:32:56.493486 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:32:56.493498 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:32:56.493510 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 05:32:56.493523 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 05:32:56.493536 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 05:32:56.493549 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 05:32:56.493561 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 05:32:56.493573 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:32:56.493584 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:32:56.493596 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 05:32:56.493613 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 05:32:56.493628 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 05:32:56.493640 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 05:32:56.493651 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:32:56.493663 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:32:56.493675 systemd[1]: Reached target slices.target - Slice Units. Sep 9 05:32:56.493687 systemd[1]: Reached target swap.target - Swaps. Sep 9 05:32:56.493698 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 05:32:56.493710 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Sep 9 05:32:56.493722 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 05:32:56.493736 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:32:56.493748 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:32:56.493760 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:32:56.493773 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 05:32:56.493784 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 05:32:56.493796 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 05:32:56.493808 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 05:32:56.493821 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:32:56.493833 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 05:32:56.493847 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 05:32:56.493859 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 05:32:56.493874 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 05:32:56.493886 systemd[1]: Reached target machines.target - Containers. Sep 9 05:32:56.493898 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 05:32:56.493991 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:32:56.494008 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 05:32:56.494048 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 05:32:56.494064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:32:56.494076 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 05:32:56.494089 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:32:56.494100 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 05:32:56.494259 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:32:56.494273 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 05:32:56.494295 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 05:32:56.494307 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 05:32:56.494327 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 05:32:56.494342 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 05:32:56.494354 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:32:56.494366 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:32:56.494378 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 05:32:56.494390 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 9 05:32:56.494402 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 05:32:56.494413 kernel: ACPI: bus type drm_connector registered Sep 9 05:32:56.494446 systemd-journald[1214]: Collecting audit messages is disabled. Sep 9 05:32:56.494472 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 05:32:56.494484 systemd-journald[1214]: Journal started Sep 9 05:32:56.494508 systemd-journald[1214]: Runtime Journal (/run/log/journal/82aabe2f0c864acb8677e469439d1977) is 6M, max 48.4M, 42.4M free. Sep 9 05:32:56.276348 systemd[1]: Queued start job for default target multi-user.target. Sep 9 05:32:56.296927 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 05:32:56.297396 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 05:32:56.497514 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:32:56.497587 kernel: loop: module loaded Sep 9 05:32:56.500139 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 05:32:56.500177 kernel: fuse: init (API version 7.41) Sep 9 05:32:56.500192 systemd[1]: Stopped verity-setup.service. Sep 9 05:32:56.502141 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:32:56.506137 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:32:56.507801 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 05:32:56.508929 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 05:32:56.512183 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 05:32:56.513340 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 05:32:56.514610 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 05:32:56.515896 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 05:32:56.517360 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 05:32:56.518951 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:32:56.520565 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 05:32:56.520782 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 05:32:56.522416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:32:56.522650 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:32:56.524059 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 05:32:56.524317 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:32:56.525626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:32:56.525854 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:32:56.527439 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 05:32:56.527655 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 05:32:56.528971 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:32:56.529290 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:32:56.530655 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:32:56.532080 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 9 05:32:56.533608 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 05:32:56.535222 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 05:32:56.548384 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:32:56.550749 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 05:32:56.552806 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 05:32:56.553935 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 05:32:56.553962 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:32:56.555829 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 05:32:56.562935 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 05:32:56.565297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:32:56.567251 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 05:32:56.570738 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 05:32:56.572008 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:32:56.574204 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 05:32:56.575380 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:32:56.576722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:32:56.585796 systemd-journald[1214]: Time spent on flushing to /var/log/journal/82aabe2f0c864acb8677e469439d1977 is 17.038ms for 1073 entries. Sep 9 05:32:56.585796 systemd-journald[1214]: System Journal (/var/log/journal/82aabe2f0c864acb8677e469439d1977) is 8M, max 195.6M, 187.6M free. Sep 9 05:32:56.621329 systemd-journald[1214]: Received client request to flush runtime journal. Sep 9 05:32:56.621375 kernel: loop0: detected capacity change from 0 to 128016 Sep 9 05:32:56.582340 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 05:32:56.586777 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 05:32:56.589602 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 05:32:56.592298 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 05:32:56.596290 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 05:32:56.600816 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 05:32:56.604771 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 05:32:56.606319 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:32:56.625472 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 05:32:56.627773 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:32:56.638281 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. 
Sep 9 05:32:56.638298 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Sep 9 05:32:56.641140 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 05:32:56.642374 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 05:32:56.643102 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 05:32:56.644915 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:32:56.649082 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 05:32:56.663131 kernel: loop1: detected capacity change from 0 to 224512 Sep 9 05:32:56.688142 kernel: loop2: detected capacity change from 0 to 110984 Sep 9 05:32:56.688484 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 05:32:56.693804 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:32:56.716635 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Sep 9 05:32:56.716655 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Sep 9 05:32:56.721292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:32:56.730490 kernel: loop3: detected capacity change from 0 to 128016 Sep 9 05:32:56.738139 kernel: loop4: detected capacity change from 0 to 224512 Sep 9 05:32:56.746131 kernel: loop5: detected capacity change from 0 to 110984 Sep 9 05:32:56.754690 (sd-merge)[1276]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 05:32:56.755308 (sd-merge)[1276]: Merged extensions into '/usr'. Sep 9 05:32:56.762093 systemd[1]: Reload requested from client PID 1251 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 05:32:56.762124 systemd[1]: Reloading... Sep 9 05:32:56.839205 zram_generator::config[1314]: No configuration found. Sep 9 05:32:56.895136 ldconfig[1246]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 05:32:57.012766 systemd[1]: Reloading finished in 250 ms. Sep 9 05:32:57.043445 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 05:32:57.045056 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 05:32:57.060389 systemd[1]: Starting ensure-sysext.service... Sep 9 05:32:57.062135 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:32:57.072167 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)... Sep 9 05:32:57.072184 systemd[1]: Reloading... Sep 9 05:32:57.081346 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 05:32:57.081385 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 05:32:57.081703 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 05:32:57.081969 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 05:32:57.082855 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 05:32:57.083151 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Sep 9 05:32:57.083222 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. 
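The (sd-merge) lines above show systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr; the kubernetes image is the one Ignition linked at /etc/extensions/kubernetes.raw during the files stage. A short sketch that lists entries in that directory and where any symlinks point (only the /etc/extensions location seen in this log is assumed; sysext also consults other directories that this sketch does not cover):

```python
from pathlib import Path

ext_dir = Path("/etc/extensions")  # location of the kubernetes.raw link written in this log

if ext_dir.is_dir():
    for entry in sorted(ext_dir.iterdir()):
        target = entry.resolve() if entry.is_symlink() else entry
        print(f"{entry.name} -> {target}")
else:
    print(f"{ext_dir} does not exist on this system")
```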
Sep 9 05:32:57.107846 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 05:32:57.109200 systemd-tmpfiles[1340]: Skipping /boot Sep 9 05:32:57.121358 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 05:32:57.121368 systemd-tmpfiles[1340]: Skipping /boot Sep 9 05:32:57.133148 zram_generator::config[1367]: No configuration found. Sep 9 05:32:57.307508 systemd[1]: Reloading finished in 234 ms. Sep 9 05:32:57.333868 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 05:32:57.354174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:32:57.363023 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:32:57.365724 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 05:32:57.384084 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 05:32:57.387357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:32:57.392334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:32:57.395343 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 05:32:57.398799 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:32:57.399096 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:32:57.404287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:32:57.407425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:32:57.410627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:32:57.413456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:32:57.413562 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:32:57.413646 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:32:57.414873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:32:57.415104 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:32:57.416877 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:32:57.417198 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:32:57.419104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:32:57.419331 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:32:57.425798 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 05:32:57.433440 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:32:57.433647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 9 05:32:57.435910 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:32:57.436337 systemd-udevd[1411]: Using default interface naming scheme 'v255'. Sep 9 05:32:57.439192 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:32:57.441462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:32:57.444273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:32:57.444375 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:32:57.448424 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 05:32:57.453851 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 05:32:57.454940 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:32:57.457466 augenrules[1443]: No rules Sep 9 05:32:57.457755 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 05:32:57.459520 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:32:57.459763 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:32:57.472489 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 05:32:57.474153 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:32:57.475939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:32:57.476192 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:32:57.477924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:32:57.478176 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:32:57.480065 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:32:57.480311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:32:57.481903 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 05:32:57.505176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:32:57.507351 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:32:57.508425 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:32:57.510292 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:32:57.512552 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 05:32:57.515623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:32:57.522517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:32:57.523672 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:32:57.523775 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 9 05:32:57.529440 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:32:57.530522 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 05:32:57.530616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:32:57.532407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:32:57.532629 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:32:57.534239 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 05:32:57.534446 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:32:57.535940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:32:57.536176 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:32:57.538050 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:32:57.538389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:32:57.540323 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 05:32:57.549269 augenrules[1477]: /sbin/augenrules: No change Sep 9 05:32:57.550454 systemd[1]: Finished ensure-sysext.service. Sep 9 05:32:57.554069 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:32:57.554316 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:32:57.557212 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 05:32:57.559384 augenrules[1513]: No rules Sep 9 05:32:57.561211 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:32:57.562317 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:32:57.603638 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 05:32:57.636675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 05:32:57.639428 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 05:32:57.643148 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 05:32:57.654156 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 9 05:32:57.659138 kernel: ACPI: button: Power Button [PWRF] Sep 9 05:32:57.663474 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 05:32:57.681830 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 9 05:32:57.682161 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 05:32:57.682334 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 05:32:57.719624 systemd-networkd[1485]: lo: Link UP Sep 9 05:32:57.719635 systemd-networkd[1485]: lo: Gained carrier Sep 9 05:32:57.721257 systemd-networkd[1485]: Enumeration completed Sep 9 05:32:57.721360 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:32:57.721812 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 05:32:57.721824 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:32:57.722639 systemd-networkd[1485]: eth0: Link UP Sep 9 05:32:57.722815 systemd-networkd[1485]: eth0: Gained carrier Sep 9 05:32:57.722834 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:32:57.731163 systemd-networkd[1485]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 05:32:57.737548 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 05:32:57.742370 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 05:32:57.814242 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 05:32:57.821095 kernel: kvm_amd: TSC scaling supported Sep 9 05:32:57.821225 kernel: kvm_amd: Nested Virtualization enabled Sep 9 05:32:57.821262 kernel: kvm_amd: Nested Paging enabled Sep 9 05:32:57.821277 kernel: kvm_amd: LBR virtualization supported Sep 9 05:32:57.822314 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 05:32:57.822334 kernel: kvm_amd: Virtual GIF supported Sep 9 05:32:57.845145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:32:57.859920 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 05:32:57.861490 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 05:32:59.248083 systemd-timesyncd[1514]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 05:32:59.248135 systemd-timesyncd[1514]: Initial clock synchronization to Tue 2025-09-09 05:32:59.247970 UTC. Sep 9 05:32:59.251134 systemd-resolved[1409]: Positive Trust Anchors: Sep 9 05:32:59.251159 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:32:59.251190 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:32:59.256226 systemd-resolved[1409]: Defaulting to hostname 'linux'. Sep 9 05:32:59.260607 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:32:59.262542 systemd[1]: Reached target network.target - Network. Sep 9 05:32:59.263692 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:32:59.266087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:32:59.266373 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:32:59.269445 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:32:59.271706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:32:59.297658 kernel: EDAC MC: Ver: 3.0.0 Sep 9 05:32:59.323167 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
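For reference, eth0 above was matched by the stock /usr/lib/systemd/network/zz-default.network and configured via DHCPv4 (10.0.0.89/16 from 10.0.0.1). The actual contents of that file are not shown in this log, so the unit body below is an assumption; a minimal sketch of a comparable catch-all DHCP unit and a check of which unit a link matched:

  # Show the .network file systemd-networkd applied to eth0 and the DHCP lease it obtained
  networkctl status eth0
  # Sketch of a catch-all DHCP unit similar in effect to zz-default.network (assumed contents)
  cat <<'EOF' >/etc/systemd/network/50-dhcp.network
  [Match]
  Name=en* eth*

  [Network]
  DHCP=yes
  EOF
  systemctl restart systemd-networkd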
Sep 9 05:32:59.324622 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:32:59.325824 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 05:32:59.327085 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 05:32:59.331616 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 05:32:59.333075 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 05:32:59.334215 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 05:32:59.335741 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 05:32:59.337061 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 05:32:59.337088 systemd[1]: Reached target paths.target - Path Units. Sep 9 05:32:59.338017 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:32:59.339604 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 05:32:59.342217 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 05:32:59.345133 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 05:32:59.346467 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 05:32:59.347688 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 05:32:59.351166 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 05:32:59.352449 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 05:32:59.354139 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 05:32:59.355836 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:32:59.356777 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:32:59.357740 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:32:59.357765 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:32:59.358723 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 05:32:59.360666 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 05:32:59.362520 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 05:32:59.364594 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 05:32:59.366552 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 05:32:59.368125 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 05:32:59.376027 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 05:32:59.380263 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 05:32:59.383696 jq[1573]: false Sep 9 05:32:59.383879 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 05:32:59.386009 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Sep 9 05:32:59.386261 oslogin_cache_refresh[1575]: Refreshing passwd entry cache Sep 9 05:32:59.388343 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Refreshing passwd entry cache Sep 9 05:32:59.388730 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 05:32:59.392759 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 05:32:59.394870 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 05:32:59.395389 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 05:32:59.395438 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Failure getting users, quitting Sep 9 05:32:59.395438 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 05:32:59.395424 oslogin_cache_refresh[1575]: Failure getting users, quitting Sep 9 05:32:59.395441 oslogin_cache_refresh[1575]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 05:32:59.395705 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Refreshing group entry cache Sep 9 05:32:59.395502 oslogin_cache_refresh[1575]: Refreshing group entry cache Sep 9 05:32:59.397823 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 05:32:59.401332 extend-filesystems[1574]: Found /dev/vda6 Sep 9 05:32:59.402344 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 05:32:59.402468 oslogin_cache_refresh[1575]: Failure getting groups, quitting Sep 9 05:32:59.406313 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Failure getting groups, quitting Sep 9 05:32:59.406313 google_oslogin_nss_cache[1575]: oslogin_cache_refresh[1575]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 05:32:59.402482 oslogin_cache_refresh[1575]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 05:32:59.407174 extend-filesystems[1574]: Found /dev/vda9 Sep 9 05:32:59.410070 extend-filesystems[1574]: Checking size of /dev/vda9 Sep 9 05:32:59.414121 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 05:32:59.418674 jq[1589]: true Sep 9 05:32:59.416057 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 05:32:59.417608 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 05:32:59.417975 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 05:32:59.418212 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 05:32:59.420319 extend-filesystems[1574]: Resized partition /dev/vda9 Sep 9 05:32:59.428729 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 05:32:59.422114 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 05:32:59.428828 extend-filesystems[1601]: resize2fs 1.47.3 (8-Jul-2025) Sep 9 05:32:59.422353 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 05:32:59.435997 update_engine[1587]: I20250909 05:32:59.433089 1587 main.cc:92] Flatcar Update Engine starting Sep 9 05:32:59.425666 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 05:32:59.425908 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 9 05:32:59.446011 (ntainerd)[1604]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 05:32:59.453905 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 05:32:59.470650 tar[1602]: linux-amd64/LICENSE Sep 9 05:32:59.470898 jq[1603]: true Sep 9 05:32:59.480969 tar[1602]: linux-amd64/helm Sep 9 05:32:59.481622 systemd-logind[1585]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 05:32:59.481668 systemd-logind[1585]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 05:32:59.481885 systemd-logind[1585]: New seat seat0. Sep 9 05:32:59.484439 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 05:32:59.487916 extend-filesystems[1601]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 05:32:59.487916 extend-filesystems[1601]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 05:32:59.487916 extend-filesystems[1601]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 05:32:59.495705 extend-filesystems[1574]: Resized filesystem in /dev/vda9 Sep 9 05:32:59.489175 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 05:32:59.489459 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 05:32:59.499358 dbus-daemon[1571]: [system] SELinux support is enabled Sep 9 05:32:59.501239 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 05:32:59.502657 update_engine[1587]: I20250909 05:32:59.502594 1587 update_check_scheduler.cc:74] Next update check in 7m17s Sep 9 05:32:59.510937 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 05:32:59.510963 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 05:32:59.512362 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 05:32:59.512379 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 05:32:59.513398 dbus-daemon[1571]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 05:32:59.513667 systemd[1]: Started update-engine.service - Update Engine. Sep 9 05:32:59.517499 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 05:32:59.529871 bash[1634]: Updated "/home/core/.ssh/authorized_keys" Sep 9 05:32:59.532904 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 05:32:59.535170 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
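The extend-filesystems entries above show /dev/vda9 being grown online from 553472 to 1864699 4k blocks while mounted at /. A sketch of the equivalent manual steps, with device names taken from the log and assuming the underlying partition has already been enlarged:

  # Online-resize the mounted ext4 filesystem to fill its partition
  resize2fs /dev/vda9
  # Confirm the new block count reported by the superblock
  dumpe2fs -h /dev/vda9 | grep 'Block count'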
Sep 9 05:32:59.568647 locksmithd[1635]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 05:32:59.644695 containerd[1604]: time="2025-09-09T05:32:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 05:32:59.646803 containerd[1604]: time="2025-09-09T05:32:59.646754657Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 05:32:59.655082 containerd[1604]: time="2025-09-09T05:32:59.655045523Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.746µs" Sep 9 05:32:59.655082 containerd[1604]: time="2025-09-09T05:32:59.655077112Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 05:32:59.655170 containerd[1604]: time="2025-09-09T05:32:59.655094514Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 05:32:59.655271 containerd[1604]: time="2025-09-09T05:32:59.655249746Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 05:32:59.655295 containerd[1604]: time="2025-09-09T05:32:59.655271296Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 05:32:59.655314 containerd[1604]: time="2025-09-09T05:32:59.655294650Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:32:59.655375 containerd[1604]: time="2025-09-09T05:32:59.655353560Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:32:59.655375 containerd[1604]: time="2025-09-09T05:32:59.655371834Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:32:59.655693 containerd[1604]: time="2025-09-09T05:32:59.655669332Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:32:59.655693 containerd[1604]: time="2025-09-09T05:32:59.655688248Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:32:59.655743 containerd[1604]: time="2025-09-09T05:32:59.655698317Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:32:59.655743 containerd[1604]: time="2025-09-09T05:32:59.655707434Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 05:32:59.655823 containerd[1604]: time="2025-09-09T05:32:59.655802041Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 05:32:59.656107 containerd[1604]: time="2025-09-09T05:32:59.656082417Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:32:59.656132 containerd[1604]: time="2025-09-09T05:32:59.656120037Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:32:59.656152 containerd[1604]: time="2025-09-09T05:32:59.656136588Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 05:32:59.656171 containerd[1604]: time="2025-09-09T05:32:59.656162998Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 05:32:59.656343 containerd[1604]: time="2025-09-09T05:32:59.656322617Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 05:32:59.656404 containerd[1604]: time="2025-09-09T05:32:59.656386737Z" level=info msg="metadata content store policy set" policy=shared Sep 9 05:32:59.661898 containerd[1604]: time="2025-09-09T05:32:59.661872884Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 05:32:59.661942 containerd[1604]: time="2025-09-09T05:32:59.661916446Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 05:32:59.661942 containerd[1604]: time="2025-09-09T05:32:59.661939789Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 05:32:59.661979 containerd[1604]: time="2025-09-09T05:32:59.661952193Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 05:32:59.661979 containerd[1604]: time="2025-09-09T05:32:59.661965207Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 05:32:59.661979 containerd[1604]: time="2025-09-09T05:32:59.661976138Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 05:32:59.662030 containerd[1604]: time="2025-09-09T05:32:59.661989232Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 05:32:59.662030 containerd[1604]: time="2025-09-09T05:32:59.662005773Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 05:32:59.662030 containerd[1604]: time="2025-09-09T05:32:59.662016112Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 05:32:59.662030 containerd[1604]: time="2025-09-09T05:32:59.662025320Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 05:32:59.662101 containerd[1604]: time="2025-09-09T05:32:59.662033585Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 05:32:59.662101 containerd[1604]: time="2025-09-09T05:32:59.662045988Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 05:32:59.662167 containerd[1604]: time="2025-09-09T05:32:59.662145435Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 05:32:59.662190 containerd[1604]: time="2025-09-09T05:32:59.662169039Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 05:32:59.662190 containerd[1604]: time="2025-09-09T05:32:59.662187584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 05:32:59.662231 
containerd[1604]: time="2025-09-09T05:32:59.662207612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 05:32:59.662231 containerd[1604]: time="2025-09-09T05:32:59.662218111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 05:32:59.662271 containerd[1604]: time="2025-09-09T05:32:59.662229042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 05:32:59.662271 containerd[1604]: time="2025-09-09T05:32:59.662242988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 05:32:59.662271 containerd[1604]: time="2025-09-09T05:32:59.662260811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 05:32:59.662331 containerd[1604]: time="2025-09-09T05:32:59.662275198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 05:32:59.662331 containerd[1604]: time="2025-09-09T05:32:59.662289265Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 05:32:59.662331 containerd[1604]: time="2025-09-09T05:32:59.662302359Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 05:32:59.662383 containerd[1604]: time="2025-09-09T05:32:59.662366630Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 05:32:59.662407 containerd[1604]: time="2025-09-09T05:32:59.662383151Z" level=info msg="Start snapshots syncer" Sep 9 05:32:59.662427 containerd[1604]: time="2025-09-09T05:32:59.662410702Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 05:32:59.662699 containerd[1604]: time="2025-09-09T05:32:59.662664278Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 05:32:59.662797 containerd[1604]: time="2025-09-09T05:32:59.662715193Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 05:32:59.664046 containerd[1604]: time="2025-09-09T05:32:59.664024138Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 05:32:59.664146 containerd[1604]: time="2025-09-09T05:32:59.664125147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 05:32:59.664175 containerd[1604]: time="2025-09-09T05:32:59.664150204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 05:32:59.664175 containerd[1604]: time="2025-09-09T05:32:59.664161485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 05:32:59.664175 containerd[1604]: time="2025-09-09T05:32:59.664172556Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 05:32:59.664228 containerd[1604]: time="2025-09-09T05:32:59.664183957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 05:32:59.664228 containerd[1604]: time="2025-09-09T05:32:59.664194597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 05:32:59.664228 containerd[1604]: time="2025-09-09T05:32:59.664205457Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 05:32:59.664285 containerd[1604]: time="2025-09-09T05:32:59.664231667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 05:32:59.664285 containerd[1604]: 
time="2025-09-09T05:32:59.664243749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 05:32:59.664285 containerd[1604]: time="2025-09-09T05:32:59.664253948Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 05:32:59.664337 containerd[1604]: time="2025-09-09T05:32:59.664285708Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:32:59.664337 containerd[1604]: time="2025-09-09T05:32:59.664309533Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:32:59.664337 containerd[1604]: time="2025-09-09T05:32:59.664319221Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:32:59.664337 containerd[1604]: time="2025-09-09T05:32:59.664330091Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:32:59.664412 containerd[1604]: time="2025-09-09T05:32:59.664338908Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 05:32:59.664412 containerd[1604]: time="2025-09-09T05:32:59.664353956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 05:32:59.664412 containerd[1604]: time="2025-09-09T05:32:59.664365077Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 05:32:59.664412 containerd[1604]: time="2025-09-09T05:32:59.664382039Z" level=info msg="runtime interface created" Sep 9 05:32:59.664412 containerd[1604]: time="2025-09-09T05:32:59.664387238Z" level=info msg="created NRI interface" Sep 9 05:32:59.664412 containerd[1604]: time="2025-09-09T05:32:59.664396245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 05:32:59.664412 containerd[1604]: time="2025-09-09T05:32:59.664405743Z" level=info msg="Connect containerd service" Sep 9 05:32:59.664601 containerd[1604]: time="2025-09-09T05:32:59.664427013Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 05:32:59.666639 containerd[1604]: time="2025-09-09T05:32:59.665092521Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:32:59.737348 containerd[1604]: time="2025-09-09T05:32:59.737238755Z" level=info msg="Start subscribing containerd event" Sep 9 05:32:59.737348 containerd[1604]: time="2025-09-09T05:32:59.737312844Z" level=info msg="Start recovering state" Sep 9 05:32:59.737516 containerd[1604]: time="2025-09-09T05:32:59.737494995Z" level=info msg="Start event monitor" Sep 9 05:32:59.737542 containerd[1604]: time="2025-09-09T05:32:59.737527336Z" level=info msg="Start cni network conf syncer for default" Sep 9 05:32:59.737542 containerd[1604]: time="2025-09-09T05:32:59.737537485Z" level=info msg="Start streaming server" Sep 9 05:32:59.737579 containerd[1604]: time="2025-09-09T05:32:59.737553876Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 05:32:59.737579 containerd[1604]: time="2025-09-09T05:32:59.737561881Z" level=info 
msg="runtime interface starting up..." Sep 9 05:32:59.737749 containerd[1604]: time="2025-09-09T05:32:59.737567451Z" level=info msg="starting plugins..." Sep 9 05:32:59.737788 containerd[1604]: time="2025-09-09T05:32:59.737754392Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 05:32:59.737881 containerd[1604]: time="2025-09-09T05:32:59.737701613Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 05:32:59.737924 containerd[1604]: time="2025-09-09T05:32:59.737909713Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 05:32:59.737992 containerd[1604]: time="2025-09-09T05:32:59.737972821Z" level=info msg="containerd successfully booted in 0.094036s" Sep 9 05:32:59.738081 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 05:32:59.778693 tar[1602]: linux-amd64/README.md Sep 9 05:32:59.799907 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 05:32:59.805061 sshd_keygen[1597]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 05:32:59.828417 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 05:32:59.831231 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 05:32:59.862860 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 05:32:59.863109 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 05:32:59.865605 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 05:32:59.886155 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 05:32:59.888983 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 05:32:59.891036 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 05:32:59.892294 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 05:33:00.604839 systemd-networkd[1485]: eth0: Gained IPv6LL Sep 9 05:33:00.607735 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 05:33:00.609586 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 05:33:00.612015 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 05:33:00.614274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:33:00.616331 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 05:33:00.648851 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 05:33:00.650843 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 05:33:00.651100 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 05:33:00.653335 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 05:33:01.330607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:33:01.332263 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 05:33:01.333735 systemd[1]: Startup finished in 2.764s (kernel) + 6.138s (initrd) + 4.238s (userspace) = 13.142s. 
Sep 9 05:33:01.339975 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:33:01.736844 kubelet[1705]: E0909 05:33:01.736687 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:33:01.740648 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:33:01.740839 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:33:01.741242 systemd[1]: kubelet.service: Consumed 956ms CPU time, 266.6M memory peak. Sep 9 05:33:04.645883 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 05:33:04.647130 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:44996.service - OpenSSH per-connection server daemon (10.0.0.1:44996). Sep 9 05:33:04.851062 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 44996 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:33:04.852999 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:33:04.859218 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 05:33:04.860325 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 05:33:04.866374 systemd-logind[1585]: New session 1 of user core. Sep 9 05:33:04.884335 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 05:33:04.887549 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 05:33:04.906828 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 05:33:04.909229 systemd-logind[1585]: New session c1 of user core. Sep 9 05:33:05.048840 systemd[1723]: Queued start job for default target default.target. Sep 9 05:33:05.066781 systemd[1723]: Created slice app.slice - User Application Slice. Sep 9 05:33:05.066805 systemd[1723]: Reached target paths.target - Paths. Sep 9 05:33:05.066842 systemd[1723]: Reached target timers.target - Timers. Sep 9 05:33:05.068210 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 05:33:05.079773 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 05:33:05.079890 systemd[1723]: Reached target sockets.target - Sockets. Sep 9 05:33:05.079928 systemd[1723]: Reached target basic.target - Basic System. Sep 9 05:33:05.079967 systemd[1723]: Reached target default.target - Main User Target. Sep 9 05:33:05.080004 systemd[1723]: Startup finished in 164ms. Sep 9 05:33:05.080347 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 05:33:05.081958 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 05:33:05.143041 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:45012.service - OpenSSH per-connection server daemon (10.0.0.1:45012). Sep 9 05:33:05.190085 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 45012 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:33:05.191232 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:33:05.195359 systemd-logind[1585]: New session 2 of user core. 
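The kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until the node is provisioned (on a kubeadm-style setup it is written by kubeadm init/join; how this node is meant to be joined is not shown in the log, so that step is an assumption):

  # The file the kubelet tries to load; absent until bootstrap
  ls -l /var/lib/kubelet/config.yaml
  # Inspect the restart loop and the exact error
  journalctl -u kubelet -n 20 --no-pager
  # On a kubeadm-managed node, joining writes config.yaml and the unit recovers (placeholders are illustrative)
  # kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>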
Sep 9 05:33:05.204750 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 05:33:05.256053 sshd[1737]: Connection closed by 10.0.0.1 port 45012 Sep 9 05:33:05.256414 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Sep 9 05:33:05.268900 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:45012.service: Deactivated successfully. Sep 9 05:33:05.270455 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 05:33:05.271176 systemd-logind[1585]: Session 2 logged out. Waiting for processes to exit. Sep 9 05:33:05.273691 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:45024.service - OpenSSH per-connection server daemon (10.0.0.1:45024). Sep 9 05:33:05.274185 systemd-logind[1585]: Removed session 2. Sep 9 05:33:05.337944 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 45024 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:33:05.339123 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:33:05.342798 systemd-logind[1585]: New session 3 of user core. Sep 9 05:33:05.357749 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 05:33:05.405312 sshd[1746]: Connection closed by 10.0.0.1 port 45024 Sep 9 05:33:05.405614 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Sep 9 05:33:05.418951 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:45024.service: Deactivated successfully. Sep 9 05:33:05.420564 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 05:33:05.421258 systemd-logind[1585]: Session 3 logged out. Waiting for processes to exit. Sep 9 05:33:05.423909 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:45032.service - OpenSSH per-connection server daemon (10.0.0.1:45032). Sep 9 05:33:05.424596 systemd-logind[1585]: Removed session 3. Sep 9 05:33:05.467056 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 45032 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:33:05.468248 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:33:05.472464 systemd-logind[1585]: New session 4 of user core. Sep 9 05:33:05.481770 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 05:33:05.532851 sshd[1755]: Connection closed by 10.0.0.1 port 45032 Sep 9 05:33:05.533111 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Sep 9 05:33:05.544913 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:45032.service: Deactivated successfully. Sep 9 05:33:05.546558 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 05:33:05.547248 systemd-logind[1585]: Session 4 logged out. Waiting for processes to exit. Sep 9 05:33:05.549824 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:45036.service - OpenSSH per-connection server daemon (10.0.0.1:45036). Sep 9 05:33:05.550375 systemd-logind[1585]: Removed session 4. Sep 9 05:33:05.594980 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 45036 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:33:05.596109 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:33:05.600036 systemd-logind[1585]: New session 5 of user core. Sep 9 05:33:05.613726 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 9 05:33:05.670801 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 05:33:05.671100 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:33:05.686523 sudo[1765]: pam_unix(sudo:session): session closed for user root Sep 9 05:33:05.688600 sshd[1764]: Connection closed by 10.0.0.1 port 45036 Sep 9 05:33:05.688950 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Sep 9 05:33:05.710195 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:45036.service: Deactivated successfully. Sep 9 05:33:05.711923 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 05:33:05.712664 systemd-logind[1585]: Session 5 logged out. Waiting for processes to exit. Sep 9 05:33:05.715249 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:45048.service - OpenSSH per-connection server daemon (10.0.0.1:45048). Sep 9 05:33:05.715984 systemd-logind[1585]: Removed session 5. Sep 9 05:33:05.766474 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 45048 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:33:05.767638 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:33:05.771695 systemd-logind[1585]: New session 6 of user core. Sep 9 05:33:05.785755 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 05:33:05.838661 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 05:33:05.838965 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:33:05.844992 sudo[1776]: pam_unix(sudo:session): session closed for user root Sep 9 05:33:05.851508 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 05:33:05.851814 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:33:05.862613 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:33:05.912509 augenrules[1798]: No rules Sep 9 05:33:05.914811 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:33:05.915193 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:33:05.916354 sudo[1775]: pam_unix(sudo:session): session closed for user root Sep 9 05:33:05.917823 sshd[1774]: Connection closed by 10.0.0.1 port 45048 Sep 9 05:33:05.918188 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Sep 9 05:33:05.930981 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:45048.service: Deactivated successfully. Sep 9 05:33:05.932568 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 05:33:05.933360 systemd-logind[1585]: Session 6 logged out. Waiting for processes to exit. Sep 9 05:33:05.935862 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:45062.service - OpenSSH per-connection server daemon (10.0.0.1:45062). Sep 9 05:33:05.936382 systemd-logind[1585]: Removed session 6. Sep 9 05:33:06.004567 sshd[1807]: Accepted publickey for core from 10.0.0.1 port 45062 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:33:06.005914 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:33:06.010575 systemd-logind[1585]: New session 7 of user core. Sep 9 05:33:06.021747 systemd[1]: Started session-7.scope - Session 7 of User core. 
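After the sudo commands above removed the default fragments under /etc/audit/rules.d, augenrules again reports "No rules" and audit-rules.service loads an empty set. A sketch for checking and reloading the ruleset:

  # Rule fragments augenrules merges; empty after the rm above
  ls /etc/audit/rules.d/
  # Rebuild and load the merged rules, then list what the kernel actually has
  augenrules --load
  auditctl -l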
Sep 9 05:33:06.073870 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 05:33:06.074170 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:33:06.363947 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 05:33:06.384932 (dockerd)[1831]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 05:33:06.606513 dockerd[1831]: time="2025-09-09T05:33:06.606451788Z" level=info msg="Starting up" Sep 9 05:33:06.607269 dockerd[1831]: time="2025-09-09T05:33:06.607246377Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 05:33:06.620477 dockerd[1831]: time="2025-09-09T05:33:06.620245451Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 05:33:06.922594 dockerd[1831]: time="2025-09-09T05:33:06.922488637Z" level=info msg="Loading containers: start." Sep 9 05:33:06.932665 kernel: Initializing XFRM netlink socket Sep 9 05:33:07.174667 systemd-networkd[1485]: docker0: Link UP Sep 9 05:33:07.179980 dockerd[1831]: time="2025-09-09T05:33:07.179941551Z" level=info msg="Loading containers: done." Sep 9 05:33:07.192508 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3838706086-merged.mount: Deactivated successfully. Sep 9 05:33:07.194405 dockerd[1831]: time="2025-09-09T05:33:07.194352161Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 05:33:07.194514 dockerd[1831]: time="2025-09-09T05:33:07.194443702Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 05:33:07.194559 dockerd[1831]: time="2025-09-09T05:33:07.194541335Z" level=info msg="Initializing buildkit" Sep 9 05:33:07.224116 dockerd[1831]: time="2025-09-09T05:33:07.224086492Z" level=info msg="Completed buildkit initialization" Sep 9 05:33:07.231085 dockerd[1831]: time="2025-09-09T05:33:07.231057012Z" level=info msg="Daemon has completed initialization" Sep 9 05:33:07.231174 dockerd[1831]: time="2025-09-09T05:33:07.231126733Z" level=info msg="API listen on /run/docker.sock" Sep 9 05:33:07.231274 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 05:33:07.918111 containerd[1604]: time="2025-09-09T05:33:07.918064762Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 9 05:33:08.435003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682984536.mount: Deactivated successfully. 
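dockerd above comes up with the overlay2 storage driver (warning that native overlay diff is disabled) and its API listening on /run/docker.sock. Quick checks of that reported state, assuming the standard docker CLI format templates:

  # Storage driver and server version as reported by the daemon
  docker info --format '{{.Driver}}'
  docker version --format '{{.Server.Version}}'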
Sep 9 05:33:09.269055 containerd[1604]: time="2025-09-09T05:33:09.268997841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:09.269751 containerd[1604]: time="2025-09-09T05:33:09.269687834Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 9 05:33:09.270831 containerd[1604]: time="2025-09-09T05:33:09.270799769Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:09.273163 containerd[1604]: time="2025-09-09T05:33:09.273122404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:09.274064 containerd[1604]: time="2025-09-09T05:33:09.274017583Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 1.355907697s" Sep 9 05:33:09.274128 containerd[1604]: time="2025-09-09T05:33:09.274064721Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 9 05:33:09.274665 containerd[1604]: time="2025-09-09T05:33:09.274612678Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 9 05:33:10.296742 containerd[1604]: time="2025-09-09T05:33:10.296687969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:10.297348 containerd[1604]: time="2025-09-09T05:33:10.297298113Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 9 05:33:10.298455 containerd[1604]: time="2025-09-09T05:33:10.298405971Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:10.300783 containerd[1604]: time="2025-09-09T05:33:10.300752781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:10.301614 containerd[1604]: time="2025-09-09T05:33:10.301585152Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.026919434s" Sep 9 05:33:10.301670 containerd[1604]: time="2025-09-09T05:33:10.301617202Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 9 05:33:10.302242 containerd[1604]: 
time="2025-09-09T05:33:10.302018254Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 9 05:33:11.633605 containerd[1604]: time="2025-09-09T05:33:11.633536380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:11.634511 containerd[1604]: time="2025-09-09T05:33:11.634441918Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 9 05:33:11.635740 containerd[1604]: time="2025-09-09T05:33:11.635694456Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:11.638021 containerd[1604]: time="2025-09-09T05:33:11.637994859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:11.638830 containerd[1604]: time="2025-09-09T05:33:11.638776475Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.336729507s" Sep 9 05:33:11.638830 containerd[1604]: time="2025-09-09T05:33:11.638816179Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 9 05:33:11.640644 containerd[1604]: time="2025-09-09T05:33:11.639530238Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 05:33:11.825794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 05:33:11.827301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:33:12.031464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:33:12.045972 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:33:12.176029 kubelet[2119]: E0909 05:33:12.175962 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:33:12.182523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:33:12.182758 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:33:12.183179 systemd[1]: kubelet.service: Consumed 222ms CPU time, 111.1M memory peak. Sep 9 05:33:12.770885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4034380135.mount: Deactivated successfully. 
Sep 9 05:33:13.435735 containerd[1604]: time="2025-09-09T05:33:13.435660262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:13.436553 containerd[1604]: time="2025-09-09T05:33:13.436521486Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 9 05:33:13.437774 containerd[1604]: time="2025-09-09T05:33:13.437744019Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:13.439782 containerd[1604]: time="2025-09-09T05:33:13.439738799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:13.440174 containerd[1604]: time="2025-09-09T05:33:13.440138939Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.800567915s" Sep 9 05:33:13.440174 containerd[1604]: time="2025-09-09T05:33:13.440167352Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 9 05:33:13.440637 containerd[1604]: time="2025-09-09T05:33:13.440593261Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 05:33:13.948108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4061053773.mount: Deactivated successfully. 
Sep 9 05:33:14.600864 containerd[1604]: time="2025-09-09T05:33:14.600809857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:14.601660 containerd[1604]: time="2025-09-09T05:33:14.601607082Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 05:33:14.602927 containerd[1604]: time="2025-09-09T05:33:14.602888444Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:14.605477 containerd[1604]: time="2025-09-09T05:33:14.605439828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:14.606298 containerd[1604]: time="2025-09-09T05:33:14.606253033Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.165633443s" Sep 9 05:33:14.606298 containerd[1604]: time="2025-09-09T05:33:14.606284241Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 05:33:14.606722 containerd[1604]: time="2025-09-09T05:33:14.606696214Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 05:33:15.104799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1199361247.mount: Deactivated successfully. 
Sep 9 05:33:15.112135 containerd[1604]: time="2025-09-09T05:33:15.112075480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:33:15.112895 containerd[1604]: time="2025-09-09T05:33:15.112865621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 05:33:15.114160 containerd[1604]: time="2025-09-09T05:33:15.114110145Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:33:15.116095 containerd[1604]: time="2025-09-09T05:33:15.116054651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:33:15.116684 containerd[1604]: time="2025-09-09T05:33:15.116642503Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 509.903078ms" Sep 9 05:33:15.116719 containerd[1604]: time="2025-09-09T05:33:15.116682518Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 05:33:15.117131 containerd[1604]: time="2025-09-09T05:33:15.117101043Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 05:33:15.647199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3724813260.mount: Deactivated successfully. 
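The PullImage entries above show the v1.32.8 control-plane images plus coredns and pause being pulled through containerd's CRI plugin (with etcd starting next), typically into the k8s.io namespace. A sketch of inspecting or repeating such a pull by hand, assuming crictl and ctr are installed:

  # Images known to the CRI plugin
  crictl images
  # Manual equivalent of one of the pulls in the log
  ctr -n k8s.io images pull registry.k8s.io/pause:3.10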
Sep 9 05:33:17.422733 containerd[1604]: time="2025-09-09T05:33:17.422652751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:17.423894 containerd[1604]: time="2025-09-09T05:33:17.423841130Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 9 05:33:17.425370 containerd[1604]: time="2025-09-09T05:33:17.425340721Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:17.427949 containerd[1604]: time="2025-09-09T05:33:17.427899078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:17.428798 containerd[1604]: time="2025-09-09T05:33:17.428771925Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.31163776s" Sep 9 05:33:17.428843 containerd[1604]: time="2025-09-09T05:33:17.428800559Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 9 05:33:19.485385 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:33:19.485542 systemd[1]: kubelet.service: Consumed 222ms CPU time, 111.1M memory peak. Sep 9 05:33:19.487581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:33:19.510594 systemd[1]: Reload requested from client PID 2275 ('systemctl') (unit session-7.scope)... Sep 9 05:33:19.510607 systemd[1]: Reloading... Sep 9 05:33:19.591655 zram_generator::config[2320]: No configuration found. Sep 9 05:33:19.929549 systemd[1]: Reloading finished in 418 ms. Sep 9 05:33:19.987243 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 05:33:19.987337 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 05:33:19.987644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:33:19.987685 systemd[1]: kubelet.service: Consumed 149ms CPU time, 98.4M memory peak. Sep 9 05:33:19.989111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:33:20.158064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:33:20.162815 (kubelet)[2365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:33:20.200612 kubelet[2365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:33:20.200612 kubelet[2365]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:33:20.200612 kubelet[2365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:33:20.200919 kubelet[2365]: I0909 05:33:20.200673 2365 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:33:20.771342 kubelet[2365]: I0909 05:33:20.771295 2365 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 05:33:20.771342 kubelet[2365]: I0909 05:33:20.771322 2365 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:33:20.771579 kubelet[2365]: I0909 05:33:20.771556 2365 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 05:33:20.798688 kubelet[2365]: E0909 05:33:20.798640 2365 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:33:20.800321 kubelet[2365]: I0909 05:33:20.800287 2365 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:33:20.807116 kubelet[2365]: I0909 05:33:20.807083 2365 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:33:20.811984 kubelet[2365]: I0909 05:33:20.811945 2365 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 05:33:20.812175 kubelet[2365]: I0909 05:33:20.812138 2365 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:33:20.812322 kubelet[2365]: I0909 05:33:20.812161 2365 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:33:20.812804 kubelet[2365]: I0909 05:33:20.812770 2365 
topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:33:20.812804 kubelet[2365]: I0909 05:33:20.812786 2365 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 05:33:20.812945 kubelet[2365]: I0909 05:33:20.812915 2365 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:33:20.815501 kubelet[2365]: I0909 05:33:20.815468 2365 kubelet.go:446] "Attempting to sync node with API server" Sep 9 05:33:20.815501 kubelet[2365]: I0909 05:33:20.815501 2365 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:33:20.815552 kubelet[2365]: I0909 05:33:20.815527 2365 kubelet.go:352] "Adding apiserver pod source" Sep 9 05:33:20.815552 kubelet[2365]: I0909 05:33:20.815537 2365 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:33:20.820651 kubelet[2365]: W0909 05:33:20.818595 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 9 05:33:20.820651 kubelet[2365]: E0909 05:33:20.818691 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:33:20.820651 kubelet[2365]: I0909 05:33:20.818694 2365 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:33:20.820651 kubelet[2365]: W0909 05:33:20.818948 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 9 05:33:20.820651 kubelet[2365]: E0909 05:33:20.818998 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:33:20.820651 kubelet[2365]: I0909 05:33:20.819137 2365 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:33:20.820651 kubelet[2365]: W0909 05:33:20.819195 2365 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
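The reflector warnings above are the kubelet's informers failing to list Nodes and Services from https://10.0.0.89:6443 while the API server static pod is not yet running. The same list call can be reproduced with client-go; this is a sketch that assumes a kubeconfig at the conventional kubeadm path /etc/kubernetes/kubelet.conf, which is not shown in the log.

package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a REST config from the kubelet's kubeconfig (path assumed).
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }

    // Same request as the failing reflector:
    // GET /api/v1/nodes?fieldSelector=metadata.name=localhost
    nodes, err := clientset.CoreV1().Nodes().List(context.Background(),
        metav1.ListOptions{FieldSelector: "metadata.name=localhost"})
    if err != nil {
        log.Fatal(err) // "connection refused" until kube-apiserver is up
    }
    fmt.Println("nodes returned:", len(nodes.Items))
}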
Sep 9 05:33:20.820892 kubelet[2365]: I0909 05:33:20.820869 2365 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:33:20.820938 kubelet[2365]: I0909 05:33:20.820906 2365 server.go:1287] "Started kubelet" Sep 9 05:33:20.821032 kubelet[2365]: I0909 05:33:20.820999 2365 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:33:20.824071 kubelet[2365]: I0909 05:33:20.824042 2365 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:33:20.824201 kubelet[2365]: I0909 05:33:20.824180 2365 server.go:479] "Adding debug handlers to kubelet server" Sep 9 05:33:20.824340 kubelet[2365]: I0909 05:33:20.824322 2365 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:33:20.825780 kubelet[2365]: I0909 05:33:20.825708 2365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:33:20.825857 kubelet[2365]: I0909 05:33:20.825835 2365 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:33:20.826151 kubelet[2365]: E0909 05:33:20.826121 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:20.826471 kubelet[2365]: I0909 05:33:20.826454 2365 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:33:20.826704 kubelet[2365]: I0909 05:33:20.826688 2365 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:33:20.828110 kubelet[2365]: I0909 05:33:20.827227 2365 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:33:20.828110 kubelet[2365]: I0909 05:33:20.827314 2365 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:33:20.828180 kubelet[2365]: I0909 05:33:20.828160 2365 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:33:20.828492 kubelet[2365]: E0909 05:33:20.828451 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms" Sep 9 05:33:20.830032 kubelet[2365]: W0909 05:33:20.828979 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 9 05:33:20.830032 kubelet[2365]: E0909 05:33:20.829025 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:33:20.830032 kubelet[2365]: E0909 05:33:20.828539 2365 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863865a069efe14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 05:33:20.82088706 +0000 UTC m=+0.653907838,LastTimestamp:2025-09-09 05:33:20.82088706 +0000 UTC m=+0.653907838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 05:33:20.830032 kubelet[2365]: I0909 05:33:20.829663 2365 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:33:20.830405 kubelet[2365]: E0909 05:33:20.830378 2365 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:33:20.842355 kubelet[2365]: I0909 05:33:20.842317 2365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:33:20.843517 kubelet[2365]: I0909 05:33:20.843493 2365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 05:33:20.843517 kubelet[2365]: I0909 05:33:20.843510 2365 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 05:33:20.843586 kubelet[2365]: I0909 05:33:20.843528 2365 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 05:33:20.843586 kubelet[2365]: I0909 05:33:20.843536 2365 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 05:33:20.843724 kubelet[2365]: E0909 05:33:20.843575 2365 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:33:20.846092 kubelet[2365]: W0909 05:33:20.846054 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 9 05:33:20.846141 kubelet[2365]: E0909 05:33:20.846098 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:33:20.846723 kubelet[2365]: I0909 05:33:20.846696 2365 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:33:20.846777 kubelet[2365]: I0909 05:33:20.846713 2365 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:33:20.846799 kubelet[2365]: I0909 05:33:20.846780 2365 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:33:20.926906 kubelet[2365]: E0909 05:33:20.926861 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:20.944272 kubelet[2365]: E0909 05:33:20.944234 2365 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 05:33:21.027676 kubelet[2365]: E0909 05:33:21.027557 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:21.029051 kubelet[2365]: E0909 05:33:21.029019 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="400ms" Sep 9 05:33:21.128290 kubelet[2365]: E0909 05:33:21.128249 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:21.144395 kubelet[2365]: E0909 05:33:21.144349 2365 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 05:33:21.229082 kubelet[2365]: E0909 05:33:21.229048 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:21.253052 kubelet[2365]: I0909 05:33:21.253020 2365 policy_none.go:49] "None policy: Start" Sep 9 05:33:21.253052 kubelet[2365]: I0909 05:33:21.253039 2365 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:33:21.253052 kubelet[2365]: I0909 05:33:21.253052 2365 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:33:21.259539 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 05:33:21.280723 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 05:33:21.284179 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 05:33:21.302501 kubelet[2365]: I0909 05:33:21.302453 2365 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 05:33:21.302674 kubelet[2365]: I0909 05:33:21.302656 2365 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:33:21.302705 kubelet[2365]: I0909 05:33:21.302673 2365 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:33:21.302991 kubelet[2365]: I0909 05:33:21.302948 2365 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:33:21.303950 kubelet[2365]: E0909 05:33:21.303838 2365 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 05:33:21.303950 kubelet[2365]: E0909 05:33:21.303874 2365 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 05:33:21.405697 kubelet[2365]: I0909 05:33:21.405674 2365 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:33:21.406077 kubelet[2365]: E0909 05:33:21.406034 2365 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Sep 9 05:33:21.429725 kubelet[2365]: E0909 05:33:21.429682 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms" Sep 9 05:33:21.552688 systemd[1]: Created slice kubepods-burstable-pod40e05b7e1b25759d45351595e3b9201f.slice - libcontainer container kubepods-burstable-pod40e05b7e1b25759d45351595e3b9201f.slice. 
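Because the kubelet runs with the systemd cgroup driver and cgroup v2 (cgroupDriver="systemd", "CgroupVersion":2 in the container manager config above), the kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units it just created map to directories under the unified hierarchy. A small sketch that inspects those directories, assuming cgroup v2 is mounted at /sys/fs/cgroup:

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

func main() {
    slices := []string{
        "kubepods.slice",
        "kubepods.slice/kubepods-burstable.slice",
        "kubepods.slice/kubepods-besteffort.slice",
    }
    for _, s := range slices {
        dir := filepath.Join("/sys/fs/cgroup", s)
        if _, err := os.Stat(dir); err != nil {
            fmt.Printf("%s: %v\n", s, err)
            continue
        }
        // cgroup.controllers lists the controllers delegated to this slice.
        data, err := os.ReadFile(filepath.Join(dir, "cgroup.controllers"))
        if err != nil {
            fmt.Printf("%s: %v\n", s, err)
            continue
        }
        fmt.Printf("%s controllers: %s\n", s, strings.TrimSpace(string(data)))
    }
}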
Sep 9 05:33:21.579171 kubelet[2365]: E0909 05:33:21.579135 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:33:21.581998 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 9 05:33:21.583942 kubelet[2365]: E0909 05:33:21.583922 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:33:21.586442 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 9 05:33:21.587946 kubelet[2365]: E0909 05:33:21.587914 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:33:21.607623 kubelet[2365]: I0909 05:33:21.607602 2365 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:33:21.608020 kubelet[2365]: E0909 05:33:21.607974 2365 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Sep 9 05:33:21.630333 kubelet[2365]: I0909 05:33:21.630312 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40e05b7e1b25759d45351595e3b9201f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"40e05b7e1b25759d45351595e3b9201f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:33:21.630393 kubelet[2365]: I0909 05:33:21.630340 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:21.630393 kubelet[2365]: I0909 05:33:21.630361 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:21.630393 kubelet[2365]: I0909 05:33:21.630375 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40e05b7e1b25759d45351595e3b9201f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"40e05b7e1b25759d45351595e3b9201f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:33:21.630393 kubelet[2365]: I0909 05:33:21.630393 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40e05b7e1b25759d45351595e3b9201f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"40e05b7e1b25759d45351595e3b9201f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:33:21.630489 kubelet[2365]: I0909 05:33:21.630409 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:21.630489 kubelet[2365]: I0909 05:33:21.630424 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:21.630489 kubelet[2365]: I0909 05:33:21.630441 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:21.630489 kubelet[2365]: I0909 05:33:21.630456 2365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 05:33:21.879761 kubelet[2365]: E0909 05:33:21.879726 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:21.880329 containerd[1604]: time="2025-09-09T05:33:21.880287285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:40e05b7e1b25759d45351595e3b9201f,Namespace:kube-system,Attempt:0,}" Sep 9 05:33:21.884538 kubelet[2365]: E0909 05:33:21.884508 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:21.884884 containerd[1604]: time="2025-09-09T05:33:21.884841755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 9 05:33:21.889134 kubelet[2365]: E0909 05:33:21.889099 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:21.889408 containerd[1604]: time="2025-09-09T05:33:21.889375726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 9 05:33:21.919214 containerd[1604]: time="2025-09-09T05:33:21.919157767Z" level=info msg="connecting to shim 7ef2fd7ac0de94e04dcc1f0558a0c443030d55908544d924aaa56b9e30985c21" address="unix:///run/containerd/s/6b262e404b873903e49faf2c8188e3b155f6b260cc42f27c1a2f5e38436f0b7c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:33:21.919741 containerd[1604]: time="2025-09-09T05:33:21.919687941Z" level=info msg="connecting to shim 6d45d1a43fe035a37046f5e3442adaa9474ce4aa4a0378f7676c9fccf870c23c" address="unix:///run/containerd/s/bb18442f9e24a68f8ebca031683e2c971d4d83684b591f7fe481462a58beeeb0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:33:21.939645 
containerd[1604]: time="2025-09-09T05:33:21.939576302Z" level=info msg="connecting to shim 28a940841d61735fc4d636b5a85dee7518d6d92c9942dd55f2892263165f72f5" address="unix:///run/containerd/s/2c8b576afc144b0dce3061cd214e061e7fad27bd69dffc9443d2cade281cbc17" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:33:21.947845 systemd[1]: Started cri-containerd-7ef2fd7ac0de94e04dcc1f0558a0c443030d55908544d924aaa56b9e30985c21.scope - libcontainer container 7ef2fd7ac0de94e04dcc1f0558a0c443030d55908544d924aaa56b9e30985c21. Sep 9 05:33:21.951432 systemd[1]: Started cri-containerd-6d45d1a43fe035a37046f5e3442adaa9474ce4aa4a0378f7676c9fccf870c23c.scope - libcontainer container 6d45d1a43fe035a37046f5e3442adaa9474ce4aa4a0378f7676c9fccf870c23c. Sep 9 05:33:21.967774 systemd[1]: Started cri-containerd-28a940841d61735fc4d636b5a85dee7518d6d92c9942dd55f2892263165f72f5.scope - libcontainer container 28a940841d61735fc4d636b5a85dee7518d6d92c9942dd55f2892263165f72f5. Sep 9 05:33:22.007755 containerd[1604]: time="2025-09-09T05:33:22.007715934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:40e05b7e1b25759d45351595e3b9201f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ef2fd7ac0de94e04dcc1f0558a0c443030d55908544d924aaa56b9e30985c21\"" Sep 9 05:33:22.009681 kubelet[2365]: I0909 05:33:22.009579 2365 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:33:22.009918 kubelet[2365]: E0909 05:33:22.009880 2365 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Sep 9 05:33:22.010233 kubelet[2365]: E0909 05:33:22.010202 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:22.012383 containerd[1604]: time="2025-09-09T05:33:22.012344973Z" level=info msg="CreateContainer within sandbox \"7ef2fd7ac0de94e04dcc1f0558a0c443030d55908544d924aaa56b9e30985c21\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 05:33:22.019531 containerd[1604]: time="2025-09-09T05:33:22.019277021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"28a940841d61735fc4d636b5a85dee7518d6d92c9942dd55f2892263165f72f5\"" Sep 9 05:33:22.021354 kubelet[2365]: E0909 05:33:22.021330 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:22.024522 containerd[1604]: time="2025-09-09T05:33:22.024476951Z" level=info msg="CreateContainer within sandbox \"28a940841d61735fc4d636b5a85dee7518d6d92c9942dd55f2892263165f72f5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 05:33:22.026374 containerd[1604]: time="2025-09-09T05:33:22.026349261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d45d1a43fe035a37046f5e3442adaa9474ce4aa4a0378f7676c9fccf870c23c\"" Sep 9 05:33:22.026965 kubelet[2365]: E0909 05:33:22.026920 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
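The RunPodSandbox entries above are the kubelet driving containerd over the CRI gRPC API, one sandbox per static pod manifest in /etc/kubernetes/manifests. Below is a hedged sketch of that call using the published CRI stubs (k8s.io/cri-api) against the default containerd socket; the pod metadata values are copied from the log, and a real kubelet request would also carry log directory, DNS, port mapping and Linux security settings that are omitted here.

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // Dial the CRI endpoint the kubelet was given via --container-runtime-endpoint.
    conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    rt := runtimeapi.NewRuntimeServiceClient(conn)
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    // Minimal equivalent of the "RunPodSandbox for &PodSandboxMetadata{...}" entry.
    resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
        Config: &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-apiserver-localhost",
                Uid:       "40e05b7e1b25759d45351595e3b9201f",
                Namespace: "kube-system",
                Attempt:   0,
            },
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("sandbox id:", resp.PodSandboxId)
}

The sandbox id printed here corresponds to the long hex id that the later CreateContainer and StartContainer entries refer back to.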
Sep 9 05:33:22.028566 containerd[1604]: time="2025-09-09T05:33:22.028547603Z" level=info msg="CreateContainer within sandbox \"6d45d1a43fe035a37046f5e3442adaa9474ce4aa4a0378f7676c9fccf870c23c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 05:33:22.031309 containerd[1604]: time="2025-09-09T05:33:22.031273184Z" level=info msg="Container bcd23e29dac1513d2384e6c3a6cd9b46c069325159560a6b9e9a95fd0c667e31: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:33:22.041036 containerd[1604]: time="2025-09-09T05:33:22.040997527Z" level=info msg="Container 4f7625569e7bde27acf5559906e1df3f2d8e315af056ef3aca945019410903a8: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:33:22.042992 containerd[1604]: time="2025-09-09T05:33:22.042947703Z" level=info msg="Container f7a78da3cbc8dabf6a043c0301f21ddfabc022370705ec3455905212538f678a: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:33:22.049604 containerd[1604]: time="2025-09-09T05:33:22.049564951Z" level=info msg="CreateContainer within sandbox \"7ef2fd7ac0de94e04dcc1f0558a0c443030d55908544d924aaa56b9e30985c21\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bcd23e29dac1513d2384e6c3a6cd9b46c069325159560a6b9e9a95fd0c667e31\"" Sep 9 05:33:22.050045 containerd[1604]: time="2025-09-09T05:33:22.050025945Z" level=info msg="StartContainer for \"bcd23e29dac1513d2384e6c3a6cd9b46c069325159560a6b9e9a95fd0c667e31\"" Sep 9 05:33:22.051387 containerd[1604]: time="2025-09-09T05:33:22.051325942Z" level=info msg="connecting to shim bcd23e29dac1513d2384e6c3a6cd9b46c069325159560a6b9e9a95fd0c667e31" address="unix:///run/containerd/s/6b262e404b873903e49faf2c8188e3b155f6b260cc42f27c1a2f5e38436f0b7c" protocol=ttrpc version=3 Sep 9 05:33:22.053948 containerd[1604]: time="2025-09-09T05:33:22.053886203Z" level=info msg="CreateContainer within sandbox \"28a940841d61735fc4d636b5a85dee7518d6d92c9942dd55f2892263165f72f5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4f7625569e7bde27acf5559906e1df3f2d8e315af056ef3aca945019410903a8\"" Sep 9 05:33:22.054311 containerd[1604]: time="2025-09-09T05:33:22.054280823Z" level=info msg="StartContainer for \"4f7625569e7bde27acf5559906e1df3f2d8e315af056ef3aca945019410903a8\"" Sep 9 05:33:22.055151 containerd[1604]: time="2025-09-09T05:33:22.055126869Z" level=info msg="connecting to shim 4f7625569e7bde27acf5559906e1df3f2d8e315af056ef3aca945019410903a8" address="unix:///run/containerd/s/2c8b576afc144b0dce3061cd214e061e7fad27bd69dffc9443d2cade281cbc17" protocol=ttrpc version=3 Sep 9 05:33:22.055354 containerd[1604]: time="2025-09-09T05:33:22.055324360Z" level=info msg="CreateContainer within sandbox \"6d45d1a43fe035a37046f5e3442adaa9474ce4aa4a0378f7676c9fccf870c23c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f7a78da3cbc8dabf6a043c0301f21ddfabc022370705ec3455905212538f678a\"" Sep 9 05:33:22.055886 containerd[1604]: time="2025-09-09T05:33:22.055855245Z" level=info msg="StartContainer for \"f7a78da3cbc8dabf6a043c0301f21ddfabc022370705ec3455905212538f678a\"" Sep 9 05:33:22.057011 containerd[1604]: time="2025-09-09T05:33:22.056986115Z" level=info msg="connecting to shim f7a78da3cbc8dabf6a043c0301f21ddfabc022370705ec3455905212538f678a" address="unix:///run/containerd/s/bb18442f9e24a68f8ebca031683e2c971d4d83684b591f7fe481462a58beeeb0" protocol=ttrpc version=3 Sep 9 05:33:22.075762 systemd[1]: Started cri-containerd-bcd23e29dac1513d2384e6c3a6cd9b46c069325159560a6b9e9a95fd0c667e31.scope - libcontainer container 
bcd23e29dac1513d2384e6c3a6cd9b46c069325159560a6b9e9a95fd0c667e31. Sep 9 05:33:22.080161 systemd[1]: Started cri-containerd-4f7625569e7bde27acf5559906e1df3f2d8e315af056ef3aca945019410903a8.scope - libcontainer container 4f7625569e7bde27acf5559906e1df3f2d8e315af056ef3aca945019410903a8. Sep 9 05:33:22.082008 systemd[1]: Started cri-containerd-f7a78da3cbc8dabf6a043c0301f21ddfabc022370705ec3455905212538f678a.scope - libcontainer container f7a78da3cbc8dabf6a043c0301f21ddfabc022370705ec3455905212538f678a. Sep 9 05:33:22.134839 containerd[1604]: time="2025-09-09T05:33:22.134434541Z" level=info msg="StartContainer for \"bcd23e29dac1513d2384e6c3a6cd9b46c069325159560a6b9e9a95fd0c667e31\" returns successfully" Sep 9 05:33:22.134839 containerd[1604]: time="2025-09-09T05:33:22.134485306Z" level=info msg="StartContainer for \"4f7625569e7bde27acf5559906e1df3f2d8e315af056ef3aca945019410903a8\" returns successfully" Sep 9 05:33:22.138517 kubelet[2365]: W0909 05:33:22.138441 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 9 05:33:22.138689 kubelet[2365]: E0909 05:33:22.138551 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:33:22.141544 containerd[1604]: time="2025-09-09T05:33:22.141481214Z" level=info msg="StartContainer for \"f7a78da3cbc8dabf6a043c0301f21ddfabc022370705ec3455905212538f678a\" returns successfully" Sep 9 05:33:22.153675 kubelet[2365]: W0909 05:33:22.153612 2365 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Sep 9 05:33:22.153794 kubelet[2365]: E0909 05:33:22.153769 2365 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:33:22.811660 kubelet[2365]: I0909 05:33:22.811411 2365 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:33:22.854026 kubelet[2365]: E0909 05:33:22.853988 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:33:22.854174 kubelet[2365]: E0909 05:33:22.854102 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:22.859646 kubelet[2365]: E0909 05:33:22.859446 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:33:22.859646 kubelet[2365]: E0909 05:33:22.859590 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:22.860503 kubelet[2365]: E0909 05:33:22.860478 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:33:22.860619 kubelet[2365]: E0909 05:33:22.860599 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:22.970122 kubelet[2365]: E0909 05:33:22.970076 2365 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 05:33:23.054257 kubelet[2365]: I0909 05:33:23.054033 2365 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 05:33:23.054257 kubelet[2365]: E0909 05:33:23.054077 2365 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 05:33:23.066059 kubelet[2365]: E0909 05:33:23.064475 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:23.165590 kubelet[2365]: E0909 05:33:23.165524 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:23.266215 kubelet[2365]: E0909 05:33:23.266143 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:23.366309 kubelet[2365]: E0909 05:33:23.366264 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:23.467063 kubelet[2365]: E0909 05:33:23.466998 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:23.567654 kubelet[2365]: E0909 05:33:23.567587 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:23.668326 kubelet[2365]: E0909 05:33:23.668205 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:23.768941 kubelet[2365]: E0909 05:33:23.768895 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:23.862529 kubelet[2365]: E0909 05:33:23.862499 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:33:23.862913 kubelet[2365]: E0909 05:33:23.862618 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:23.862913 kubelet[2365]: E0909 05:33:23.862646 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:33:23.862913 kubelet[2365]: E0909 05:33:23.862761 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:23.869762 kubelet[2365]: E0909 05:33:23.869733 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:23.970231 kubelet[2365]: E0909 05:33:23.970117 2365 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:24.070833 kubelet[2365]: E0909 05:33:24.070784 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:24.171337 kubelet[2365]: E0909 05:33:24.171287 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:24.272252 kubelet[2365]: E0909 05:33:24.272122 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:24.372905 kubelet[2365]: E0909 05:33:24.372873 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:24.473430 kubelet[2365]: E0909 05:33:24.473398 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:24.574338 kubelet[2365]: E0909 05:33:24.574246 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:24.675092 kubelet[2365]: E0909 05:33:24.675059 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:24.775703 kubelet[2365]: E0909 05:33:24.775650 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:24.864656 kubelet[2365]: E0909 05:33:24.864600 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:33:24.865157 kubelet[2365]: E0909 05:33:24.864747 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:24.876736 kubelet[2365]: E0909 05:33:24.876692 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:24.941674 kubelet[2365]: E0909 05:33:24.941619 2365 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:33:24.941830 kubelet[2365]: E0909 05:33:24.941760 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:24.977015 kubelet[2365]: E0909 05:33:24.976944 2365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:33:25.026711 kubelet[2365]: I0909 05:33:25.026647 2365 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:33:25.033235 kubelet[2365]: I0909 05:33:25.033197 2365 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:33:25.038126 kubelet[2365]: I0909 05:33:25.037889 2365 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:25.099380 systemd[1]: Reload requested from client PID 2642 ('systemctl') (unit session-7.scope)... Sep 9 05:33:25.099395 systemd[1]: Reloading... Sep 9 05:33:25.166744 zram_generator::config[2688]: No configuration found. 
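The repeating "Nameserver limits exceeded" warnings mean the node's /etc/resolv.conf lists more nameservers than the kubelet will pass through to pods; the log shows it keeping only 1.1.1.1, 1.0.0.1 and 8.8.8.8. A quick sketch that reproduces the check, with the limit of three taken from the applied nameserver line above:

package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "strings"
)

func main() {
    f, err := os.Open("/etc/resolv.conf")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    var servers []string
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) >= 2 && fields[0] == "nameserver" {
            servers = append(servers, fields[1])
        }
    }
    if err := sc.Err(); err != nil {
        log.Fatal(err)
    }

    const limit = 3 // matches the three nameservers the kubelet reports applying
    if len(servers) > limit {
        fmt.Printf("%d nameservers configured, only %v would be applied\n", len(servers), servers[:limit])
    } else {
        fmt.Println("nameservers:", servers)
    }
}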
Sep 9 05:33:25.821759 kubelet[2365]: I0909 05:33:25.821718 2365 apiserver.go:52] "Watching apiserver" Sep 9 05:33:25.823368 kubelet[2365]: E0909 05:33:25.823346 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:25.827750 kubelet[2365]: I0909 05:33:25.827720 2365 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:33:25.865299 kubelet[2365]: E0909 05:33:25.865241 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:25.865299 kubelet[2365]: E0909 05:33:25.865284 2365 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:25.866982 systemd[1]: Reloading finished in 767 ms. Sep 9 05:33:25.894464 kubelet[2365]: I0909 05:33:25.894430 2365 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:33:25.894603 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:33:25.916784 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 05:33:25.917102 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:33:25.917148 systemd[1]: kubelet.service: Consumed 1.076s CPU time, 132.1M memory peak. Sep 9 05:33:25.918843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:33:26.110938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:33:26.115435 (kubelet)[2730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:33:26.153040 kubelet[2730]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:33:26.153040 kubelet[2730]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:33:26.153040 kubelet[2730]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:33:26.153483 kubelet[2730]: I0909 05:33:26.153129 2730 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:33:26.161319 kubelet[2730]: I0909 05:33:26.161281 2730 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 05:33:26.161319 kubelet[2730]: I0909 05:33:26.161315 2730 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:33:26.163232 kubelet[2730]: I0909 05:33:26.163201 2730 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 05:33:26.164895 kubelet[2730]: I0909 05:33:26.164879 2730 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
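With client rotation on, the restarted kubelet loads its API credentials from /var/lib/kubelet/pki/kubelet-client-current.pem, a symlink to the most recently issued client certificate/key pair. A small sketch that reads that file and prints each certificate's subject and expiry, the value rotation is tracking:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "log"
    "os"
)

func main() {
    // The kubelet log above loads this symlink on startup.
    data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
    if err != nil {
        log.Fatal(err)
    }
    for {
        var block *pem.Block
        block, data = pem.Decode(data)
        if block == nil {
            break
        }
        if block.Type != "CERTIFICATE" {
            continue // the file also carries the private key; skip it
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
    }
}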
Sep 9 05:33:26.166871 kubelet[2730]: I0909 05:33:26.166853 2730 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:33:26.171258 kubelet[2730]: I0909 05:33:26.170552 2730 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:33:26.175063 kubelet[2730]: I0909 05:33:26.175041 2730 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 05:33:26.175263 kubelet[2730]: I0909 05:33:26.175228 2730 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:33:26.175417 kubelet[2730]: I0909 05:33:26.175259 2730 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:33:26.175502 kubelet[2730]: I0909 05:33:26.175424 2730 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:33:26.175502 kubelet[2730]: I0909 05:33:26.175431 2730 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 05:33:26.175502 kubelet[2730]: I0909 05:33:26.175477 2730 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:33:26.175619 kubelet[2730]: I0909 05:33:26.175604 2730 kubelet.go:446] "Attempting to sync node with API server" Sep 9 05:33:26.175685 kubelet[2730]: I0909 05:33:26.175654 2730 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:33:26.175685 kubelet[2730]: I0909 05:33:26.175681 2730 kubelet.go:352] "Adding apiserver pod source" Sep 9 05:33:26.175727 kubelet[2730]: I0909 05:33:26.175691 2730 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:33:26.177532 kubelet[2730]: I0909 05:33:26.177499 2730 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:33:26.177908 sudo[2746]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 
05:33:26.178495 sudo[2746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 05:33:26.178754 kubelet[2730]: I0909 05:33:26.178728 2730 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:33:26.179547 kubelet[2730]: I0909 05:33:26.179206 2730 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:33:26.179547 kubelet[2730]: I0909 05:33:26.179236 2730 server.go:1287] "Started kubelet" Sep 9 05:33:26.179909 kubelet[2730]: I0909 05:33:26.179870 2730 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:33:26.181417 kubelet[2730]: I0909 05:33:26.180928 2730 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:33:26.181588 kubelet[2730]: I0909 05:33:26.181554 2730 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:33:26.185127 kubelet[2730]: I0909 05:33:26.185099 2730 server.go:479] "Adding debug handlers to kubelet server" Sep 9 05:33:26.185317 kubelet[2730]: I0909 05:33:26.185295 2730 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:33:26.191811 kubelet[2730]: I0909 05:33:26.191747 2730 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:33:26.196098 kubelet[2730]: I0909 05:33:26.195991 2730 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:33:26.196098 kubelet[2730]: I0909 05:33:26.196095 2730 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:33:26.196393 kubelet[2730]: I0909 05:33:26.196206 2730 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:33:26.196766 kubelet[2730]: I0909 05:33:26.196700 2730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:33:26.197566 kubelet[2730]: I0909 05:33:26.197393 2730 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:33:26.197566 kubelet[2730]: I0909 05:33:26.197482 2730 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:33:26.198072 kubelet[2730]: I0909 05:33:26.197947 2730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 05:33:26.198072 kubelet[2730]: I0909 05:33:26.197968 2730 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 05:33:26.198072 kubelet[2730]: I0909 05:33:26.198018 2730 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 05:33:26.198072 kubelet[2730]: I0909 05:33:26.198026 2730 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 05:33:26.198170 kubelet[2730]: E0909 05:33:26.198069 2730 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:33:26.198864 kubelet[2730]: I0909 05:33:26.198812 2730 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:33:26.204010 kubelet[2730]: E0909 05:33:26.203870 2730 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:33:26.230133 kubelet[2730]: I0909 05:33:26.230110 2730 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:33:26.230647 kubelet[2730]: I0909 05:33:26.230262 2730 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:33:26.230647 kubelet[2730]: I0909 05:33:26.230282 2730 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:33:26.230647 kubelet[2730]: I0909 05:33:26.230413 2730 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 05:33:26.230647 kubelet[2730]: I0909 05:33:26.230421 2730 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 05:33:26.230647 kubelet[2730]: I0909 05:33:26.230437 2730 policy_none.go:49] "None policy: Start" Sep 9 05:33:26.230647 kubelet[2730]: I0909 05:33:26.230445 2730 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:33:26.230647 kubelet[2730]: I0909 05:33:26.230455 2730 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:33:26.230647 kubelet[2730]: I0909 05:33:26.230537 2730 state_mem.go:75] "Updated machine memory state" Sep 9 05:33:26.234357 kubelet[2730]: I0909 05:33:26.234334 2730 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 05:33:26.234513 kubelet[2730]: I0909 05:33:26.234500 2730 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:33:26.234540 kubelet[2730]: I0909 05:33:26.234515 2730 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:33:26.234950 kubelet[2730]: I0909 05:33:26.234915 2730 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:33:26.238472 kubelet[2730]: E0909 05:33:26.235609 2730 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 05:33:26.298582 kubelet[2730]: I0909 05:33:26.298539 2730 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:33:26.298903 kubelet[2730]: I0909 05:33:26.298879 2730 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:33:26.299087 kubelet[2730]: I0909 05:33:26.299067 2730 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:26.305680 kubelet[2730]: E0909 05:33:26.305586 2730 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 05:33:26.305739 kubelet[2730]: E0909 05:33:26.305687 2730 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:26.306273 kubelet[2730]: E0909 05:33:26.306255 2730 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 05:33:26.341862 kubelet[2730]: I0909 05:33:26.341655 2730 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:33:26.347052 kubelet[2730]: I0909 05:33:26.346983 2730 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 05:33:26.347052 kubelet[2730]: I0909 05:33:26.347052 2730 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 05:33:26.498473 kubelet[2730]: I0909 05:33:26.498351 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:26.498473 kubelet[2730]: I0909 05:33:26.498395 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:26.498473 kubelet[2730]: I0909 05:33:26.498439 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 05:33:26.498473 kubelet[2730]: I0909 05:33:26.498458 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40e05b7e1b25759d45351595e3b9201f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"40e05b7e1b25759d45351595e3b9201f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:33:26.498672 kubelet[2730]: I0909 05:33:26.498494 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40e05b7e1b25759d45351595e3b9201f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"40e05b7e1b25759d45351595e3b9201f\") " 
pod="kube-system/kube-apiserver-localhost" Sep 9 05:33:26.498672 kubelet[2730]: I0909 05:33:26.498508 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:26.498672 kubelet[2730]: I0909 05:33:26.498521 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40e05b7e1b25759d45351595e3b9201f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"40e05b7e1b25759d45351595e3b9201f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:33:26.498672 kubelet[2730]: I0909 05:33:26.498539 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:26.498672 kubelet[2730]: I0909 05:33:26.498568 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:26.606295 kubelet[2730]: E0909 05:33:26.606257 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:26.607261 kubelet[2730]: E0909 05:33:26.606533 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:26.607261 kubelet[2730]: E0909 05:33:26.606661 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:26.627574 sudo[2746]: pam_unix(sudo:session): session closed for user root Sep 9 05:33:27.176131 kubelet[2730]: I0909 05:33:27.175857 2730 apiserver.go:52] "Watching apiserver" Sep 9 05:33:27.196241 kubelet[2730]: I0909 05:33:27.196211 2730 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:33:27.214753 kubelet[2730]: I0909 05:33:27.214714 2730 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:27.214896 kubelet[2730]: E0909 05:33:27.214789 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:27.215128 kubelet[2730]: E0909 05:33:27.215106 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:27.334504 kubelet[2730]: E0909 05:33:27.334460 2730 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" 
pod="kube-system/kube-controller-manager-localhost" Sep 9 05:33:27.334992 kubelet[2730]: E0909 05:33:27.334821 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:27.388578 kubelet[2730]: I0909 05:33:27.388503 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.388487279 podStartE2EDuration="2.388487279s" podCreationTimestamp="2025-09-09 05:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:33:27.335212128 +0000 UTC m=+1.216009569" watchObservedRunningTime="2025-09-09 05:33:27.388487279 +0000 UTC m=+1.269284720" Sep 9 05:33:27.394323 kubelet[2730]: I0909 05:33:27.394286 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.394277415 podStartE2EDuration="2.394277415s" podCreationTimestamp="2025-09-09 05:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:33:27.388623043 +0000 UTC m=+1.269420484" watchObservedRunningTime="2025-09-09 05:33:27.394277415 +0000 UTC m=+1.275074856" Sep 9 05:33:27.394393 kubelet[2730]: I0909 05:33:27.394338 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.394334262 podStartE2EDuration="2.394334262s" podCreationTimestamp="2025-09-09 05:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:33:27.394147832 +0000 UTC m=+1.274945273" watchObservedRunningTime="2025-09-09 05:33:27.394334262 +0000 UTC m=+1.275131693" Sep 9 05:33:27.894683 sudo[1811]: pam_unix(sudo:session): session closed for user root Sep 9 05:33:27.896077 sshd[1810]: Connection closed by 10.0.0.1 port 45062 Sep 9 05:33:27.896468 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Sep 9 05:33:27.900939 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:45062.service: Deactivated successfully. Sep 9 05:33:27.903125 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 05:33:27.903343 systemd[1]: session-7.scope: Consumed 3.891s CPU time, 262.8M memory peak. Sep 9 05:33:27.904676 systemd-logind[1585]: Session 7 logged out. Waiting for processes to exit. Sep 9 05:33:27.905794 systemd-logind[1585]: Removed session 7. 
Sep 9 05:33:28.216376 kubelet[2730]: E0909 05:33:28.216252 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:28.216966 kubelet[2730]: E0909 05:33:28.216395 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:28.216966 kubelet[2730]: E0909 05:33:28.216594 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:30.743284 kubelet[2730]: I0909 05:33:30.743237 2730 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 05:33:30.743787 kubelet[2730]: I0909 05:33:30.743702 2730 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 05:33:30.743828 containerd[1604]: time="2025-09-09T05:33:30.743503714Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 05:33:31.667120 systemd[1]: Created slice kubepods-besteffort-pod06a0d814_4573_4e5a_9b31_8dff9fa7abf9.slice - libcontainer container kubepods-besteffort-pod06a0d814_4573_4e5a_9b31_8dff9fa7abf9.slice. Sep 9 05:33:31.679983 systemd[1]: Created slice kubepods-burstable-podf972031e_7481_41c7_8d11_a03cd44bc65d.slice - libcontainer container kubepods-burstable-podf972031e_7481_41c7_8d11_a03cd44bc65d.slice. Sep 9 05:33:31.735022 kubelet[2730]: I0909 05:33:31.734975 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-hostproc\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735022 kubelet[2730]: I0909 05:33:31.735011 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-cni-path\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735022 kubelet[2730]: I0909 05:33:31.735029 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-lib-modules\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735202 kubelet[2730]: I0909 05:33:31.735044 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-host-proc-sys-kernel\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735202 kubelet[2730]: I0909 05:33:31.735059 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-host-proc-sys-net\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735202 kubelet[2730]: I0909 05:33:31.735072 2730 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06a0d814-4573-4e5a-9b31-8dff9fa7abf9-xtables-lock\") pod \"kube-proxy-jgg6d\" (UID: \"06a0d814-4573-4e5a-9b31-8dff9fa7abf9\") " pod="kube-system/kube-proxy-jgg6d" Sep 9 05:33:31.735202 kubelet[2730]: I0909 05:33:31.735087 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06a0d814-4573-4e5a-9b31-8dff9fa7abf9-lib-modules\") pod \"kube-proxy-jgg6d\" (UID: \"06a0d814-4573-4e5a-9b31-8dff9fa7abf9\") " pod="kube-system/kube-proxy-jgg6d" Sep 9 05:33:31.735202 kubelet[2730]: I0909 05:33:31.735129 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp9bl\" (UniqueName: \"kubernetes.io/projected/06a0d814-4573-4e5a-9b31-8dff9fa7abf9-kube-api-access-wp9bl\") pod \"kube-proxy-jgg6d\" (UID: \"06a0d814-4573-4e5a-9b31-8dff9fa7abf9\") " pod="kube-system/kube-proxy-jgg6d" Sep 9 05:33:31.735322 kubelet[2730]: I0909 05:33:31.735148 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-etc-cni-netd\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735322 kubelet[2730]: I0909 05:33:31.735162 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-xtables-lock\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735322 kubelet[2730]: I0909 05:33:31.735175 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f972031e-7481-41c7-8d11-a03cd44bc65d-clustermesh-secrets\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735322 kubelet[2730]: I0909 05:33:31.735189 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-cilium-cgroup\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735322 kubelet[2730]: I0909 05:33:31.735203 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f972031e-7481-41c7-8d11-a03cd44bc65d-cilium-config-path\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735425 kubelet[2730]: I0909 05:33:31.735218 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skp6n\" (UniqueName: \"kubernetes.io/projected/f972031e-7481-41c7-8d11-a03cd44bc65d-kube-api-access-skp6n\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735425 kubelet[2730]: I0909 05:33:31.735232 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-cilium-run\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735425 kubelet[2730]: I0909 05:33:31.735247 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-bpf-maps\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.735425 kubelet[2730]: I0909 05:33:31.735262 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/06a0d814-4573-4e5a-9b31-8dff9fa7abf9-kube-proxy\") pod \"kube-proxy-jgg6d\" (UID: \"06a0d814-4573-4e5a-9b31-8dff9fa7abf9\") " pod="kube-system/kube-proxy-jgg6d" Sep 9 05:33:31.735425 kubelet[2730]: I0909 05:33:31.735278 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f972031e-7481-41c7-8d11-a03cd44bc65d-hubble-tls\") pod \"cilium-4lmm6\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " pod="kube-system/cilium-4lmm6" Sep 9 05:33:31.772045 systemd[1]: Created slice kubepods-besteffort-podd28da9bf_6dfb_48af_92a7_8e2058964ced.slice - libcontainer container kubepods-besteffort-podd28da9bf_6dfb_48af_92a7_8e2058964ced.slice. Sep 9 05:33:31.836329 kubelet[2730]: I0909 05:33:31.836295 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d28da9bf-6dfb-48af-92a7-8e2058964ced-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lsprc\" (UID: \"d28da9bf-6dfb-48af-92a7-8e2058964ced\") " pod="kube-system/cilium-operator-6c4d7847fc-lsprc" Sep 9 05:33:31.837837 kubelet[2730]: I0909 05:33:31.837806 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzhz\" (UniqueName: \"kubernetes.io/projected/d28da9bf-6dfb-48af-92a7-8e2058964ced-kube-api-access-lvzhz\") pod \"cilium-operator-6c4d7847fc-lsprc\" (UID: \"d28da9bf-6dfb-48af-92a7-8e2058964ced\") " pod="kube-system/cilium-operator-6c4d7847fc-lsprc" Sep 9 05:33:31.977313 kubelet[2730]: E0909 05:33:31.976928 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:31.977785 containerd[1604]: time="2025-09-09T05:33:31.977745957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jgg6d,Uid:06a0d814-4573-4e5a-9b31-8dff9fa7abf9,Namespace:kube-system,Attempt:0,}" Sep 9 05:33:31.983319 kubelet[2730]: E0909 05:33:31.983297 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:31.983750 containerd[1604]: time="2025-09-09T05:33:31.983702967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4lmm6,Uid:f972031e-7481-41c7-8d11-a03cd44bc65d,Namespace:kube-system,Attempt:0,}" Sep 9 05:33:32.076876 kubelet[2730]: E0909 05:33:32.076841 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:32.077355 containerd[1604]: 
time="2025-09-09T05:33:32.077309549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lsprc,Uid:d28da9bf-6dfb-48af-92a7-8e2058964ced,Namespace:kube-system,Attempt:0,}" Sep 9 05:33:32.171322 containerd[1604]: time="2025-09-09T05:33:32.171282510Z" level=info msg="connecting to shim 1c785ea2c32e29c520fd666d1ee39f6c3baad0625d221406813ea17b38a23432" address="unix:///run/containerd/s/4abd701d1b92c88cc5d7397cd571040dcebdeb12c8cb27a5a65c142774c40305" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:33:32.179719 containerd[1604]: time="2025-09-09T05:33:32.179677680Z" level=info msg="connecting to shim 64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a" address="unix:///run/containerd/s/67a4caa0648517c79e15176492a12af3655f9c189379dd81d92fdb0dfeb74002" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:33:32.182651 containerd[1604]: time="2025-09-09T05:33:32.182573864Z" level=info msg="connecting to shim a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71" address="unix:///run/containerd/s/217d24148d2ed5ad5546306d5dd7dffdbee71a38ca6a095af4d0ab61284c4a03" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:33:32.207834 systemd[1]: Started cri-containerd-1c785ea2c32e29c520fd666d1ee39f6c3baad0625d221406813ea17b38a23432.scope - libcontainer container 1c785ea2c32e29c520fd666d1ee39f6c3baad0625d221406813ea17b38a23432. Sep 9 05:33:32.212080 systemd[1]: Started cri-containerd-64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a.scope - libcontainer container 64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a. Sep 9 05:33:32.213984 systemd[1]: Started cri-containerd-a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71.scope - libcontainer container a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71. 
Sep 9 05:33:32.244709 containerd[1604]: time="2025-09-09T05:33:32.244513999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jgg6d,Uid:06a0d814-4573-4e5a-9b31-8dff9fa7abf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c785ea2c32e29c520fd666d1ee39f6c3baad0625d221406813ea17b38a23432\"" Sep 9 05:33:32.246059 kubelet[2730]: E0909 05:33:32.246031 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:32.248556 containerd[1604]: time="2025-09-09T05:33:32.248523973Z" level=info msg="CreateContainer within sandbox \"1c785ea2c32e29c520fd666d1ee39f6c3baad0625d221406813ea17b38a23432\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 05:33:32.252818 containerd[1604]: time="2025-09-09T05:33:32.252782485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4lmm6,Uid:f972031e-7481-41c7-8d11-a03cd44bc65d,Namespace:kube-system,Attempt:0,} returns sandbox id \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\"" Sep 9 05:33:32.253315 kubelet[2730]: E0909 05:33:32.253284 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:32.253946 containerd[1604]: time="2025-09-09T05:33:32.253916173Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 05:33:32.269670 containerd[1604]: time="2025-09-09T05:33:32.269226809Z" level=info msg="Container f26e62ebbbb66096e38c9be4d1473852d3a95c179437903f6b723a1536df6441: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:33:32.274047 containerd[1604]: time="2025-09-09T05:33:32.274007876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lsprc,Uid:d28da9bf-6dfb-48af-92a7-8e2058964ced,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71\"" Sep 9 05:33:32.274638 kubelet[2730]: E0909 05:33:32.274604 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:32.280360 containerd[1604]: time="2025-09-09T05:33:32.280320293Z" level=info msg="CreateContainer within sandbox \"1c785ea2c32e29c520fd666d1ee39f6c3baad0625d221406813ea17b38a23432\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f26e62ebbbb66096e38c9be4d1473852d3a95c179437903f6b723a1536df6441\"" Sep 9 05:33:32.282687 containerd[1604]: time="2025-09-09T05:33:32.280733437Z" level=info msg="StartContainer for \"f26e62ebbbb66096e38c9be4d1473852d3a95c179437903f6b723a1536df6441\"" Sep 9 05:33:32.282687 containerd[1604]: time="2025-09-09T05:33:32.281980212Z" level=info msg="connecting to shim f26e62ebbbb66096e38c9be4d1473852d3a95c179437903f6b723a1536df6441" address="unix:///run/containerd/s/4abd701d1b92c88cc5d7397cd571040dcebdeb12c8cb27a5a65c142774c40305" protocol=ttrpc version=3 Sep 9 05:33:32.311783 systemd[1]: Started cri-containerd-f26e62ebbbb66096e38c9be4d1473852d3a95c179437903f6b723a1536df6441.scope - libcontainer container f26e62ebbbb66096e38c9be4d1473852d3a95c179437903f6b723a1536df6441. 
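Editor's note: the containerd entries are logfmt-style (`time="…" level=info msg="…"` plus extra key/value pairs), which makes dumps like this one easier to read once the fields are split out. A small regexp-based extractor, offered purely as a reading aid with a sample line adapted from the log:

```go
package main

import (
	"fmt"
	"regexp"
)

// kvRe matches key="quoted value" or key=bare-value pairs in a logfmt-ish line.
var kvRe = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

func main() {
	line := `time="2025-09-09T05:33:32.248523973Z" level=info msg="CreateContainer within sandbox ..." namespace=k8s.io`
	for _, m := range kvRe.FindAllStringSubmatch(line, -1) {
		fmt.Printf("%-10s %s\n", m[1], m[2])
	}
}
```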
Sep 9 05:33:32.359041 containerd[1604]: time="2025-09-09T05:33:32.358994680Z" level=info msg="StartContainer for \"f26e62ebbbb66096e38c9be4d1473852d3a95c179437903f6b723a1536df6441\" returns successfully" Sep 9 05:33:33.225535 kubelet[2730]: E0909 05:33:33.225505 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:33.233126 kubelet[2730]: I0909 05:33:33.233066 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jgg6d" podStartSLOduration=2.233053845 podStartE2EDuration="2.233053845s" podCreationTimestamp="2025-09-09 05:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:33:33.232933144 +0000 UTC m=+7.113730575" watchObservedRunningTime="2025-09-09 05:33:33.233053845 +0000 UTC m=+7.113851286" Sep 9 05:33:35.628156 kubelet[2730]: E0909 05:33:35.628126 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:36.230016 kubelet[2730]: E0909 05:33:36.229982 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:36.327448 kubelet[2730]: E0909 05:33:36.327410 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:36.637486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966647631.mount: Deactivated successfully. 
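Editor's note: the `pod_startup_latency_tracker` entries report `podStartSLOduration` as roughly the gap between `podCreationTimestamp` and `observedRunningTime` (image pulling, which would be excluded, is zero here since both pull timestamps are the zero value `0001-01-01 …`). A sketch that reproduces the arithmetic from the kube-proxy entry above; the timestamp layout and the stripping of Go's ` m=+…` monotonic suffix are assumptions about how these strings were produced:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST"

// parse drops the " m=+…" monotonic-clock suffix that Go's time.String() appends.
func parse(s string) time.Time {
	if i := strings.Index(s, " m="); i >= 0 {
		s = s[:i]
	}
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2025-09-09 05:33:31 +0000 UTC")
	running := parse("2025-09-09 05:33:33.232933144 +0000 UTC m=+7.113730575")
	// ≈ 2.23s, close to the logged podStartSLOduration=2.233053845
	fmt.Println("pod start duration:", running.Sub(created))
}
```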
Sep 9 05:33:36.922560 kubelet[2730]: E0909 05:33:36.922446 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:37.232475 kubelet[2730]: E0909 05:33:37.232349 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:37.233094 kubelet[2730]: E0909 05:33:37.233047 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:38.234566 kubelet[2730]: E0909 05:33:38.234539 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:42.437962 containerd[1604]: time="2025-09-09T05:33:42.437909613Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:42.438842 containerd[1604]: time="2025-09-09T05:33:42.438816014Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 05:33:42.440164 containerd[1604]: time="2025-09-09T05:33:42.440105083Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:42.441585 containerd[1604]: time="2025-09-09T05:33:42.441560828Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.187614106s" Sep 9 05:33:42.441641 containerd[1604]: time="2025-09-09T05:33:42.441586607Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 05:33:42.442901 containerd[1604]: time="2025-09-09T05:33:42.442874092Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 05:33:42.444397 containerd[1604]: time="2025-09-09T05:33:42.443828765Z" level=info msg="CreateContainer within sandbox \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:33:42.453683 containerd[1604]: time="2025-09-09T05:33:42.453644508Z" level=info msg="Container 42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:33:42.457243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475249953.mount: Deactivated successfully. 
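Editor's note: the cilium image pull that completes below reports 166730503 bytes read over 10.187614106s, so the effective transfer rate follows from a single division. A trivial sketch of the arithmetic using the numbers from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 166730503                   // "bytes read" reported when the pull stopped
	dur, _ := time.ParseDuration("10.187614106s") // pull time reported by containerd
	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.1f MiB in %s ≈ %.1f MiB/s\n", mib, dur, mib/dur.Seconds())
}
```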
Sep 9 05:33:42.460538 containerd[1604]: time="2025-09-09T05:33:42.460501632Z" level=info msg="CreateContainer within sandbox \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\"" Sep 9 05:33:42.462702 containerd[1604]: time="2025-09-09T05:33:42.462652126Z" level=info msg="StartContainer for \"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\"" Sep 9 05:33:42.463616 containerd[1604]: time="2025-09-09T05:33:42.463593003Z" level=info msg="connecting to shim 42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92" address="unix:///run/containerd/s/67a4caa0648517c79e15176492a12af3655f9c189379dd81d92fdb0dfeb74002" protocol=ttrpc version=3 Sep 9 05:33:42.519879 systemd[1]: Started cri-containerd-42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92.scope - libcontainer container 42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92. Sep 9 05:33:42.549834 containerd[1604]: time="2025-09-09T05:33:42.549773665Z" level=info msg="StartContainer for \"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\" returns successfully" Sep 9 05:33:42.558344 systemd[1]: cri-containerd-42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92.scope: Deactivated successfully. Sep 9 05:33:42.559867 containerd[1604]: time="2025-09-09T05:33:42.559821408Z" level=info msg="received exit event container_id:\"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\" id:\"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\" pid:3156 exited_at:{seconds:1757396022 nanos:559387965}" Sep 9 05:33:42.560067 containerd[1604]: time="2025-09-09T05:33:42.560010087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\" id:\"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\" pid:3156 exited_at:{seconds:1757396022 nanos:559387965}" Sep 9 05:33:42.578495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92-rootfs.mount: Deactivated successfully. 
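Editor's note: the container exit events above carry `exited_at:{seconds:… nanos:…}`, a protobuf-style timestamp; converting it with `time.Unix` recovers the wall-clock instant, which lands at the same moment the exit event was logged. Sketch using the values from the mount-cgroup exit:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the 42212baad… TaskExit event in the log.
	exitedAt := time.Unix(1757396022, 559387965).UTC()
	fmt.Println(exitedAt) // 2025-09-09 05:33:42.559387965 +0000 UTC
}
```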
Sep 9 05:33:43.250116 kubelet[2730]: E0909 05:33:43.250084 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:43.251597 containerd[1604]: time="2025-09-09T05:33:43.251562781Z" level=info msg="CreateContainer within sandbox \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:33:43.261491 containerd[1604]: time="2025-09-09T05:33:43.261440747Z" level=info msg="Container 9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:33:43.268547 containerd[1604]: time="2025-09-09T05:33:43.268495285Z" level=info msg="CreateContainer within sandbox \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\"" Sep 9 05:33:43.268973 containerd[1604]: time="2025-09-09T05:33:43.268952613Z" level=info msg="StartContainer for \"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\"" Sep 9 05:33:43.269676 containerd[1604]: time="2025-09-09T05:33:43.269650256Z" level=info msg="connecting to shim 9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d" address="unix:///run/containerd/s/67a4caa0648517c79e15176492a12af3655f9c189379dd81d92fdb0dfeb74002" protocol=ttrpc version=3 Sep 9 05:33:43.290844 systemd[1]: Started cri-containerd-9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d.scope - libcontainer container 9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d. Sep 9 05:33:43.318775 containerd[1604]: time="2025-09-09T05:33:43.318723987Z" level=info msg="StartContainer for \"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\" returns successfully" Sep 9 05:33:43.332981 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 05:33:43.333207 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:33:43.333348 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:33:43.334684 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:33:43.337545 systemd[1]: cri-containerd-9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d.scope: Deactivated successfully. Sep 9 05:33:43.339859 containerd[1604]: time="2025-09-09T05:33:43.339607309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\" id:\"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\" pid:3200 exited_at:{seconds:1757396023 nanos:339264187}" Sep 9 05:33:43.339935 containerd[1604]: time="2025-09-09T05:33:43.339719522Z" level=info msg="received exit event container_id:\"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\" id:\"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\" pid:3200 exited_at:{seconds:1757396023 nanos:339264187}" Sep 9 05:33:43.363678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:33:44.034113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3615268791.mount: Deactivated successfully. 
Sep 9 05:33:44.253610 kubelet[2730]: E0909 05:33:44.253265 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:44.257774 containerd[1604]: time="2025-09-09T05:33:44.257711939Z" level=info msg="CreateContainer within sandbox \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 05:33:44.345695 containerd[1604]: time="2025-09-09T05:33:44.345650772Z" level=info msg="Container 9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:33:44.349022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2396639264.mount: Deactivated successfully. Sep 9 05:33:44.359840 containerd[1604]: time="2025-09-09T05:33:44.359788892Z" level=info msg="CreateContainer within sandbox \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\"" Sep 9 05:33:44.360197 containerd[1604]: time="2025-09-09T05:33:44.360161138Z" level=info msg="StartContainer for \"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\"" Sep 9 05:33:44.361503 containerd[1604]: time="2025-09-09T05:33:44.361476733Z" level=info msg="connecting to shim 9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea" address="unix:///run/containerd/s/67a4caa0648517c79e15176492a12af3655f9c189379dd81d92fdb0dfeb74002" protocol=ttrpc version=3 Sep 9 05:33:44.382774 systemd[1]: Started cri-containerd-9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea.scope - libcontainer container 9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea. 
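Editor's note: the `mount-bpf-fs` init container created above is, as its name suggests, the step in the cilium pod that ensures a `bpf` filesystem is available (conventionally mounted at `/sys/fs/bpf`). A small sketch, not cilium's own code, that checks `/proc/mounts` for such an entry on the host:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[2] == "bpf" {
			fmt.Printf("bpf filesystem mounted at %s\n", fields[1])
			found = true
		}
	}
	if !found {
		fmt.Println("no bpf filesystem mounted")
	}
}
```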
Sep 9 05:33:44.383318 containerd[1604]: time="2025-09-09T05:33:44.383219384Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:44.385283 containerd[1604]: time="2025-09-09T05:33:44.385255155Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 05:33:44.386166 containerd[1604]: time="2025-09-09T05:33:44.386144973Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:33:44.388070 containerd[1604]: time="2025-09-09T05:33:44.388045797Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.945145023s" Sep 9 05:33:44.388127 containerd[1604]: time="2025-09-09T05:33:44.388073560Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 05:33:44.390197 containerd[1604]: time="2025-09-09T05:33:44.390164594Z" level=info msg="CreateContainer within sandbox \"a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 05:33:44.400068 containerd[1604]: time="2025-09-09T05:33:44.400026586Z" level=info msg="Container 0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:33:44.409678 containerd[1604]: time="2025-09-09T05:33:44.409044247Z" level=info msg="CreateContainer within sandbox \"a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\"" Sep 9 05:33:44.410245 containerd[1604]: time="2025-09-09T05:33:44.410162317Z" level=info msg="StartContainer for \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\"" Sep 9 05:33:44.413562 containerd[1604]: time="2025-09-09T05:33:44.413534652Z" level=info msg="connecting to shim 0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91" address="unix:///run/containerd/s/217d24148d2ed5ad5546306d5dd7dffdbee71a38ca6a095af4d0ab61284c4a03" protocol=ttrpc version=3 Sep 9 05:33:44.428266 containerd[1604]: time="2025-09-09T05:33:44.428231542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\" id:\"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\" pid:3263 exited_at:{seconds:1757396024 nanos:427987740}" Sep 9 05:33:44.428374 containerd[1604]: time="2025-09-09T05:33:44.428288561Z" level=info msg="received exit event container_id:\"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\" id:\"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\" pid:3263 exited_at:{seconds:1757396024 
nanos:427987740}" Sep 9 05:33:44.429201 containerd[1604]: time="2025-09-09T05:33:44.429179420Z" level=info msg="StartContainer for \"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\" returns successfully" Sep 9 05:33:44.433830 systemd[1]: Started cri-containerd-0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91.scope - libcontainer container 0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91. Sep 9 05:33:44.434095 systemd[1]: cri-containerd-9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea.scope: Deactivated successfully. Sep 9 05:33:44.675872 containerd[1604]: time="2025-09-09T05:33:44.675359381Z" level=info msg="StartContainer for \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\" returns successfully" Sep 9 05:33:45.192548 update_engine[1587]: I20250909 05:33:45.192447 1587 update_attempter.cc:509] Updating boot flags... Sep 9 05:33:45.265667 kubelet[2730]: E0909 05:33:45.262379 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:45.266143 containerd[1604]: time="2025-09-09T05:33:45.266103670Z" level=info msg="CreateContainer within sandbox \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 05:33:45.276468 kubelet[2730]: E0909 05:33:45.276434 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:45.302803 containerd[1604]: time="2025-09-09T05:33:45.302122075Z" level=info msg="Container 82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:33:45.312872 containerd[1604]: time="2025-09-09T05:33:45.312780803Z" level=info msg="CreateContainer within sandbox \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\"" Sep 9 05:33:45.317809 containerd[1604]: time="2025-09-09T05:33:45.317748157Z" level=info msg="StartContainer for \"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\"" Sep 9 05:33:45.319657 containerd[1604]: time="2025-09-09T05:33:45.318553503Z" level=info msg="connecting to shim 82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e" address="unix:///run/containerd/s/67a4caa0648517c79e15176492a12af3655f9c189379dd81d92fdb0dfeb74002" protocol=ttrpc version=3 Sep 9 05:33:45.384789 systemd[1]: Started cri-containerd-82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e.scope - libcontainer container 82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e. Sep 9 05:33:45.429584 systemd[1]: cri-containerd-82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e.scope: Deactivated successfully. 
Sep 9 05:33:45.430985 containerd[1604]: time="2025-09-09T05:33:45.430945056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\" id:\"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\" pid:3355 exited_at:{seconds:1757396025 nanos:430618056}" Sep 9 05:33:45.432131 containerd[1604]: time="2025-09-09T05:33:45.432107608Z" level=info msg="received exit event container_id:\"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\" id:\"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\" pid:3355 exited_at:{seconds:1757396025 nanos:430618056}" Sep 9 05:33:45.439220 containerd[1604]: time="2025-09-09T05:33:45.439180542Z" level=info msg="StartContainer for \"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\" returns successfully" Sep 9 05:33:45.454857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e-rootfs.mount: Deactivated successfully. Sep 9 05:33:46.281980 kubelet[2730]: E0909 05:33:46.281607 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:46.281980 kubelet[2730]: E0909 05:33:46.281776 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:46.284334 containerd[1604]: time="2025-09-09T05:33:46.284293853Z" level=info msg="CreateContainer within sandbox \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 05:33:46.295584 kubelet[2730]: I0909 05:33:46.295520 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lsprc" podStartSLOduration=3.18245816 podStartE2EDuration="15.295490918s" podCreationTimestamp="2025-09-09 05:33:31 +0000 UTC" firstStartedPulling="2025-09-09 05:33:32.27553045 +0000 UTC m=+6.156327881" lastFinishedPulling="2025-09-09 05:33:44.388563208 +0000 UTC m=+18.269360639" observedRunningTime="2025-09-09 05:33:45.310248435 +0000 UTC m=+19.191045876" watchObservedRunningTime="2025-09-09 05:33:46.295490918 +0000 UTC m=+20.176288379" Sep 9 05:33:46.316091 containerd[1604]: time="2025-09-09T05:33:46.316041265Z" level=info msg="Container 8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:33:46.319786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505440764.mount: Deactivated successfully. 
Sep 9 05:33:46.323190 containerd[1604]: time="2025-09-09T05:33:46.323143996Z" level=info msg="CreateContainer within sandbox \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\"" Sep 9 05:33:46.323638 containerd[1604]: time="2025-09-09T05:33:46.323602344Z" level=info msg="StartContainer for \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\"" Sep 9 05:33:46.324447 containerd[1604]: time="2025-09-09T05:33:46.324423529Z" level=info msg="connecting to shim 8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c" address="unix:///run/containerd/s/67a4caa0648517c79e15176492a12af3655f9c189379dd81d92fdb0dfeb74002" protocol=ttrpc version=3 Sep 9 05:33:46.350756 systemd[1]: Started cri-containerd-8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c.scope - libcontainer container 8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c. Sep 9 05:33:46.385349 containerd[1604]: time="2025-09-09T05:33:46.385308321Z" level=info msg="StartContainer for \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" returns successfully" Sep 9 05:33:46.452592 containerd[1604]: time="2025-09-09T05:33:46.451696483Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" id:\"4f05c3157fa4de1138ebfedbe41c6af9b90c780d80a0ff3b711530c3bc9ed2ca\" pid:3423 exited_at:{seconds:1757396026 nanos:451011315}" Sep 9 05:33:46.512961 kubelet[2730]: I0909 05:33:46.512917 2730 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 05:33:46.540067 kubelet[2730]: I0909 05:33:46.539935 2730 status_manager.go:890] "Failed to get status for pod" podUID="4702ffd6-1d5e-4365-8041-f00bf7f40dab" pod="kube-system/coredns-668d6bf9bc-wtx7h" err="pods \"coredns-668d6bf9bc-wtx7h\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Sep 9 05:33:46.541883 kubelet[2730]: W0909 05:33:46.541795 2730 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 9 05:33:46.541883 kubelet[2730]: E0909 05:33:46.541832 2730 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 9 05:33:46.548701 systemd[1]: Created slice kubepods-burstable-pod4702ffd6_1d5e_4365_8041_f00bf7f40dab.slice - libcontainer container kubepods-burstable-pod4702ffd6_1d5e_4365_8041_f00bf7f40dab.slice. Sep 9 05:33:46.562235 systemd[1]: Created slice kubepods-burstable-pod867e57cc_0def_431e_a727_55aa6731cac5.slice - libcontainer container kubepods-burstable-pod867e57cc_0def_431e_a727_55aa6731cac5.slice. 
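Editor's note: the `kubepods-<qos>-pod<uid>.slice` names created above follow directly from each pod's QoS class and UID with dashes replaced by underscores (compare the coredns pod UID `4702ffd6-1d5e-4365-8041-f00bf7f40dab` in the surrounding entries with the slice `kubepods-burstable-pod4702ffd6_1d5e_4365_8041_f00bf7f40dab.slice`). A sketch of that mapping, stated as an observation of the naming seen in this log rather than a specification:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the kubepods-<qos>-pod<uid>.slice pattern observed in the log.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("burstable", "4702ffd6-1d5e-4365-8041-f00bf7f40dab"))
	fmt.Println(sliceName("besteffort", "06a0d814-4573-4e5a-9b31-8dff9fa7abf9"))
	// kubepods-burstable-pod4702ffd6_1d5e_4365_8041_f00bf7f40dab.slice
	// kubepods-besteffort-pod06a0d814_4573_4e5a_9b31_8dff9fa7abf9.slice
}
```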
Sep 9 05:33:46.646873 kubelet[2730]: I0909 05:33:46.646817 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/867e57cc-0def-431e-a727-55aa6731cac5-config-volume\") pod \"coredns-668d6bf9bc-qwngx\" (UID: \"867e57cc-0def-431e-a727-55aa6731cac5\") " pod="kube-system/coredns-668d6bf9bc-qwngx" Sep 9 05:33:46.646873 kubelet[2730]: I0909 05:33:46.646873 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rggvg\" (UniqueName: \"kubernetes.io/projected/867e57cc-0def-431e-a727-55aa6731cac5-kube-api-access-rggvg\") pod \"coredns-668d6bf9bc-qwngx\" (UID: \"867e57cc-0def-431e-a727-55aa6731cac5\") " pod="kube-system/coredns-668d6bf9bc-qwngx" Sep 9 05:33:46.647060 kubelet[2730]: I0909 05:33:46.646906 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56q47\" (UniqueName: \"kubernetes.io/projected/4702ffd6-1d5e-4365-8041-f00bf7f40dab-kube-api-access-56q47\") pod \"coredns-668d6bf9bc-wtx7h\" (UID: \"4702ffd6-1d5e-4365-8041-f00bf7f40dab\") " pod="kube-system/coredns-668d6bf9bc-wtx7h" Sep 9 05:33:46.647060 kubelet[2730]: I0909 05:33:46.646925 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4702ffd6-1d5e-4365-8041-f00bf7f40dab-config-volume\") pod \"coredns-668d6bf9bc-wtx7h\" (UID: \"4702ffd6-1d5e-4365-8041-f00bf7f40dab\") " pod="kube-system/coredns-668d6bf9bc-wtx7h" Sep 9 05:33:47.292609 kubelet[2730]: E0909 05:33:47.292574 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:47.306723 kubelet[2730]: I0909 05:33:47.306671 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4lmm6" podStartSLOduration=6.117854651 podStartE2EDuration="16.306654417s" podCreationTimestamp="2025-09-09 05:33:31 +0000 UTC" firstStartedPulling="2025-09-09 05:33:32.253583383 +0000 UTC m=+6.134380824" lastFinishedPulling="2025-09-09 05:33:42.442383149 +0000 UTC m=+16.323180590" observedRunningTime="2025-09-09 05:33:47.306022592 +0000 UTC m=+21.186820033" watchObservedRunningTime="2025-09-09 05:33:47.306654417 +0000 UTC m=+21.187451859" Sep 9 05:33:47.756129 kubelet[2730]: E0909 05:33:47.756068 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:47.767407 kubelet[2730]: E0909 05:33:47.766913 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:47.782001 containerd[1604]: time="2025-09-09T05:33:47.781941959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wtx7h,Uid:4702ffd6-1d5e-4365-8041-f00bf7f40dab,Namespace:kube-system,Attempt:0,}" Sep 9 05:33:47.783539 containerd[1604]: time="2025-09-09T05:33:47.783484188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qwngx,Uid:867e57cc-0def-431e-a727-55aa6731cac5,Namespace:kube-system,Attempt:0,}" Sep 9 05:33:48.294321 kubelet[2730]: E0909 05:33:48.294278 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:48.544513 systemd-networkd[1485]: cilium_host: Link UP Sep 9 05:33:48.544692 systemd-networkd[1485]: cilium_net: Link UP Sep 9 05:33:48.545041 systemd-networkd[1485]: cilium_net: Gained carrier Sep 9 05:33:48.545324 systemd-networkd[1485]: cilium_host: Gained carrier Sep 9 05:33:48.643464 systemd-networkd[1485]: cilium_vxlan: Link UP Sep 9 05:33:48.643473 systemd-networkd[1485]: cilium_vxlan: Gained carrier Sep 9 05:33:48.805856 systemd-networkd[1485]: cilium_net: Gained IPv6LL Sep 9 05:33:48.850668 kernel: NET: Registered PF_ALG protocol family Sep 9 05:33:49.180839 systemd-networkd[1485]: cilium_host: Gained IPv6LL Sep 9 05:33:49.296362 kubelet[2730]: E0909 05:33:49.296333 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:49.485676 systemd-networkd[1485]: lxc_health: Link UP Sep 9 05:33:49.488169 systemd-networkd[1485]: lxc_health: Gained carrier Sep 9 05:33:49.821283 systemd-networkd[1485]: lxcc43eae9b22dc: Link UP Sep 9 05:33:49.833276 systemd-networkd[1485]: lxcb91ef390ad80: Link UP Sep 9 05:33:49.842941 kernel: eth0: renamed from tmp34fc8 Sep 9 05:33:49.843152 kernel: eth0: renamed from tmpd5476 Sep 9 05:33:49.843278 systemd-networkd[1485]: lxcb91ef390ad80: Gained carrier Sep 9 05:33:49.844287 systemd-networkd[1485]: lxcc43eae9b22dc: Gained carrier Sep 9 05:33:50.204859 systemd-networkd[1485]: cilium_vxlan: Gained IPv6LL Sep 9 05:33:50.298709 kubelet[2730]: E0909 05:33:50.298663 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:50.972906 systemd-networkd[1485]: lxc_health: Gained IPv6LL Sep 9 05:33:51.100864 systemd-networkd[1485]: lxcc43eae9b22dc: Gained IPv6LL Sep 9 05:33:51.308528 kubelet[2730]: E0909 05:33:51.308105 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:51.548844 systemd-networkd[1485]: lxcb91ef390ad80: Gained IPv6LL Sep 9 05:33:52.310380 kubelet[2730]: E0909 05:33:52.310347 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:53.142146 containerd[1604]: time="2025-09-09T05:33:53.141611231Z" level=info msg="connecting to shim d5476d62e1f11a15ebb67b452baed9ee6d24ae472616dcecfe0a8d3bf621eff3" address="unix:///run/containerd/s/f6e4f7a6059cb0cadcf35f5583d0ee122931738792244cfc97d3729296efb414" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:33:53.143852 containerd[1604]: time="2025-09-09T05:33:53.143823495Z" level=info msg="connecting to shim 34fc8941fa25b480dad306afc4b1d5c283de7cfca3ad12680aceb96b8c8a6bcb" address="unix:///run/containerd/s/0d49ddb0bff3426cdf371330cbc4bd93e9361b81aeb818a9de1dcbfcf7b569ff" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:33:53.173767 systemd[1]: Started cri-containerd-d5476d62e1f11a15ebb67b452baed9ee6d24ae472616dcecfe0a8d3bf621eff3.scope - libcontainer container d5476d62e1f11a15ebb67b452baed9ee6d24ae472616dcecfe0a8d3bf621eff3. 
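Editor's note: systemd-networkd reports the cilium_host/cilium_net/cilium_vxlan devices and the per-pod lxc* interfaces coming up, gaining carrier, and acquiring IPv6 link-local addresses (IPv6LL). The same picture can be recovered on the node with the standard library; the name prefixes below are simply the ones appearing in this log:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "cilium_") && !strings.HasPrefix(ifc.Name, "lxc") {
			continue
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipNet, ok := a.(*net.IPNet); ok && ipNet.IP.IsLinkLocalUnicast() {
				fmt.Printf("%-16s up=%v link-local %s\n",
					ifc.Name, ifc.Flags&net.FlagUp != 0, ipNet.IP)
			}
		}
	}
}
```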
Sep 9 05:33:53.178677 systemd[1]: Started cri-containerd-34fc8941fa25b480dad306afc4b1d5c283de7cfca3ad12680aceb96b8c8a6bcb.scope - libcontainer container 34fc8941fa25b480dad306afc4b1d5c283de7cfca3ad12680aceb96b8c8a6bcb. Sep 9 05:33:53.185896 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:33:53.191562 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:33:53.318369 containerd[1604]: time="2025-09-09T05:33:53.318321953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wtx7h,Uid:4702ffd6-1d5e-4365-8041-f00bf7f40dab,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5476d62e1f11a15ebb67b452baed9ee6d24ae472616dcecfe0a8d3bf621eff3\"" Sep 9 05:33:53.318800 kubelet[2730]: E0909 05:33:53.318765 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:53.348083 containerd[1604]: time="2025-09-09T05:33:53.320728574Z" level=info msg="CreateContainer within sandbox \"d5476d62e1f11a15ebb67b452baed9ee6d24ae472616dcecfe0a8d3bf621eff3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:33:53.620262 containerd[1604]: time="2025-09-09T05:33:53.620216049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qwngx,Uid:867e57cc-0def-431e-a727-55aa6731cac5,Namespace:kube-system,Attempt:0,} returns sandbox id \"34fc8941fa25b480dad306afc4b1d5c283de7cfca3ad12680aceb96b8c8a6bcb\"" Sep 9 05:33:53.621138 kubelet[2730]: E0909 05:33:53.621097 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:53.622828 containerd[1604]: time="2025-09-09T05:33:53.622784265Z" level=info msg="CreateContainer within sandbox \"34fc8941fa25b480dad306afc4b1d5c283de7cfca3ad12680aceb96b8c8a6bcb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:33:53.771610 containerd[1604]: time="2025-09-09T05:33:53.771567179Z" level=info msg="Container 0fefa12fd53c87f8643e6727fda1bb24ed260e41054f948ed544e9b3a7a805b0: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:33:53.773840 containerd[1604]: time="2025-09-09T05:33:53.773801054Z" level=info msg="Container 8f937c7c1ab10c90f94f2dbd299c9f632d108b70aacd50984846bb42f61b05b4: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:33:53.778191 containerd[1604]: time="2025-09-09T05:33:53.778159098Z" level=info msg="CreateContainer within sandbox \"d5476d62e1f11a15ebb67b452baed9ee6d24ae472616dcecfe0a8d3bf621eff3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fefa12fd53c87f8643e6727fda1bb24ed260e41054f948ed544e9b3a7a805b0\"" Sep 9 05:33:53.778930 containerd[1604]: time="2025-09-09T05:33:53.778581605Z" level=info msg="StartContainer for \"0fefa12fd53c87f8643e6727fda1bb24ed260e41054f948ed544e9b3a7a805b0\"" Sep 9 05:33:53.779452 containerd[1604]: time="2025-09-09T05:33:53.779412983Z" level=info msg="connecting to shim 0fefa12fd53c87f8643e6727fda1bb24ed260e41054f948ed544e9b3a7a805b0" address="unix:///run/containerd/s/f6e4f7a6059cb0cadcf35f5583d0ee122931738792244cfc97d3729296efb414" protocol=ttrpc version=3 Sep 9 05:33:53.781822 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:47046.service - OpenSSH per-connection server daemon (10.0.0.1:47046). 
Sep 9 05:33:53.784521 containerd[1604]: time="2025-09-09T05:33:53.784493700Z" level=info msg="CreateContainer within sandbox \"34fc8941fa25b480dad306afc4b1d5c283de7cfca3ad12680aceb96b8c8a6bcb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f937c7c1ab10c90f94f2dbd299c9f632d108b70aacd50984846bb42f61b05b4\"" Sep 9 05:33:53.785674 containerd[1604]: time="2025-09-09T05:33:53.785334296Z" level=info msg="StartContainer for \"8f937c7c1ab10c90f94f2dbd299c9f632d108b70aacd50984846bb42f61b05b4\"" Sep 9 05:33:53.786689 containerd[1604]: time="2025-09-09T05:33:53.786656692Z" level=info msg="connecting to shim 8f937c7c1ab10c90f94f2dbd299c9f632d108b70aacd50984846bb42f61b05b4" address="unix:///run/containerd/s/0d49ddb0bff3426cdf371330cbc4bd93e9361b81aeb818a9de1dcbfcf7b569ff" protocol=ttrpc version=3 Sep 9 05:33:53.794237 systemd[1]: Started cri-containerd-0fefa12fd53c87f8643e6727fda1bb24ed260e41054f948ed544e9b3a7a805b0.scope - libcontainer container 0fefa12fd53c87f8643e6727fda1bb24ed260e41054f948ed544e9b3a7a805b0. Sep 9 05:33:53.816819 systemd[1]: Started cri-containerd-8f937c7c1ab10c90f94f2dbd299c9f632d108b70aacd50984846bb42f61b05b4.scope - libcontainer container 8f937c7c1ab10c90f94f2dbd299c9f632d108b70aacd50984846bb42f61b05b4. Sep 9 05:33:53.829742 containerd[1604]: time="2025-09-09T05:33:53.829668683Z" level=info msg="StartContainer for \"0fefa12fd53c87f8643e6727fda1bb24ed260e41054f948ed544e9b3a7a805b0\" returns successfully" Sep 9 05:33:53.834581 sshd[3986]: Accepted publickey for core from 10.0.0.1 port 47046 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:33:53.836301 sshd-session[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:33:53.841499 systemd-logind[1585]: New session 8 of user core. Sep 9 05:33:53.848367 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 05:33:53.858667 containerd[1604]: time="2025-09-09T05:33:53.858584385Z" level=info msg="StartContainer for \"8f937c7c1ab10c90f94f2dbd299c9f632d108b70aacd50984846bb42f61b05b4\" returns successfully" Sep 9 05:33:53.992710 sshd[4045]: Connection closed by 10.0.0.1 port 47046 Sep 9 05:33:53.992984 sshd-session[3986]: pam_unix(sshd:session): session closed for user core Sep 9 05:33:53.996404 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:47046.service: Deactivated successfully. Sep 9 05:33:53.998301 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 05:33:54.000455 systemd-logind[1585]: Session 8 logged out. Waiting for processes to exit. Sep 9 05:33:54.001402 systemd-logind[1585]: Removed session 8. 
Sep 9 05:33:54.342510 kubelet[2730]: E0909 05:33:54.342460 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:54.345161 kubelet[2730]: E0909 05:33:54.345126 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:54.353255 kubelet[2730]: I0909 05:33:54.353198 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wtx7h" podStartSLOduration=23.353182088 podStartE2EDuration="23.353182088s" podCreationTimestamp="2025-09-09 05:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:33:54.352700129 +0000 UTC m=+28.233497570" watchObservedRunningTime="2025-09-09 05:33:54.353182088 +0000 UTC m=+28.233979529" Sep 9 05:33:55.346403 kubelet[2730]: E0909 05:33:55.346370 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:55.346910 kubelet[2730]: E0909 05:33:55.346512 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:56.351356 kubelet[2730]: E0909 05:33:56.351316 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:56.351835 kubelet[2730]: E0909 05:33:56.351441 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:33:59.009402 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:47058.service - OpenSSH per-connection server daemon (10.0.0.1:47058). Sep 9 05:33:59.072880 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 47058 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:33:59.074699 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:33:59.079134 systemd-logind[1585]: New session 9 of user core. Sep 9 05:33:59.098737 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 05:33:59.211744 sshd[4085]: Connection closed by 10.0.0.1 port 47058 Sep 9 05:33:59.212128 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Sep 9 05:33:59.217288 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:47058.service: Deactivated successfully. Sep 9 05:33:59.219341 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 05:33:59.220346 systemd-logind[1585]: Session 9 logged out. Waiting for processes to exit. Sep 9 05:33:59.221419 systemd-logind[1585]: Removed session 9. Sep 9 05:34:04.228167 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:40788.service - OpenSSH per-connection server daemon (10.0.0.1:40788). Sep 9 05:34:04.274927 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 40788 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:04.276169 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:04.280080 systemd-logind[1585]: New session 10 of user core. 
Sep 9 05:34:04.286751 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 05:34:04.486603 sshd[4106]: Connection closed by 10.0.0.1 port 40788 Sep 9 05:34:04.486908 sshd-session[4103]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:04.490523 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:40788.service: Deactivated successfully. Sep 9 05:34:04.492269 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 05:34:04.492945 systemd-logind[1585]: Session 10 logged out. Waiting for processes to exit. Sep 9 05:34:04.493986 systemd-logind[1585]: Removed session 10. Sep 9 05:34:09.502282 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:40790.service - OpenSSH per-connection server daemon (10.0.0.1:40790). Sep 9 05:34:09.548505 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 40790 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:09.549777 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:09.553749 systemd-logind[1585]: New session 11 of user core. Sep 9 05:34:09.560757 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 05:34:09.668412 sshd[4123]: Connection closed by 10.0.0.1 port 40790 Sep 9 05:34:09.668827 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:09.679122 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:40790.service: Deactivated successfully. Sep 9 05:34:09.680984 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 05:34:09.681706 systemd-logind[1585]: Session 11 logged out. Waiting for processes to exit. Sep 9 05:34:09.684529 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:40794.service - OpenSSH per-connection server daemon (10.0.0.1:40794). Sep 9 05:34:09.685270 systemd-logind[1585]: Removed session 11. Sep 9 05:34:09.737986 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 40794 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:09.739122 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:09.743231 systemd-logind[1585]: New session 12 of user core. Sep 9 05:34:09.756856 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 05:34:09.897123 sshd[4140]: Connection closed by 10.0.0.1 port 40794 Sep 9 05:34:09.897581 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:09.908806 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:40794.service: Deactivated successfully. Sep 9 05:34:09.912759 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 05:34:09.914130 systemd-logind[1585]: Session 12 logged out. Waiting for processes to exit. Sep 9 05:34:09.917569 systemd-logind[1585]: Removed session 12. Sep 9 05:34:09.919522 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:56134.service - OpenSSH per-connection server daemon (10.0.0.1:56134). Sep 9 05:34:09.970136 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 56134 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:09.971296 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:09.975250 systemd-logind[1585]: New session 13 of user core. Sep 9 05:34:09.983738 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 9 05:34:10.097457 sshd[4154]: Connection closed by 10.0.0.1 port 56134 Sep 9 05:34:10.097792 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:10.101574 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:56134.service: Deactivated successfully. Sep 9 05:34:10.103714 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 05:34:10.105361 systemd-logind[1585]: Session 13 logged out. Waiting for processes to exit. Sep 9 05:34:10.106357 systemd-logind[1585]: Removed session 13. Sep 9 05:34:15.109990 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:56138.service - OpenSSH per-connection server daemon (10.0.0.1:56138). Sep 9 05:34:15.163997 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 56138 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:15.165479 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:15.169529 systemd-logind[1585]: New session 14 of user core. Sep 9 05:34:15.185842 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 05:34:15.294877 sshd[4170]: Connection closed by 10.0.0.1 port 56138 Sep 9 05:34:15.295203 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:15.300116 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:56138.service: Deactivated successfully. Sep 9 05:34:15.302133 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 05:34:15.302901 systemd-logind[1585]: Session 14 logged out. Waiting for processes to exit. Sep 9 05:34:15.303977 systemd-logind[1585]: Removed session 14. Sep 9 05:34:20.306226 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:35434.service - OpenSSH per-connection server daemon (10.0.0.1:35434). Sep 9 05:34:20.351684 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 35434 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:20.352950 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:20.357027 systemd-logind[1585]: New session 15 of user core. Sep 9 05:34:20.368775 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 05:34:20.471902 sshd[4186]: Connection closed by 10.0.0.1 port 35434 Sep 9 05:34:20.472246 sshd-session[4183]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:20.475976 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:35434.service: Deactivated successfully. Sep 9 05:34:20.477811 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 05:34:20.478600 systemd-logind[1585]: Session 15 logged out. Waiting for processes to exit. Sep 9 05:34:20.479760 systemd-logind[1585]: Removed session 15. Sep 9 05:34:25.491439 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:35446.service - OpenSSH per-connection server daemon (10.0.0.1:35446). Sep 9 05:34:25.540053 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 35446 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:25.541893 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:25.546408 systemd-logind[1585]: New session 16 of user core. Sep 9 05:34:25.549774 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 05:34:25.656504 sshd[4202]: Connection closed by 10.0.0.1 port 35446 Sep 9 05:34:25.656930 sshd-session[4199]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:25.675114 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:35446.service: Deactivated successfully. 
Sep 9 05:34:25.676882 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 05:34:25.677671 systemd-logind[1585]: Session 16 logged out. Waiting for processes to exit. Sep 9 05:34:25.680465 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:35448.service - OpenSSH per-connection server daemon (10.0.0.1:35448). Sep 9 05:34:25.681418 systemd-logind[1585]: Removed session 16. Sep 9 05:34:25.740011 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 35448 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:25.741520 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:25.746752 systemd-logind[1585]: New session 17 of user core. Sep 9 05:34:25.757783 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 05:34:26.233675 sshd[4218]: Connection closed by 10.0.0.1 port 35448 Sep 9 05:34:26.235883 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:26.245347 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:35448.service: Deactivated successfully. Sep 9 05:34:26.247141 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 05:34:26.247915 systemd-logind[1585]: Session 17 logged out. Waiting for processes to exit. Sep 9 05:34:26.251292 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:35452.service - OpenSSH per-connection server daemon (10.0.0.1:35452). Sep 9 05:34:26.251966 systemd-logind[1585]: Removed session 17. Sep 9 05:34:26.300197 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 35452 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:26.301499 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:26.306068 systemd-logind[1585]: New session 18 of user core. Sep 9 05:34:26.315778 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 05:34:26.765565 sshd[4235]: Connection closed by 10.0.0.1 port 35452 Sep 9 05:34:26.766058 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:26.776374 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:35452.service: Deactivated successfully. Sep 9 05:34:26.778355 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 05:34:26.779103 systemd-logind[1585]: Session 18 logged out. Waiting for processes to exit. Sep 9 05:34:26.782359 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:35464.service - OpenSSH per-connection server daemon (10.0.0.1:35464). Sep 9 05:34:26.783066 systemd-logind[1585]: Removed session 18. Sep 9 05:34:26.828907 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 35464 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:26.830502 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:26.836689 systemd-logind[1585]: New session 19 of user core. Sep 9 05:34:26.841803 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 05:34:27.061272 sshd[4256]: Connection closed by 10.0.0.1 port 35464 Sep 9 05:34:27.061817 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:27.070616 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:35464.service: Deactivated successfully. Sep 9 05:34:27.072649 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 05:34:27.073450 systemd-logind[1585]: Session 19 logged out. Waiting for processes to exit. 
Sep 9 05:34:27.076021 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:35468.service - OpenSSH per-connection server daemon (10.0.0.1:35468). Sep 9 05:34:27.076680 systemd-logind[1585]: Removed session 19. Sep 9 05:34:27.120882 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 35468 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:27.122202 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:27.126338 systemd-logind[1585]: New session 20 of user core. Sep 9 05:34:27.132744 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 05:34:27.247209 sshd[4270]: Connection closed by 10.0.0.1 port 35468 Sep 9 05:34:27.247535 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:27.252119 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:35468.service: Deactivated successfully. Sep 9 05:34:27.254156 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 05:34:27.254904 systemd-logind[1585]: Session 20 logged out. Waiting for processes to exit. Sep 9 05:34:27.256221 systemd-logind[1585]: Removed session 20. Sep 9 05:34:32.263289 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:51630.service - OpenSSH per-connection server daemon (10.0.0.1:51630). Sep 9 05:34:32.301276 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 51630 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:32.302439 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:32.306150 systemd-logind[1585]: New session 21 of user core. Sep 9 05:34:32.315753 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 05:34:32.417584 sshd[4286]: Connection closed by 10.0.0.1 port 51630 Sep 9 05:34:32.417935 sshd-session[4283]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:32.423112 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:51630.service: Deactivated successfully. Sep 9 05:34:32.424887 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 05:34:32.425638 systemd-logind[1585]: Session 21 logged out. Waiting for processes to exit. Sep 9 05:34:32.426749 systemd-logind[1585]: Removed session 21. Sep 9 05:34:37.434181 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:51636.service - OpenSSH per-connection server daemon (10.0.0.1:51636). Sep 9 05:34:37.474797 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 51636 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:37.476133 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:37.480216 systemd-logind[1585]: New session 22 of user core. Sep 9 05:34:37.494760 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 05:34:37.723045 sshd[4306]: Connection closed by 10.0.0.1 port 51636 Sep 9 05:34:37.723298 sshd-session[4303]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:37.728514 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:51636.service: Deactivated successfully. Sep 9 05:34:37.731560 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 05:34:37.732669 systemd-logind[1585]: Session 22 logged out. Waiting for processes to exit. Sep 9 05:34:37.734131 systemd-logind[1585]: Removed session 22. 
Sep 9 05:34:41.206545 kubelet[2730]: E0909 05:34:41.206491 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:34:42.740744 systemd[1]: Started sshd@22-10.0.0.89:22-10.0.0.1:57242.service - OpenSSH per-connection server daemon (10.0.0.1:57242). Sep 9 05:34:42.800127 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 57242 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:42.801554 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:42.805848 systemd-logind[1585]: New session 23 of user core. Sep 9 05:34:42.814758 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 05:34:42.921230 sshd[4323]: Connection closed by 10.0.0.1 port 57242 Sep 9 05:34:42.921593 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:42.926138 systemd[1]: sshd@22-10.0.0.89:22-10.0.0.1:57242.service: Deactivated successfully. Sep 9 05:34:42.928135 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 05:34:42.928858 systemd-logind[1585]: Session 23 logged out. Waiting for processes to exit. Sep 9 05:34:42.930118 systemd-logind[1585]: Removed session 23. Sep 9 05:34:43.199655 kubelet[2730]: E0909 05:34:43.199504 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:34:43.200074 kubelet[2730]: E0909 05:34:43.199696 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:34:47.933841 systemd[1]: Started sshd@23-10.0.0.89:22-10.0.0.1:57254.service - OpenSSH per-connection server daemon (10.0.0.1:57254). Sep 9 05:34:47.997054 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 57254 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:47.998726 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:48.003051 systemd-logind[1585]: New session 24 of user core. Sep 9 05:34:48.013762 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 05:34:48.126285 sshd[4340]: Connection closed by 10.0.0.1 port 57254 Sep 9 05:34:48.126671 sshd-session[4337]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:48.140078 systemd[1]: sshd@23-10.0.0.89:22-10.0.0.1:57254.service: Deactivated successfully. Sep 9 05:34:48.141899 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 05:34:48.142746 systemd-logind[1585]: Session 24 logged out. Waiting for processes to exit. Sep 9 05:34:48.145765 systemd[1]: Started sshd@24-10.0.0.89:22-10.0.0.1:57266.service - OpenSSH per-connection server daemon (10.0.0.1:57266). Sep 9 05:34:48.146485 systemd-logind[1585]: Removed session 24. 
Sep 9 05:34:48.193199 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 57266 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:48.194617 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:48.198841 kubelet[2730]: E0909 05:34:48.198815 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:34:48.200971 systemd-logind[1585]: New session 25 of user core. Sep 9 05:34:48.206772 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 05:34:49.198980 kubelet[2730]: E0909 05:34:49.198940 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:34:49.529486 kubelet[2730]: I0909 05:34:49.529364 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qwngx" podStartSLOduration=78.529345962 podStartE2EDuration="1m18.529345962s" podCreationTimestamp="2025-09-09 05:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:33:54.374869565 +0000 UTC m=+28.255667006" watchObservedRunningTime="2025-09-09 05:34:49.529345962 +0000 UTC m=+83.410143403" Sep 9 05:34:49.530900 containerd[1604]: time="2025-09-09T05:34:49.530787272Z" level=info msg="StopContainer for \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\" with timeout 30 (s)" Sep 9 05:34:49.538335 containerd[1604]: time="2025-09-09T05:34:49.538291538Z" level=info msg="Stop container \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\" with signal terminated" Sep 9 05:34:49.555542 systemd[1]: cri-containerd-0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91.scope: Deactivated successfully. 
Sep 9 05:34:49.559311 containerd[1604]: time="2025-09-09T05:34:49.559270687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\" id:\"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\" pid:3296 exited_at:{seconds:1757396089 nanos:557161457}" Sep 9 05:34:49.559439 containerd[1604]: time="2025-09-09T05:34:49.559340951Z" level=info msg="received exit event container_id:\"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\" id:\"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\" pid:3296 exited_at:{seconds:1757396089 nanos:557161457}" Sep 9 05:34:49.566222 containerd[1604]: time="2025-09-09T05:34:49.566180023Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:34:49.572074 containerd[1604]: time="2025-09-09T05:34:49.572037925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" id:\"d5e29279ef537c29c2cab1a1e69300708e03ffd33913e7e65f8e36c564baacd5\" pid:4383 exited_at:{seconds:1757396089 nanos:571790531}" Sep 9 05:34:49.573997 containerd[1604]: time="2025-09-09T05:34:49.573962922Z" level=info msg="StopContainer for \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" with timeout 2 (s)" Sep 9 05:34:49.574221 containerd[1604]: time="2025-09-09T05:34:49.574200608Z" level=info msg="Stop container \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" with signal terminated" Sep 9 05:34:49.581985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91-rootfs.mount: Deactivated successfully. Sep 9 05:34:49.582663 systemd-networkd[1485]: lxc_health: Link DOWN Sep 9 05:34:49.582671 systemd-networkd[1485]: lxc_health: Lost carrier Sep 9 05:34:49.599950 containerd[1604]: time="2025-09-09T05:34:49.599908405Z" level=info msg="StopContainer for \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\" returns successfully" Sep 9 05:34:49.601996 systemd[1]: cri-containerd-8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c.scope: Deactivated successfully. Sep 9 05:34:49.602434 systemd[1]: cri-containerd-8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c.scope: Consumed 6.287s CPU time, 123.5M memory peak, 392K read from disk, 13.3M written to disk. 
Sep 9 05:34:49.603965 containerd[1604]: time="2025-09-09T05:34:49.603936241Z" level=info msg="received exit event container_id:\"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" id:\"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" pid:3393 exited_at:{seconds:1757396089 nanos:603767698}" Sep 9 05:34:49.604086 containerd[1604]: time="2025-09-09T05:34:49.604064717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" id:\"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" pid:3393 exited_at:{seconds:1757396089 nanos:603767698}" Sep 9 05:34:49.612350 containerd[1604]: time="2025-09-09T05:34:49.612312387Z" level=info msg="StopPodSandbox for \"a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71\"" Sep 9 05:34:49.612453 containerd[1604]: time="2025-09-09T05:34:49.612403932Z" level=info msg="Container to stop \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:34:49.621970 systemd[1]: cri-containerd-a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71.scope: Deactivated successfully. Sep 9 05:34:49.624126 containerd[1604]: time="2025-09-09T05:34:49.624086733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71\" id:\"a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71\" pid:2926 exit_status:137 exited_at:{seconds:1757396089 nanos:623752392}" Sep 9 05:34:49.633072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c-rootfs.mount: Deactivated successfully. 
Sep 9 05:34:49.644655 containerd[1604]: time="2025-09-09T05:34:49.644603877Z" level=info msg="StopContainer for \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" returns successfully" Sep 9 05:34:49.645710 containerd[1604]: time="2025-09-09T05:34:49.645313456Z" level=info msg="StopPodSandbox for \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\"" Sep 9 05:34:49.645710 containerd[1604]: time="2025-09-09T05:34:49.645377508Z" level=info msg="Container to stop \"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:34:49.645710 containerd[1604]: time="2025-09-09T05:34:49.645394711Z" level=info msg="Container to stop \"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:34:49.645710 containerd[1604]: time="2025-09-09T05:34:49.645405972Z" level=info msg="Container to stop \"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:34:49.645710 containerd[1604]: time="2025-09-09T05:34:49.645416533Z" level=info msg="Container to stop \"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:34:49.645710 containerd[1604]: time="2025-09-09T05:34:49.645426732Z" level=info msg="Container to stop \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:34:49.656329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71-rootfs.mount: Deactivated successfully. Sep 9 05:34:49.657112 systemd[1]: cri-containerd-64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a.scope: Deactivated successfully. Sep 9 05:34:49.658541 containerd[1604]: time="2025-09-09T05:34:49.658509795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" id:\"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" pid:2923 exit_status:137 exited_at:{seconds:1757396089 nanos:655525940}" Sep 9 05:34:49.658740 containerd[1604]: time="2025-09-09T05:34:49.658717854Z" level=info msg="shim disconnected" id=a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71 namespace=k8s.io Sep 9 05:34:49.658740 containerd[1604]: time="2025-09-09T05:34:49.658737030Z" level=warning msg="cleaning up after shim disconnected" id=a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71 namespace=k8s.io Sep 9 05:34:49.658805 containerd[1604]: time="2025-09-09T05:34:49.658744726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:34:49.660418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71-shm.mount: Deactivated successfully. 
Sep 9 05:34:49.680083 containerd[1604]: time="2025-09-09T05:34:49.680030081Z" level=info msg="TearDown network for sandbox \"a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71\" successfully" Sep 9 05:34:49.680083 containerd[1604]: time="2025-09-09T05:34:49.680066791Z" level=info msg="StopPodSandbox for \"a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71\" returns successfully" Sep 9 05:34:49.684013 containerd[1604]: time="2025-09-09T05:34:49.683979297Z" level=info msg="received exit event sandbox_id:\"a1feec42bb89d19a9105bc8d964c3c096a7ab3cabe62136f93b71de2d8625c71\" exit_status:137 exited_at:{seconds:1757396089 nanos:623752392}" Sep 9 05:34:49.696307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a-rootfs.mount: Deactivated successfully. Sep 9 05:34:49.701571 containerd[1604]: time="2025-09-09T05:34:49.701514387Z" level=info msg="received exit event sandbox_id:\"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" exit_status:137 exited_at:{seconds:1757396089 nanos:655525940}" Sep 9 05:34:49.702017 containerd[1604]: time="2025-09-09T05:34:49.701992574Z" level=info msg="TearDown network for sandbox \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" successfully" Sep 9 05:34:49.702056 containerd[1604]: time="2025-09-09T05:34:49.702016288Z" level=info msg="StopPodSandbox for \"64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a\" returns successfully" Sep 9 05:34:49.703487 containerd[1604]: time="2025-09-09T05:34:49.703465234Z" level=info msg="shim disconnected" id=64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a namespace=k8s.io Sep 9 05:34:49.703487 containerd[1604]: time="2025-09-09T05:34:49.703484421Z" level=warning msg="cleaning up after shim disconnected" id=64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a namespace=k8s.io Sep 9 05:34:49.703562 containerd[1604]: time="2025-09-09T05:34:49.703502897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:34:49.830548 kubelet[2730]: I0909 05:34:49.830510 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-lib-modules\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.830734 kubelet[2730]: I0909 05:34:49.830557 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f972031e-7481-41c7-8d11-a03cd44bc65d-hubble-tls\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.830734 kubelet[2730]: I0909 05:34:49.830572 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-bpf-maps\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.830734 kubelet[2730]: I0909 05:34:49.830586 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-cni-path\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.830734 kubelet[2730]: I0909 05:34:49.830602 2730 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-cilium-cgroup\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.830734 kubelet[2730]: I0909 05:34:49.830616 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvzhz\" (UniqueName: \"kubernetes.io/projected/d28da9bf-6dfb-48af-92a7-8e2058964ced-kube-api-access-lvzhz\") pod \"d28da9bf-6dfb-48af-92a7-8e2058964ced\" (UID: \"d28da9bf-6dfb-48af-92a7-8e2058964ced\") " Sep 9 05:34:49.830734 kubelet[2730]: I0909 05:34:49.830657 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-host-proc-sys-net\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.830877 kubelet[2730]: I0909 05:34:49.830670 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-host-proc-sys-kernel\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.830877 kubelet[2730]: I0909 05:34:49.830683 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-etc-cni-netd\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.830877 kubelet[2730]: I0909 05:34:49.830700 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f972031e-7481-41c7-8d11-a03cd44bc65d-cilium-config-path\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.830877 kubelet[2730]: I0909 05:34:49.830702 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:34:49.830877 kubelet[2730]: I0909 05:34:49.830730 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-hostproc" (OuterVolumeSpecName: "hostproc") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:34:49.830877 kubelet[2730]: I0909 05:34:49.830714 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-hostproc\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.831019 kubelet[2730]: I0909 05:34:49.830746 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:34:49.831019 kubelet[2730]: I0909 05:34:49.830760 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-cni-path" (OuterVolumeSpecName: "cni-path") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:34:49.831019 kubelet[2730]: I0909 05:34:49.830760 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f972031e-7481-41c7-8d11-a03cd44bc65d-clustermesh-secrets\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.831019 kubelet[2730]: I0909 05:34:49.830779 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-cilium-run\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.831019 kubelet[2730]: I0909 05:34:49.830793 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d28da9bf-6dfb-48af-92a7-8e2058964ced-cilium-config-path\") pod \"d28da9bf-6dfb-48af-92a7-8e2058964ced\" (UID: \"d28da9bf-6dfb-48af-92a7-8e2058964ced\") " Sep 9 05:34:49.831019 kubelet[2730]: I0909 05:34:49.830810 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-xtables-lock\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.831168 kubelet[2730]: I0909 05:34:49.830826 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skp6n\" (UniqueName: \"kubernetes.io/projected/f972031e-7481-41c7-8d11-a03cd44bc65d-kube-api-access-skp6n\") pod \"f972031e-7481-41c7-8d11-a03cd44bc65d\" (UID: \"f972031e-7481-41c7-8d11-a03cd44bc65d\") " Sep 9 05:34:49.831168 kubelet[2730]: I0909 05:34:49.830857 2730 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.831168 kubelet[2730]: I0909 05:34:49.830867 2730 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.831168 
kubelet[2730]: I0909 05:34:49.830874 2730 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.831168 kubelet[2730]: I0909 05:34:49.830882 2730 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.831279 kubelet[2730]: I0909 05:34:49.831190 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:34:49.831279 kubelet[2730]: I0909 05:34:49.831211 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:34:49.834549 kubelet[2730]: I0909 05:34:49.834212 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:34:49.834549 kubelet[2730]: I0909 05:34:49.834286 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:34:49.834549 kubelet[2730]: I0909 05:34:49.834368 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:34:49.834549 kubelet[2730]: I0909 05:34:49.834386 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:34:49.834705 kubelet[2730]: I0909 05:34:49.834598 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d28da9bf-6dfb-48af-92a7-8e2058964ced-kube-api-access-lvzhz" (OuterVolumeSpecName: "kube-api-access-lvzhz") pod "d28da9bf-6dfb-48af-92a7-8e2058964ced" (UID: "d28da9bf-6dfb-48af-92a7-8e2058964ced"). InnerVolumeSpecName "kube-api-access-lvzhz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:34:49.835022 kubelet[2730]: I0909 05:34:49.834988 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f972031e-7481-41c7-8d11-a03cd44bc65d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 05:34:49.835130 kubelet[2730]: I0909 05:34:49.835030 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f972031e-7481-41c7-8d11-a03cd44bc65d-kube-api-access-skp6n" (OuterVolumeSpecName: "kube-api-access-skp6n") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "kube-api-access-skp6n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:34:49.835550 kubelet[2730]: I0909 05:34:49.835527 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f972031e-7481-41c7-8d11-a03cd44bc65d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:34:49.837436 kubelet[2730]: I0909 05:34:49.837404 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d28da9bf-6dfb-48af-92a7-8e2058964ced-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d28da9bf-6dfb-48af-92a7-8e2058964ced" (UID: "d28da9bf-6dfb-48af-92a7-8e2058964ced"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:34:49.837750 kubelet[2730]: I0909 05:34:49.837725 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f972031e-7481-41c7-8d11-a03cd44bc65d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f972031e-7481-41c7-8d11-a03cd44bc65d" (UID: "f972031e-7481-41c7-8d11-a03cd44bc65d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:34:49.931963 kubelet[2730]: I0909 05:34:49.931898 2730 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.931963 kubelet[2730]: I0909 05:34:49.931937 2730 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lvzhz\" (UniqueName: \"kubernetes.io/projected/d28da9bf-6dfb-48af-92a7-8e2058964ced-kube-api-access-lvzhz\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.931963 kubelet[2730]: I0909 05:34:49.931948 2730 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.931963 kubelet[2730]: I0909 05:34:49.931965 2730 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f972031e-7481-41c7-8d11-a03cd44bc65d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.932197 kubelet[2730]: I0909 05:34:49.931991 2730 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.932197 kubelet[2730]: I0909 05:34:49.932002 2730 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.932197 kubelet[2730]: I0909 05:34:49.932014 2730 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f972031e-7481-41c7-8d11-a03cd44bc65d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.932197 kubelet[2730]: I0909 05:34:49.932025 2730 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.932197 kubelet[2730]: I0909 05:34:49.932035 2730 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d28da9bf-6dfb-48af-92a7-8e2058964ced-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.932197 kubelet[2730]: I0909 05:34:49.932056 2730 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f972031e-7481-41c7-8d11-a03cd44bc65d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.932197 kubelet[2730]: I0909 05:34:49.932066 2730 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-skp6n\" (UniqueName: \"kubernetes.io/projected/f972031e-7481-41c7-8d11-a03cd44bc65d-kube-api-access-skp6n\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:49.932197 kubelet[2730]: I0909 05:34:49.932076 2730 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f972031e-7481-41c7-8d11-a03cd44bc65d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 05:34:50.207813 systemd[1]: Removed slice kubepods-besteffort-podd28da9bf_6dfb_48af_92a7_8e2058964ced.slice - libcontainer container kubepods-besteffort-podd28da9bf_6dfb_48af_92a7_8e2058964ced.slice. 
Sep 9 05:34:50.209235 systemd[1]: Removed slice kubepods-burstable-podf972031e_7481_41c7_8d11_a03cd44bc65d.slice - libcontainer container kubepods-burstable-podf972031e_7481_41c7_8d11_a03cd44bc65d.slice. Sep 9 05:34:50.209485 systemd[1]: kubepods-burstable-podf972031e_7481_41c7_8d11_a03cd44bc65d.slice: Consumed 6.390s CPU time, 123.8M memory peak, 392K read from disk, 13.3M written to disk. Sep 9 05:34:50.463784 kubelet[2730]: I0909 05:34:50.463602 2730 scope.go:117] "RemoveContainer" containerID="0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91" Sep 9 05:34:50.465998 containerd[1604]: time="2025-09-09T05:34:50.465945110Z" level=info msg="RemoveContainer for \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\"" Sep 9 05:34:50.543069 containerd[1604]: time="2025-09-09T05:34:50.543009036Z" level=info msg="RemoveContainer for \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\" returns successfully" Sep 9 05:34:50.543553 kubelet[2730]: I0909 05:34:50.543411 2730 scope.go:117] "RemoveContainer" containerID="0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91" Sep 9 05:34:50.543872 containerd[1604]: time="2025-09-09T05:34:50.543824948Z" level=error msg="ContainerStatus for \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\": not found" Sep 9 05:34:50.544107 kubelet[2730]: E0909 05:34:50.544074 2730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\": not found" containerID="0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91" Sep 9 05:34:50.544221 kubelet[2730]: I0909 05:34:50.544117 2730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91"} err="failed to get container status \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\": rpc error: code = NotFound desc = an error occurred when try to find container \"0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91\": not found" Sep 9 05:34:50.544265 kubelet[2730]: I0909 05:34:50.544222 2730 scope.go:117] "RemoveContainer" containerID="8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c" Sep 9 05:34:50.546469 containerd[1604]: time="2025-09-09T05:34:50.546441498Z" level=info msg="RemoveContainer for \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\"" Sep 9 05:34:50.552681 containerd[1604]: time="2025-09-09T05:34:50.552598075Z" level=info msg="RemoveContainer for \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" returns successfully" Sep 9 05:34:50.552965 kubelet[2730]: I0909 05:34:50.552913 2730 scope.go:117] "RemoveContainer" containerID="82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e" Sep 9 05:34:50.554664 containerd[1604]: time="2025-09-09T05:34:50.554615648Z" level=info msg="RemoveContainer for \"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\"" Sep 9 05:34:50.559618 containerd[1604]: time="2025-09-09T05:34:50.559561587Z" level=info msg="RemoveContainer for \"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\" returns successfully" Sep 9 05:34:50.559904 kubelet[2730]: I0909 05:34:50.559835 2730 
scope.go:117] "RemoveContainer" containerID="9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea" Sep 9 05:34:50.562186 containerd[1604]: time="2025-09-09T05:34:50.562158509Z" level=info msg="RemoveContainer for \"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\"" Sep 9 05:34:50.566449 containerd[1604]: time="2025-09-09T05:34:50.566415340Z" level=info msg="RemoveContainer for \"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\" returns successfully" Sep 9 05:34:50.566669 kubelet[2730]: I0909 05:34:50.566618 2730 scope.go:117] "RemoveContainer" containerID="9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d" Sep 9 05:34:50.568277 containerd[1604]: time="2025-09-09T05:34:50.568236917Z" level=info msg="RemoveContainer for \"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\"" Sep 9 05:34:50.572285 containerd[1604]: time="2025-09-09T05:34:50.572212829Z" level=info msg="RemoveContainer for \"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\" returns successfully" Sep 9 05:34:50.572504 kubelet[2730]: I0909 05:34:50.572472 2730 scope.go:117] "RemoveContainer" containerID="42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92" Sep 9 05:34:50.574400 containerd[1604]: time="2025-09-09T05:34:50.574351735Z" level=info msg="RemoveContainer for \"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\"" Sep 9 05:34:50.578033 containerd[1604]: time="2025-09-09T05:34:50.577986343Z" level=info msg="RemoveContainer for \"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\" returns successfully" Sep 9 05:34:50.578262 kubelet[2730]: I0909 05:34:50.578218 2730 scope.go:117] "RemoveContainer" containerID="8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c" Sep 9 05:34:50.578508 containerd[1604]: time="2025-09-09T05:34:50.578471622Z" level=error msg="ContainerStatus for \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\": not found" Sep 9 05:34:50.578670 kubelet[2730]: E0909 05:34:50.578610 2730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\": not found" containerID="8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c" Sep 9 05:34:50.578720 kubelet[2730]: I0909 05:34:50.578678 2730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c"} err="failed to get container status \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c68b079c91419cb2cdbff8a0a4b1537a4d5a20bef00b217e898bc667e886e2c\": not found" Sep 9 05:34:50.578720 kubelet[2730]: I0909 05:34:50.578706 2730 scope.go:117] "RemoveContainer" containerID="82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e" Sep 9 05:34:50.579022 containerd[1604]: time="2025-09-09T05:34:50.578971780Z" level=error msg="ContainerStatus for \"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\": not 
found" Sep 9 05:34:50.579123 kubelet[2730]: E0909 05:34:50.579099 2730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\": not found" containerID="82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e" Sep 9 05:34:50.579168 kubelet[2730]: I0909 05:34:50.579122 2730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e"} err="failed to get container status \"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"82a16b6b9a1e6b7ba3d22722c94a201287f843bccf31128bf73fe9e02f1d5b3e\": not found" Sep 9 05:34:50.579168 kubelet[2730]: I0909 05:34:50.579148 2730 scope.go:117] "RemoveContainer" containerID="9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea" Sep 9 05:34:50.579348 containerd[1604]: time="2025-09-09T05:34:50.579309045Z" level=error msg="ContainerStatus for \"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\": not found" Sep 9 05:34:50.579491 kubelet[2730]: E0909 05:34:50.579402 2730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\": not found" containerID="9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea" Sep 9 05:34:50.579491 kubelet[2730]: I0909 05:34:50.579422 2730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea"} err="failed to get container status \"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ad021ed866bfc262ade15fd7ab13b10855a418447d2f7c53c53eb608509e7ea\": not found" Sep 9 05:34:50.579491 kubelet[2730]: I0909 05:34:50.579455 2730 scope.go:117] "RemoveContainer" containerID="9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d" Sep 9 05:34:50.579744 containerd[1604]: time="2025-09-09T05:34:50.579694935Z" level=error msg="ContainerStatus for \"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\": not found" Sep 9 05:34:50.579871 kubelet[2730]: E0909 05:34:50.579834 2730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\": not found" containerID="9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d" Sep 9 05:34:50.579871 kubelet[2730]: I0909 05:34:50.579856 2730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d"} err="failed to get container status \"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"9154748b126f3dfd931f18b1077a02006c313c1a0eff2d66bb8a0f173aff450d\": not found" Sep 9 05:34:50.579871 kubelet[2730]: I0909 05:34:50.579868 2730 scope.go:117] "RemoveContainer" containerID="42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92" Sep 9 05:34:50.580120 containerd[1604]: time="2025-09-09T05:34:50.580056787Z" level=error msg="ContainerStatus for \"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\": not found" Sep 9 05:34:50.580318 kubelet[2730]: E0909 05:34:50.580287 2730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\": not found" containerID="42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92" Sep 9 05:34:50.580395 kubelet[2730]: I0909 05:34:50.580317 2730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92"} err="failed to get container status \"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\": rpc error: code = NotFound desc = an error occurred when try to find container \"42212baad2b963634b9fbcdce9b1f22723e617250ff05df2a8f4bb2d310adc92\": not found" Sep 9 05:34:50.581865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64eee35aadc1f6ebaaaf2aaad082717935a091d4ad68f7e087ca320d7217de9a-shm.mount: Deactivated successfully. Sep 9 05:34:50.581973 systemd[1]: var-lib-kubelet-pods-d28da9bf\x2d6dfb\x2d48af\x2d92a7\x2d8e2058964ced-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlvzhz.mount: Deactivated successfully. Sep 9 05:34:50.582055 systemd[1]: var-lib-kubelet-pods-f972031e\x2d7481\x2d41c7\x2d8d11\x2da03cd44bc65d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dskp6n.mount: Deactivated successfully. Sep 9 05:34:50.582128 systemd[1]: var-lib-kubelet-pods-f972031e\x2d7481\x2d41c7\x2d8d11\x2da03cd44bc65d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 05:34:50.582225 systemd[1]: var-lib-kubelet-pods-f972031e\x2d7481\x2d41c7\x2d8d11\x2da03cd44bc65d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 05:34:51.255970 kubelet[2730]: E0909 05:34:51.255931 2730 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 05:34:51.579159 sshd[4356]: Connection closed by 10.0.0.1 port 57266 Sep 9 05:34:51.579752 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:51.593813 systemd[1]: sshd@24-10.0.0.89:22-10.0.0.1:57266.service: Deactivated successfully. Sep 9 05:34:51.596057 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 05:34:51.596785 systemd-logind[1585]: Session 25 logged out. Waiting for processes to exit. Sep 9 05:34:51.599442 systemd[1]: Started sshd@25-10.0.0.89:22-10.0.0.1:33382.service - OpenSSH per-connection server daemon (10.0.0.1:33382). Sep 9 05:34:51.600652 systemd-logind[1585]: Removed session 25. 
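The repeated ContainerStatus failures above all carry gRPC code NotFound: kubelet asks the runtime about container IDs it has just removed, the runtime reports they are already gone, and kubelet logs the non-fatal "DeleteContainer returned error". A minimal sketch of recognizing that case with grpc-go status codes (illustrative only, not kubelet's actual code path):

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isNotFound reports whether a CRI RPC error carries gRPC code NotFound,
// i.e. the container is already gone, as in the ContainerStatus failures above.
func isNotFound(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Simulated runtime response mirroring the log text.
	err := status.Error(codes.NotFound, `an error occurred when try to find container "0156efdc7d33994d91bbcf98cd6d4edd00e76327ec4b7924942fa839ed074b91": not found`)
	fmt.Println(isNotFound(err))                      // true
	fmt.Println(isNotFound(errors.New("io timeout"))) // false: unrelated errors keep their own codes
}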
Sep 9 05:34:51.661172 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 33382 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:51.662923 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:51.668133 systemd-logind[1585]: New session 26 of user core. Sep 9 05:34:51.678824 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 05:34:52.201921 kubelet[2730]: I0909 05:34:52.201873 2730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d28da9bf-6dfb-48af-92a7-8e2058964ced" path="/var/lib/kubelet/pods/d28da9bf-6dfb-48af-92a7-8e2058964ced/volumes" Sep 9 05:34:52.202561 kubelet[2730]: I0909 05:34:52.202483 2730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f972031e-7481-41c7-8d11-a03cd44bc65d" path="/var/lib/kubelet/pods/f972031e-7481-41c7-8d11-a03cd44bc65d/volumes" Sep 9 05:34:52.245763 sshd[4519]: Connection closed by 10.0.0.1 port 33382 Sep 9 05:34:52.246190 sshd-session[4516]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:52.262572 systemd[1]: sshd@25-10.0.0.89:22-10.0.0.1:33382.service: Deactivated successfully. Sep 9 05:34:52.265752 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 05:34:52.267065 kubelet[2730]: I0909 05:34:52.266989 2730 memory_manager.go:355] "RemoveStaleState removing state" podUID="f972031e-7481-41c7-8d11-a03cd44bc65d" containerName="cilium-agent" Sep 9 05:34:52.267065 kubelet[2730]: I0909 05:34:52.267021 2730 memory_manager.go:355] "RemoveStaleState removing state" podUID="d28da9bf-6dfb-48af-92a7-8e2058964ced" containerName="cilium-operator" Sep 9 05:34:52.267480 systemd-logind[1585]: Session 26 logged out. Waiting for processes to exit. Sep 9 05:34:52.275509 systemd[1]: Started sshd@26-10.0.0.89:22-10.0.0.1:33398.service - OpenSSH per-connection server daemon (10.0.0.1:33398). Sep 9 05:34:52.277496 systemd-logind[1585]: Removed session 26. Sep 9 05:34:52.288708 systemd[1]: Created slice kubepods-burstable-pod43a3cc48_3eee_4fde_b2f0_a0d7dd0415d0.slice - libcontainer container kubepods-burstable-pod43a3cc48_3eee_4fde_b2f0_a0d7dd0415d0.slice. Sep 9 05:34:52.334644 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 33398 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:52.336394 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:52.340871 systemd-logind[1585]: New session 27 of user core. 
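The var-lib-kubelet-pods-…\x2d… mount units deactivated a few entries earlier and the "Cleaned up orphaned pod volumes dir" paths here are the same locations in two spellings: systemd derives a mount unit name from the mount point by trimming the leading slash, turning "/" into "-" and escaping other special bytes as \xNN. A simplified sketch of that escaping, covering only the ASCII cases visible in this log (systemd-escape itself has more rules):

package main

import (
	"fmt"
	"strings"
)

// escapeMountUnit is a simplified stand-in for systemd's path escaping:
// leading/trailing slashes are trimmed, "/" becomes "-", and bytes outside
// [A-Za-z0-9_.] (plus a leading ".") are written as \xNN escapes.
func escapeMountUnit(path string) string {
	p := strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == '_', c == '.' && i != 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String() + ".mount"
}

func main() {
	fmt.Println(escapeMountUnit("/var/lib/kubelet/pods/d28da9bf-6dfb-48af-92a7-8e2058964ced/volumes/kubernetes.io~projected/kube-api-access-lvzhz"))
	// var-lib-kubelet-pods-d28da9bf\x2d6dfb\x2d48af\x2d92a7\x2d8e2058964ced-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlvzhz.mount
}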
Sep 9 05:34:52.346142 kubelet[2730]: I0909 05:34:52.346110 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-host-proc-sys-kernel\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346225 kubelet[2730]: I0909 05:34:52.346150 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-bpf-maps\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346225 kubelet[2730]: I0909 05:34:52.346189 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-cni-path\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346287 kubelet[2730]: I0909 05:34:52.346245 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-cilium-config-path\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346356 kubelet[2730]: I0909 05:34:52.346322 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-clustermesh-secrets\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346387 kubelet[2730]: I0909 05:34:52.346363 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-etc-cni-netd\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346413 kubelet[2730]: I0909 05:34:52.346386 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-hubble-tls\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346413 kubelet[2730]: I0909 05:34:52.346401 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-cilium-cgroup\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346460 kubelet[2730]: I0909 05:34:52.346414 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-lib-modules\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346460 kubelet[2730]: I0909 05:34:52.346430 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-xtables-lock\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346460 kubelet[2730]: I0909 05:34:52.346447 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-hostproc\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346520 kubelet[2730]: I0909 05:34:52.346465 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-cilium-ipsec-secrets\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346520 kubelet[2730]: I0909 05:34:52.346489 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-cilium-run\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346520 kubelet[2730]: I0909 05:34:52.346502 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-host-proc-sys-net\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.346520 kubelet[2730]: I0909 05:34:52.346517 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj4n2\" (UniqueName: \"kubernetes.io/projected/43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0-kube-api-access-cj4n2\") pod \"cilium-7n22m\" (UID: \"43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0\") " pod="kube-system/cilium-7n22m" Sep 9 05:34:52.350771 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 9 05:34:52.402086 sshd[4534]: Connection closed by 10.0.0.1 port 33398 Sep 9 05:34:52.402415 sshd-session[4531]: pam_unix(sshd:session): session closed for user core Sep 9 05:34:52.413319 systemd[1]: sshd@26-10.0.0.89:22-10.0.0.1:33398.service: Deactivated successfully. Sep 9 05:34:52.415305 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 05:34:52.416090 systemd-logind[1585]: Session 27 logged out. Waiting for processes to exit. Sep 9 05:34:52.419139 systemd[1]: Started sshd@27-10.0.0.89:22-10.0.0.1:33406.service - OpenSSH per-connection server daemon (10.0.0.1:33406). Sep 9 05:34:52.419848 systemd-logind[1585]: Removed session 27. Sep 9 05:34:52.476088 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 33406 ssh2: RSA SHA256:9+3J2aT7q2koLO1Rle2UX2pTYMxmV9eQF9r8rZDBoIg Sep 9 05:34:52.478115 sshd-session[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:34:52.482219 systemd-logind[1585]: New session 28 of user core. Sep 9 05:34:52.494763 systemd[1]: Started session-28.scope - Session 28 of User core. 
Sep 9 05:34:52.593084 kubelet[2730]: E0909 05:34:52.593034 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:34:52.593655 containerd[1604]: time="2025-09-09T05:34:52.593560340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7n22m,Uid:43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0,Namespace:kube-system,Attempt:0,}" Sep 9 05:34:52.685548 containerd[1604]: time="2025-09-09T05:34:52.685495161Z" level=info msg="connecting to shim 7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427" address="unix:///run/containerd/s/7e54d5afe797779bcafe2af462182b647331a49ece9f3d487bc9ea8d59237204" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:34:52.713893 systemd[1]: Started cri-containerd-7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427.scope - libcontainer container 7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427. Sep 9 05:34:52.738937 containerd[1604]: time="2025-09-09T05:34:52.738819205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7n22m,Uid:43a3cc48-3eee-4fde-b2f0-a0d7dd0415d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427\"" Sep 9 05:34:52.739752 kubelet[2730]: E0909 05:34:52.739725 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:34:52.744142 containerd[1604]: time="2025-09-09T05:34:52.744104324Z" level=info msg="CreateContainer within sandbox \"7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:34:52.751455 containerd[1604]: time="2025-09-09T05:34:52.751403965Z" level=info msg="Container 3fa7c90ac5e0443abe9c4e1ae176d28a499c1f4f75c044c7fa466da52be690f4: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:34:52.759903 containerd[1604]: time="2025-09-09T05:34:52.759856160Z" level=info msg="CreateContainer within sandbox \"7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3fa7c90ac5e0443abe9c4e1ae176d28a499c1f4f75c044c7fa466da52be690f4\"" Sep 9 05:34:52.760391 containerd[1604]: time="2025-09-09T05:34:52.760352168Z" level=info msg="StartContainer for \"3fa7c90ac5e0443abe9c4e1ae176d28a499c1f4f75c044c7fa466da52be690f4\"" Sep 9 05:34:52.761248 containerd[1604]: time="2025-09-09T05:34:52.761219097Z" level=info msg="connecting to shim 3fa7c90ac5e0443abe9c4e1ae176d28a499c1f4f75c044c7fa466da52be690f4" address="unix:///run/containerd/s/7e54d5afe797779bcafe2af462182b647331a49ece9f3d487bc9ea8d59237204" protocol=ttrpc version=3 Sep 9 05:34:52.782768 systemd[1]: Started cri-containerd-3fa7c90ac5e0443abe9c4e1ae176d28a499c1f4f75c044c7fa466da52be690f4.scope - libcontainer container 3fa7c90ac5e0443abe9c4e1ae176d28a499c1f4f75c044c7fa466da52be690f4. Sep 9 05:34:52.908083 containerd[1604]: time="2025-09-09T05:34:52.908026483Z" level=info msg="StartContainer for \"3fa7c90ac5e0443abe9c4e1ae176d28a499c1f4f75c044c7fa466da52be690f4\" returns successfully" Sep 9 05:34:52.919959 systemd[1]: cri-containerd-3fa7c90ac5e0443abe9c4e1ae176d28a499c1f4f75c044c7fa466da52be690f4.scope: Deactivated successfully. 
Sep 9 05:34:52.920832 containerd[1604]: time="2025-09-09T05:34:52.920795355Z" level=info msg="received exit event container_id:\"3fa7c90ac5e0443abe9c4e1ae176d28a499c1f4f75c044c7fa466da52be690f4\" id:\"3fa7c90ac5e0443abe9c4e1ae176d28a499c1f4f75c044c7fa466da52be690f4\" pid:4615 exited_at:{seconds:1757396092 nanos:920426199}" Sep 9 05:34:52.920912 containerd[1604]: time="2025-09-09T05:34:52.920816355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3fa7c90ac5e0443abe9c4e1ae176d28a499c1f4f75c044c7fa466da52be690f4\" id:\"3fa7c90ac5e0443abe9c4e1ae176d28a499c1f4f75c044c7fa466da52be690f4\" pid:4615 exited_at:{seconds:1757396092 nanos:920426199}" Sep 9 05:34:53.478243 kubelet[2730]: E0909 05:34:53.478196 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:34:53.479769 containerd[1604]: time="2025-09-09T05:34:53.479725644Z" level=info msg="CreateContainer within sandbox \"7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:34:53.784706 containerd[1604]: time="2025-09-09T05:34:53.784563246Z" level=info msg="Container cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:34:53.789254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3593515660.mount: Deactivated successfully. Sep 9 05:34:54.325043 containerd[1604]: time="2025-09-09T05:34:54.324982126Z" level=info msg="CreateContainer within sandbox \"7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82\"" Sep 9 05:34:54.325520 containerd[1604]: time="2025-09-09T05:34:54.325459609Z" level=info msg="StartContainer for \"cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82\"" Sep 9 05:34:54.326474 containerd[1604]: time="2025-09-09T05:34:54.326445392Z" level=info msg="connecting to shim cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82" address="unix:///run/containerd/s/7e54d5afe797779bcafe2af462182b647331a49ece9f3d487bc9ea8d59237204" protocol=ttrpc version=3 Sep 9 05:34:54.350775 systemd[1]: Started cri-containerd-cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82.scope - libcontainer container cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82. Sep 9 05:34:54.388019 systemd[1]: cri-containerd-cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82.scope: Deactivated successfully. 
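The TaskExit events record the exit time as raw epoch seconds and nanoseconds (seconds:1757396092 nanos:920426199 for the mount-cgroup init container above). Converting that pair back to a wall-clock time shows it is the same instant as the surrounding journal timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the TaskExit event above.
	t := time.Unix(1757396092, 920426199).UTC()
	fmt.Println(t.Format(time.RFC3339Nano)) // 2025-09-09T05:34:52.920426199Z, i.e. Sep 9 05:34:52.920 in the journal
}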
Sep 9 05:34:54.389054 containerd[1604]: time="2025-09-09T05:34:54.389006033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82\" id:\"cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82\" pid:4660 exited_at:{seconds:1757396094 nanos:388385928}" Sep 9 05:34:54.598659 containerd[1604]: time="2025-09-09T05:34:54.598578410Z" level=info msg="received exit event container_id:\"cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82\" id:\"cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82\" pid:4660 exited_at:{seconds:1757396094 nanos:388385928}" Sep 9 05:34:54.599419 containerd[1604]: time="2025-09-09T05:34:54.599368389Z" level=info msg="StartContainer for \"cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82\" returns successfully" Sep 9 05:34:54.617075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbc1ea16a4154febba4ae718d2d9425a981b8cefde79fe0252e19b502eafed82-rootfs.mount: Deactivated successfully. Sep 9 05:34:55.605424 kubelet[2730]: E0909 05:34:55.605388 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:34:55.607332 containerd[1604]: time="2025-09-09T05:34:55.607277690Z" level=info msg="CreateContainer within sandbox \"7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 05:34:55.630586 containerd[1604]: time="2025-09-09T05:34:55.630543070Z" level=info msg="Container 3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:34:55.641173 containerd[1604]: time="2025-09-09T05:34:55.641119798Z" level=info msg="CreateContainer within sandbox \"7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8\"" Sep 9 05:34:55.641606 containerd[1604]: time="2025-09-09T05:34:55.641557694Z" level=info msg="StartContainer for \"3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8\"" Sep 9 05:34:55.643002 containerd[1604]: time="2025-09-09T05:34:55.642978979Z" level=info msg="connecting to shim 3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8" address="unix:///run/containerd/s/7e54d5afe797779bcafe2af462182b647331a49ece9f3d487bc9ea8d59237204" protocol=ttrpc version=3 Sep 9 05:34:55.665777 systemd[1]: Started cri-containerd-3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8.scope - libcontainer container 3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8. Sep 9 05:34:55.703492 systemd[1]: cri-containerd-3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8.scope: Deactivated successfully. 
Sep 9 05:34:55.704876 containerd[1604]: time="2025-09-09T05:34:55.704846739Z" level=info msg="StartContainer for \"3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8\" returns successfully" Sep 9 05:34:55.705361 containerd[1604]: time="2025-09-09T05:34:55.705323741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8\" id:\"3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8\" pid:4704 exited_at:{seconds:1757396095 nanos:705097288}" Sep 9 05:34:55.705361 containerd[1604]: time="2025-09-09T05:34:55.705341645Z" level=info msg="received exit event container_id:\"3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8\" id:\"3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8\" pid:4704 exited_at:{seconds:1757396095 nanos:705097288}" Sep 9 05:34:55.724456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c583f0439eb6425c6c735618390eafc4835ebceab6c96fd556e7bcd37bd19e8-rootfs.mount: Deactivated successfully. Sep 9 05:34:56.257055 kubelet[2730]: E0909 05:34:56.257018 2730 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 05:34:56.609166 kubelet[2730]: E0909 05:34:56.609134 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:34:56.610709 containerd[1604]: time="2025-09-09T05:34:56.610670809Z" level=info msg="CreateContainer within sandbox \"7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 05:34:56.619662 containerd[1604]: time="2025-09-09T05:34:56.619344395Z" level=info msg="Container e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:34:56.627978 containerd[1604]: time="2025-09-09T05:34:56.627930906Z" level=info msg="CreateContainer within sandbox \"7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e\"" Sep 9 05:34:56.628362 containerd[1604]: time="2025-09-09T05:34:56.628344265Z" level=info msg="StartContainer for \"e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e\"" Sep 9 05:34:56.629184 containerd[1604]: time="2025-09-09T05:34:56.629159732Z" level=info msg="connecting to shim e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e" address="unix:///run/containerd/s/7e54d5afe797779bcafe2af462182b647331a49ece9f3d487bc9ea8d59237204" protocol=ttrpc version=3 Sep 9 05:34:56.649784 systemd[1]: Started cri-containerd-e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e.scope - libcontainer container e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e. Sep 9 05:34:56.674574 systemd[1]: cri-containerd-e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e.scope: Deactivated successfully. 
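The recurring "Nameserver limits exceeded" warnings come from kubelet clamping a pod's resolv.conf to at most three nameservers; everything past the third is dropped, and the applied line (1.1.1.1 1.0.0.1 8.8.8.8) is what the warning reports. A minimal sketch of that clamping; the fourth host nameserver below is invented purely for the example:

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the Kubernetes per-pod limit of three nameservers.
const maxNameservers = 3

// applyNameserverLimit keeps only the first three nameservers and reports
// whether anything was dropped, which is what triggers the warning above.
func applyNameserverLimit(ns []string) (applied []string, exceeded bool) {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers], true
	}
	return ns, false
}

func main() {
	// Hypothetical host resolv.conf; only the extra fourth entry is invented here.
	applied, exceeded := applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"})
	if exceeded {
		fmt.Println("Nameserver limits exceeded, the applied nameserver line is:", strings.Join(applied, " "))
	}
}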
Sep 9 05:34:56.675845 containerd[1604]: time="2025-09-09T05:34:56.674966810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e\" id:\"e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e\" pid:4743 exited_at:{seconds:1757396096 nanos:674758452}" Sep 9 05:34:56.676531 containerd[1604]: time="2025-09-09T05:34:56.676443318Z" level=info msg="received exit event container_id:\"e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e\" id:\"e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e\" pid:4743 exited_at:{seconds:1757396096 nanos:674758452}" Sep 9 05:34:56.683802 containerd[1604]: time="2025-09-09T05:34:56.683770726Z" level=info msg="StartContainer for \"e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e\" returns successfully" Sep 9 05:34:56.695302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7a5a9d8cec8947a85554bd31b49cb9d02c36c6a6b213f052c6b27b763d65c5e-rootfs.mount: Deactivated successfully. Sep 9 05:34:57.615060 kubelet[2730]: E0909 05:34:57.615015 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:34:57.617289 containerd[1604]: time="2025-09-09T05:34:57.616840194Z" level=info msg="CreateContainer within sandbox \"7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 05:34:57.635848 containerd[1604]: time="2025-09-09T05:34:57.635798427Z" level=info msg="Container 610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:34:57.636835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682811887.mount: Deactivated successfully. Sep 9 05:34:57.646995 containerd[1604]: time="2025-09-09T05:34:57.646951667Z" level=info msg="CreateContainer within sandbox \"7c1d5ddf2f1c2b71579cb01ec0514ec2ecd0dddab92c56bbadf3af9a863e6427\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410\"" Sep 9 05:34:57.647505 containerd[1604]: time="2025-09-09T05:34:57.647450669Z" level=info msg="StartContainer for \"610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410\"" Sep 9 05:34:57.648378 containerd[1604]: time="2025-09-09T05:34:57.648352650Z" level=info msg="connecting to shim 610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410" address="unix:///run/containerd/s/7e54d5afe797779bcafe2af462182b647331a49ece9f3d487bc9ea8d59237204" protocol=ttrpc version=3 Sep 9 05:34:57.674794 systemd[1]: Started cri-containerd-610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410.scope - libcontainer container 610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410. 
Sep 9 05:34:57.849599 containerd[1604]: time="2025-09-09T05:34:57.849558308Z" level=info msg="StartContainer for \"610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410\" returns successfully" Sep 9 05:34:57.905057 containerd[1604]: time="2025-09-09T05:34:57.904957900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410\" id:\"e137b301f8867afe152055a45741f8e644a8813a0b283e044b6a78196a1323cd\" pid:4817 exited_at:{seconds:1757396097 nanos:904683627}" Sep 9 05:34:58.146658 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 9 05:34:58.290458 kubelet[2730]: I0909 05:34:58.290323 2730 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T05:34:58Z","lastTransitionTime":"2025-09-09T05:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 05:34:58.623908 kubelet[2730]: E0909 05:34:58.623854 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:34:58.996966 kubelet[2730]: I0909 05:34:58.996756 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7n22m" podStartSLOduration=6.996740314 podStartE2EDuration="6.996740314s" podCreationTimestamp="2025-09-09 05:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:34:58.996489454 +0000 UTC m=+92.877286895" watchObservedRunningTime="2025-09-09 05:34:58.996740314 +0000 UTC m=+92.877537755" Sep 9 05:34:59.146154 containerd[1604]: time="2025-09-09T05:34:59.146109007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410\" id:\"7aec1ac2055467d669760b9bf65bd91c8c11bff7b46a216335aa3acaf62daec0\" pid:4886 exit_status:1 exited_at:{seconds:1757396099 nanos:145865853}" Sep 9 05:34:59.625749 kubelet[2730]: E0909 05:34:59.625699 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:35:00.627308 kubelet[2730]: E0909 05:35:00.627268 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:35:01.212861 systemd-networkd[1485]: lxc_health: Link UP Sep 9 05:35:01.213143 systemd-networkd[1485]: lxc_health: Gained carrier Sep 9 05:35:01.270346 containerd[1604]: time="2025-09-09T05:35:01.270297278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410\" id:\"169488ef215c8ee9f23f2ac3f503dfef190654b66fe682c66bf41f9aaca5b5c4\" pid:5317 exit_status:1 exited_at:{seconds:1757396101 nanos:269450205}" Sep 9 05:35:01.272377 kubelet[2730]: E0909 05:35:01.272150 2730 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48748->127.0.0.1:35719: write tcp 127.0.0.1:48748->127.0.0.1:35719: write: broken pipe Sep 9 05:35:02.595615 kubelet[2730]: E0909 05:35:02.595568 2730 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:35:02.630554 kubelet[2730]: E0909 05:35:02.630512 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:35:02.909756 systemd-networkd[1485]: lxc_health: Gained IPv6LL Sep 9 05:35:03.357557 containerd[1604]: time="2025-09-09T05:35:03.357491576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410\" id:\"716260ec588c70819cbfb18e44671ea40c72b56b2f52a92c690ea5d5925639da\" pid:5373 exited_at:{seconds:1757396103 nanos:357015610}" Sep 9 05:35:05.464468 containerd[1604]: time="2025-09-09T05:35:05.464415559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410\" id:\"49515b5937d30e1fc0d73cd1d0eb918667763b037ec4f9e727dc324e67ebef65\" pid:5403 exited_at:{seconds:1757396105 nanos:464146927}" Sep 9 05:35:07.591198 containerd[1604]: time="2025-09-09T05:35:07.591156374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"610616c39fb78598b46224f29066ab0f513c79b3dfce419db4dff2d4bfaf1410\" id:\"27a31798d3a14520fd92fba4cca0596965aaa46010eef0c8e979e38c66128446\" pid:5433 exited_at:{seconds:1757396107 nanos:590790589}" Sep 9 05:35:07.596760 sshd[4549]: Connection closed by 10.0.0.1 port 33406 Sep 9 05:35:07.597187 sshd-session[4541]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:07.601047 systemd[1]: sshd@27-10.0.0.89:22-10.0.0.1:33406.service: Deactivated successfully. Sep 9 05:35:07.603191 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 05:35:07.604092 systemd-logind[1585]: Session 28 logged out. Waiting for processes to exit. Sep 9 05:35:07.605277 systemd-logind[1585]: Removed session 28. Sep 9 05:35:09.199491 kubelet[2730]: E0909 05:35:09.199436 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"