Oct 30 13:26:31.449417 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 11:31:03 -00 2025 Oct 30 13:26:31.449457 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9059fc71bb508d9916e045ba086d15ed58da6c6a917da2fc328a48e57682d73b Oct 30 13:26:31.449467 kernel: BIOS-provided physical RAM map: Oct 30 13:26:31.449482 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 30 13:26:31.449488 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 30 13:26:31.449495 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 30 13:26:31.449504 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 30 13:26:31.449511 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 30 13:26:31.449520 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Oct 30 13:26:31.449527 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Oct 30 13:26:31.449534 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Oct 30 13:26:31.449548 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Oct 30 13:26:31.449555 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Oct 30 13:26:31.449562 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Oct 30 13:26:31.449571 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Oct 30 13:26:31.449578 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 30 13:26:31.449595 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Oct 30 13:26:31.449603 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Oct 30 13:26:31.449610 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Oct 30 13:26:31.449618 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Oct 30 13:26:31.449625 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Oct 30 13:26:31.449633 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 30 13:26:31.449640 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 30 13:26:31.449648 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 30 13:26:31.449655 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Oct 30 13:26:31.449662 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 30 13:26:31.449676 kernel: NX (Execute Disable) protection: active Oct 30 13:26:31.449684 kernel: APIC: Static calls initialized Oct 30 13:26:31.449692 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Oct 30 13:26:31.449700 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Oct 30 13:26:31.449707 kernel: extended physical RAM map: Oct 30 13:26:31.449715 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 30 13:26:31.449722 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 30 13:26:31.449730 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 30 13:26:31.449737 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Oct 30 13:26:31.449745 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 30 13:26:31.449752 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Oct 30 13:26:31.449784 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Oct 30 13:26:31.449792 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Oct 30 13:26:31.449800 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Oct 30 13:26:31.449815 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Oct 30 13:26:31.449829 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Oct 30 13:26:31.449837 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Oct 30 13:26:31.449845 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Oct 30 13:26:31.449853 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Oct 30 13:26:31.449860 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Oct 30 13:26:31.449868 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Oct 30 13:26:31.449876 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 30 13:26:31.449884 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Oct 30 13:26:31.449891 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Oct 30 13:26:31.449906 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Oct 30 13:26:31.449914 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Oct 30 13:26:31.449921 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Oct 30 13:26:31.449929 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 30 13:26:31.449937 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 30 13:26:31.449944 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 30 13:26:31.449952 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Oct 30 13:26:31.449960 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 30 13:26:31.449969 kernel: efi: EFI v2.7 by EDK II Oct 30 13:26:31.449977 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Oct 30 13:26:31.449993 kernel: random: crng init done Oct 30 13:26:31.450010 kernel: efi: Remove mem150: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Oct 30 13:26:31.450018 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Oct 30 13:26:31.450028 kernel: secureboot: Secure boot disabled Oct 30 13:26:31.450036 kernel: SMBIOS 2.8 present. 
Oct 30 13:26:31.450043 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Oct 30 13:26:31.450051 kernel: DMI: Memory slots populated: 1/1 Oct 30 13:26:31.450059 kernel: Hypervisor detected: KVM Oct 30 13:26:31.450066 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Oct 30 13:26:31.450074 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 30 13:26:31.450082 kernel: kvm-clock: using sched offset of 5274432336 cycles Oct 30 13:26:31.450090 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 30 13:26:31.450105 kernel: tsc: Detected 2794.750 MHz processor Oct 30 13:26:31.450113 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 30 13:26:31.450122 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 30 13:26:31.450130 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Oct 30 13:26:31.450138 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 30 13:26:31.450146 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 30 13:26:31.450154 kernel: Using GB pages for direct mapping Oct 30 13:26:31.450168 kernel: ACPI: Early table checksum verification disabled Oct 30 13:26:31.450177 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 30 13:26:31.450185 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 30 13:26:31.450193 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 13:26:31.450201 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 13:26:31.450209 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 30 13:26:31.450217 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 13:26:31.450231 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 13:26:31.450240 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 13:26:31.450248 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 30 13:26:31.450256 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 30 13:26:31.450264 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 30 13:26:31.450272 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Oct 30 13:26:31.450292 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 30 13:26:31.450309 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 30 13:26:31.450317 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 30 13:26:31.450325 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 30 13:26:31.450332 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 30 13:26:31.450341 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 30 13:26:31.450349 kernel: No NUMA configuration found Oct 30 13:26:31.450357 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Oct 30 13:26:31.450365 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Oct 30 13:26:31.450380 kernel: Zone ranges: Oct 30 13:26:31.450388 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 30 13:26:31.450396 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Oct 30 13:26:31.450404 kernel: Normal empty Oct 30 13:26:31.450412 kernel: Device empty Oct 30 
13:26:31.450420 kernel: Movable zone start for each node Oct 30 13:26:31.450428 kernel: Early memory node ranges Oct 30 13:26:31.450436 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 30 13:26:31.450453 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 30 13:26:31.450461 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 30 13:26:31.450469 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Oct 30 13:26:31.450477 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Oct 30 13:26:31.450485 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Oct 30 13:26:31.450493 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Oct 30 13:26:31.450501 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Oct 30 13:26:31.450517 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Oct 30 13:26:31.450526 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 30 13:26:31.450553 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 30 13:26:31.450568 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 30 13:26:31.450576 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 30 13:26:31.450584 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Oct 30 13:26:31.450592 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Oct 30 13:26:31.450600 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Oct 30 13:26:31.450609 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Oct 30 13:26:31.450617 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Oct 30 13:26:31.450633 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 30 13:26:31.450641 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 30 13:26:31.450650 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 30 13:26:31.450658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 30 13:26:31.450673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 30 13:26:31.450681 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 30 13:26:31.450690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 30 13:26:31.450699 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 30 13:26:31.450710 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 30 13:26:31.450721 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 30 13:26:31.450732 kernel: TSC deadline timer available Oct 30 13:26:31.450752 kernel: CPU topo: Max. logical packages: 1 Oct 30 13:26:31.450764 kernel: CPU topo: Max. logical dies: 1 Oct 30 13:26:31.450775 kernel: CPU topo: Max. dies per package: 1 Oct 30 13:26:31.450786 kernel: CPU topo: Max. threads per core: 1 Oct 30 13:26:31.450797 kernel: CPU topo: Num. cores per package: 4 Oct 30 13:26:31.450805 kernel: CPU topo: Num. 
threads per package: 4 Oct 30 13:26:31.450813 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 30 13:26:31.450823 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 30 13:26:31.450845 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 30 13:26:31.450857 kernel: kvm-guest: setup PV sched yield Oct 30 13:26:31.450868 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Oct 30 13:26:31.450880 kernel: Booting paravirtualized kernel on KVM Oct 30 13:26:31.450889 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 30 13:26:31.450897 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 30 13:26:31.450906 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 30 13:26:31.450926 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 30 13:26:31.450938 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 30 13:26:31.450949 kernel: kvm-guest: PV spinlocks enabled Oct 30 13:26:31.450960 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 30 13:26:31.450976 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9059fc71bb508d9916e045ba086d15ed58da6c6a917da2fc328a48e57682d73b Oct 30 13:26:31.450996 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 30 13:26:31.451025 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 30 13:26:31.451036 kernel: Fallback order for Node 0: 0 Oct 30 13:26:31.451047 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Oct 30 13:26:31.451058 kernel: Policy zone: DMA32 Oct 30 13:26:31.451070 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 30 13:26:31.451081 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 30 13:26:31.451092 kernel: ftrace: allocating 40092 entries in 157 pages Oct 30 13:26:31.451103 kernel: ftrace: allocated 157 pages with 5 groups Oct 30 13:26:31.451125 kernel: Dynamic Preempt: voluntary Oct 30 13:26:31.451136 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 30 13:26:31.451148 kernel: rcu: RCU event tracing is enabled. Oct 30 13:26:31.451160 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 30 13:26:31.451171 kernel: Trampoline variant of Tasks RCU enabled. Oct 30 13:26:31.451182 kernel: Rude variant of Tasks RCU enabled. Oct 30 13:26:31.451192 kernel: Tracing variant of Tasks RCU enabled. Oct 30 13:26:31.451208 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 30 13:26:31.451217 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 30 13:26:31.451227 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 30 13:26:31.451236 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 30 13:26:31.451248 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 30 13:26:31.451259 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 30 13:26:31.451271 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 30 13:26:31.451311 kernel: Console: colour dummy device 80x25 Oct 30 13:26:31.451323 kernel: printk: legacy console [ttyS0] enabled Oct 30 13:26:31.451334 kernel: ACPI: Core revision 20240827 Oct 30 13:26:31.451346 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 30 13:26:31.451358 kernel: APIC: Switch to symmetric I/O mode setup Oct 30 13:26:31.451368 kernel: x2apic enabled Oct 30 13:26:31.451380 kernel: APIC: Switched APIC routing to: physical x2apic Oct 30 13:26:31.451404 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 30 13:26:31.451416 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 30 13:26:31.451427 kernel: kvm-guest: setup PV IPIs Oct 30 13:26:31.451438 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 30 13:26:31.451450 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Oct 30 13:26:31.451461 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Oct 30 13:26:31.451472 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 30 13:26:31.451492 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 30 13:26:31.451503 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 30 13:26:31.451514 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 30 13:26:31.451525 kernel: Spectre V2 : Mitigation: Retpolines Oct 30 13:26:31.451536 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 30 13:26:31.451547 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 30 13:26:31.451558 kernel: active return thunk: retbleed_return_thunk Oct 30 13:26:31.451576 kernel: RETBleed: Mitigation: untrained return thunk Oct 30 13:26:31.451587 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 30 13:26:31.451595 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 30 13:26:31.451604 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 30 13:26:31.451613 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 30 13:26:31.451621 kernel: active return thunk: srso_return_thunk Oct 30 13:26:31.451630 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 30 13:26:31.451646 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 30 13:26:31.451654 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 30 13:26:31.451662 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 30 13:26:31.451671 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 30 13:26:31.451679 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 30 13:26:31.451687 kernel: Freeing SMP alternatives memory: 32K Oct 30 13:26:31.451696 kernel: pid_max: default: 32768 minimum: 301 Oct 30 13:26:31.451711 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 30 13:26:31.451719 kernel: landlock: Up and running. Oct 30 13:26:31.451727 kernel: SELinux: Initializing. 
Oct 30 13:26:31.451736 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 30 13:26:31.451744 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 30 13:26:31.451753 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 30 13:26:31.451761 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 30 13:26:31.451776 kernel: ... version: 0 Oct 30 13:26:31.451785 kernel: ... bit width: 48 Oct 30 13:26:31.451793 kernel: ... generic registers: 6 Oct 30 13:26:31.451801 kernel: ... value mask: 0000ffffffffffff Oct 30 13:26:31.451809 kernel: ... max period: 00007fffffffffff Oct 30 13:26:31.451818 kernel: ... fixed-purpose events: 0 Oct 30 13:26:31.451826 kernel: ... event mask: 000000000000003f Oct 30 13:26:31.451841 kernel: signal: max sigframe size: 1776 Oct 30 13:26:31.451850 kernel: rcu: Hierarchical SRCU implementation. Oct 30 13:26:31.451859 kernel: rcu: Max phase no-delay instances is 400. Oct 30 13:26:31.451870 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 30 13:26:31.451878 kernel: smp: Bringing up secondary CPUs ... Oct 30 13:26:31.451887 kernel: smpboot: x86: Booting SMP configuration: Oct 30 13:26:31.451895 kernel: .... node #0, CPUs: #1 #2 #3 Oct 30 13:26:31.451906 kernel: smp: Brought up 1 node, 4 CPUs Oct 30 13:26:31.451923 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Oct 30 13:26:31.451933 kernel: Memory: 2441100K/2565800K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15964K init, 2080K bss, 118764K reserved, 0K cma-reserved) Oct 30 13:26:31.451941 kernel: devtmpfs: initialized Oct 30 13:26:31.451950 kernel: x86/mm: Memory block size: 128MB Oct 30 13:26:31.451958 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 30 13:26:31.451967 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 30 13:26:31.451975 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Oct 30 13:26:31.451998 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 30 13:26:31.452006 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Oct 30 13:26:31.452015 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 30 13:26:31.452024 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 30 13:26:31.452032 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 30 13:26:31.452040 kernel: pinctrl core: initialized pinctrl subsystem Oct 30 13:26:31.452049 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 30 13:26:31.452064 kernel: audit: initializing netlink subsys (disabled) Oct 30 13:26:31.452073 kernel: audit: type=2000 audit(1761830789.215:1): state=initialized audit_enabled=0 res=1 Oct 30 13:26:31.452081 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 30 13:26:31.452089 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 30 13:26:31.452098 kernel: cpuidle: using governor menu Oct 30 13:26:31.452106 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 30 13:26:31.452114 kernel: dca service started, version 1.12.1 Oct 30 13:26:31.452130 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Oct 30 13:26:31.452138 kernel: PCI: 
Using configuration type 1 for base access Oct 30 13:26:31.452146 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 30 13:26:31.452155 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 30 13:26:31.452163 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 30 13:26:31.452171 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 30 13:26:31.452180 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 30 13:26:31.452195 kernel: ACPI: Added _OSI(Module Device) Oct 30 13:26:31.452204 kernel: ACPI: Added _OSI(Processor Device) Oct 30 13:26:31.452212 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 30 13:26:31.452220 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 30 13:26:31.452229 kernel: ACPI: Interpreter enabled Oct 30 13:26:31.452237 kernel: ACPI: PM: (supports S0 S3 S5) Oct 30 13:26:31.452245 kernel: ACPI: Using IOAPIC for interrupt routing Oct 30 13:26:31.452261 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 30 13:26:31.452269 kernel: PCI: Using E820 reservations for host bridge windows Oct 30 13:26:31.452278 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 30 13:26:31.452314 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 30 13:26:31.452576 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 30 13:26:31.452784 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 30 13:26:31.453016 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 30 13:26:31.453029 kernel: PCI host bridge to bus 0000:00 Oct 30 13:26:31.453218 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 30 13:26:31.453436 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 30 13:26:31.453650 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 30 13:26:31.453881 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Oct 30 13:26:31.454118 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Oct 30 13:26:31.454340 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Oct 30 13:26:31.454520 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 30 13:26:31.454763 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 30 13:26:31.454954 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 30 13:26:31.455157 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Oct 30 13:26:31.455379 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Oct 30 13:26:31.455570 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Oct 30 13:26:31.455744 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 30 13:26:31.455929 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 30 13:26:31.456115 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Oct 30 13:26:31.456409 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Oct 30 13:26:31.456584 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Oct 30 13:26:31.456765 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 30 13:26:31.456944 kernel: pci 0000:00:03.0: BAR 0 [io 
0x6000-0x607f] Oct 30 13:26:31.457184 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Oct 30 13:26:31.457432 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Oct 30 13:26:31.457639 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 30 13:26:31.457814 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Oct 30 13:26:31.458004 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Oct 30 13:26:31.458181 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Oct 30 13:26:31.458424 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Oct 30 13:26:31.458682 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 30 13:26:31.458864 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 30 13:26:31.459070 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 30 13:26:31.459247 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Oct 30 13:26:31.459448 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Oct 30 13:26:31.459643 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 30 13:26:31.459843 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Oct 30 13:26:31.459856 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 30 13:26:31.459865 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 30 13:26:31.459873 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 30 13:26:31.459882 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 30 13:26:31.459890 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 30 13:26:31.459911 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 30 13:26:31.459919 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 30 13:26:31.459928 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 30 13:26:31.459937 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 30 13:26:31.459945 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 30 13:26:31.459954 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 30 13:26:31.459962 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 30 13:26:31.459978 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 30 13:26:31.459994 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 30 13:26:31.460003 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 30 13:26:31.460013 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 30 13:26:31.460021 kernel: iommu: Default domain type: Translated Oct 30 13:26:31.460030 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 30 13:26:31.460039 kernel: efivars: Registered efivars operations Oct 30 13:26:31.460054 kernel: PCI: Using ACPI for IRQ routing Oct 30 13:26:31.460063 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 30 13:26:31.460072 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 30 13:26:31.460080 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Oct 30 13:26:31.460089 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Oct 30 13:26:31.460097 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Oct 30 13:26:31.460105 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Oct 30 13:26:31.460114 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Oct 30 13:26:31.460130 
kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Oct 30 13:26:31.460138 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Oct 30 13:26:31.460331 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 30 13:26:31.460508 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 30 13:26:31.460682 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 30 13:26:31.460694 kernel: vgaarb: loaded Oct 30 13:26:31.460715 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 30 13:26:31.460724 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 30 13:26:31.460736 kernel: clocksource: Switched to clocksource kvm-clock Oct 30 13:26:31.460747 kernel: VFS: Disk quotas dquot_6.6.0 Oct 30 13:26:31.460756 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 30 13:26:31.460765 kernel: pnp: PnP ACPI init Oct 30 13:26:31.461034 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Oct 30 13:26:31.461069 kernel: pnp: PnP ACPI: found 6 devices Oct 30 13:26:31.461079 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 30 13:26:31.461088 kernel: NET: Registered PF_INET protocol family Oct 30 13:26:31.461097 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 30 13:26:31.461106 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 30 13:26:31.461115 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 30 13:26:31.461134 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 30 13:26:31.461143 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 30 13:26:31.461152 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 30 13:26:31.461161 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 30 13:26:31.461170 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 30 13:26:31.461179 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 30 13:26:31.461188 kernel: NET: Registered PF_XDP protocol family Oct 30 13:26:31.461401 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Oct 30 13:26:31.461588 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Oct 30 13:26:31.461762 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 30 13:26:31.461927 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 30 13:26:31.462102 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 30 13:26:31.462263 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Oct 30 13:26:31.462442 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Oct 30 13:26:31.462687 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Oct 30 13:26:31.462699 kernel: PCI: CLS 0 bytes, default 64 Oct 30 13:26:31.462710 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Oct 30 13:26:31.462729 kernel: Initialise system trusted keyrings Oct 30 13:26:31.462746 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 30 13:26:31.462758 kernel: Key type asymmetric registered Oct 30 13:26:31.462769 kernel: Asymmetric key parser 'x509' registered Oct 30 13:26:31.462781 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 250) Oct 30 13:26:31.462792 kernel: io scheduler mq-deadline registered Oct 30 13:26:31.462801 kernel: io scheduler kyber registered Oct 30 13:26:31.462809 kernel: io scheduler bfq registered Oct 30 13:26:31.462827 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 30 13:26:31.462837 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 30 13:26:31.462846 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 30 13:26:31.462855 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 30 13:26:31.462864 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 30 13:26:31.462873 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 30 13:26:31.462882 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 30 13:26:31.462898 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 30 13:26:31.462908 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 30 13:26:31.463104 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 30 13:26:31.463118 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 30 13:26:31.463301 kernel: rtc_cmos 00:04: registered as rtc0 Oct 30 13:26:31.463518 kernel: rtc_cmos 00:04: setting system clock to 2025-10-30T13:26:29 UTC (1761830789) Oct 30 13:26:31.463737 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 30 13:26:31.463751 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 30 13:26:31.463761 kernel: efifb: probing for efifb Oct 30 13:26:31.463770 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 30 13:26:31.463778 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 30 13:26:31.463787 kernel: efifb: scrolling: redraw Oct 30 13:26:31.463796 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 30 13:26:31.463816 kernel: Console: switching to colour frame buffer device 160x50 Oct 30 13:26:31.463825 kernel: fb0: EFI VGA frame buffer device Oct 30 13:26:31.463834 kernel: pstore: Using crash dump compression: deflate Oct 30 13:26:31.463843 kernel: pstore: Registered efi_pstore as persistent store backend Oct 30 13:26:31.463852 kernel: NET: Registered PF_INET6 protocol family Oct 30 13:26:31.463861 kernel: Segment Routing with IPv6 Oct 30 13:26:31.463870 kernel: In-situ OAM (IOAM) with IPv6 Oct 30 13:26:31.463879 kernel: NET: Registered PF_PACKET protocol family Oct 30 13:26:31.463895 kernel: Key type dns_resolver registered Oct 30 13:26:31.463904 kernel: IPI shorthand broadcast: enabled Oct 30 13:26:31.463913 kernel: sched_clock: Marking stable (1533003451, 284698840)->(1954934442, -137232151) Oct 30 13:26:31.463922 kernel: registered taskstats version 1 Oct 30 13:26:31.463932 kernel: Loading compiled-in X.509 certificates Oct 30 13:26:31.463941 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 94f1b718c5ca9e16ea420e725d7bfe648cbb4295' Oct 30 13:26:31.463949 kernel: Demotion targets for Node 0: null Oct 30 13:26:31.463966 kernel: Key type .fscrypt registered Oct 30 13:26:31.463974 kernel: Key type fscrypt-provisioning registered Oct 30 13:26:31.463990 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 30 13:26:31.463999 kernel: ima: Allocated hash algorithm: sha1 Oct 30 13:26:31.464009 kernel: ima: No architecture policies found Oct 30 13:26:31.464018 kernel: clk: Disabling unused clocks Oct 30 13:26:31.464027 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 30 13:26:31.464042 kernel: Write protecting the kernel read-only data: 45056k Oct 30 13:26:31.464052 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Oct 30 13:26:31.464061 kernel: Run /init as init process Oct 30 13:26:31.464069 kernel: with arguments: Oct 30 13:26:31.464078 kernel: /init Oct 30 13:26:31.464087 kernel: with environment: Oct 30 13:26:31.464096 kernel: HOME=/ Oct 30 13:26:31.464111 kernel: TERM=linux Oct 30 13:26:31.464120 kernel: SCSI subsystem initialized Oct 30 13:26:31.464129 kernel: libata version 3.00 loaded. Oct 30 13:26:31.464329 kernel: ahci 0000:00:1f.2: version 3.0 Oct 30 13:26:31.464343 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 30 13:26:31.464532 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 30 13:26:31.464706 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 30 13:26:31.464895 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 30 13:26:31.465116 kernel: scsi host0: ahci Oct 30 13:26:31.465323 kernel: scsi host1: ahci Oct 30 13:26:31.465516 kernel: scsi host2: ahci Oct 30 13:26:31.465714 kernel: scsi host3: ahci Oct 30 13:26:31.465922 kernel: scsi host4: ahci Oct 30 13:26:31.466123 kernel: scsi host5: ahci Oct 30 13:26:31.466137 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 30 13:26:31.466147 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 30 13:26:31.466166 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Oct 30 13:26:31.466175 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Oct 30 13:26:31.466191 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 30 13:26:31.466201 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 30 13:26:31.466210 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 30 13:26:31.466219 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 30 13:26:31.466228 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 30 13:26:31.466236 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 30 13:26:31.466253 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 30 13:26:31.466269 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 30 13:26:31.466301 kernel: ata3.00: LPM support broken, forcing max_power Oct 30 13:26:31.466310 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 30 13:26:31.466320 kernel: ata3.00: applying bridge limits Oct 30 13:26:31.466329 kernel: ata3.00: LPM support broken, forcing max_power Oct 30 13:26:31.466338 kernel: ata3.00: configured for UDMA/100 Oct 30 13:26:31.466554 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 30 13:26:31.466764 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 30 13:26:31.466939 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 30 13:26:31.466952 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 30 13:26:31.466962 kernel: GPT:16515071 != 27000831 Oct 30 13:26:31.466971 kernel: GPT:Alternate GPT header not at the end of the disk. 
Oct 30 13:26:31.466980 kernel: GPT:16515071 != 27000831 Oct 30 13:26:31.466997 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 30 13:26:31.467138 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 30 13:26:31.467350 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 30 13:26:31.467363 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 30 13:26:31.467556 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 30 13:26:31.467569 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 30 13:26:31.467578 kernel: device-mapper: uevent: version 1.0.3 Oct 30 13:26:31.467598 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 30 13:26:31.467607 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 30 13:26:31.467617 kernel: raid6: avx2x4 gen() 29563 MB/s Oct 30 13:26:31.467626 kernel: raid6: avx2x2 gen() 27952 MB/s Oct 30 13:26:31.467634 kernel: raid6: avx2x1 gen() 19177 MB/s Oct 30 13:26:31.467644 kernel: raid6: using algorithm avx2x4 gen() 29563 MB/s Oct 30 13:26:31.467653 kernel: raid6: .... xor() 7941 MB/s, rmw enabled Oct 30 13:26:31.467674 kernel: raid6: using avx2x2 recovery algorithm Oct 30 13:26:31.467683 kernel: xor: automatically using best checksumming function avx Oct 30 13:26:31.467692 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 30 13:26:31.467701 kernel: BTRFS: device fsid eda3d582-32f5-4286-9f04-debab6c64300 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (180) Oct 30 13:26:31.467710 kernel: BTRFS info (device dm-0): first mount of filesystem eda3d582-32f5-4286-9f04-debab6c64300 Oct 30 13:26:31.467719 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 30 13:26:31.467728 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 30 13:26:31.467744 kernel: BTRFS info (device dm-0): enabling free space tree Oct 30 13:26:31.467754 kernel: loop: module loaded Oct 30 13:26:31.467762 kernel: loop0: detected capacity change from 0 to 100136 Oct 30 13:26:31.467771 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 30 13:26:31.467781 systemd[1]: Successfully made /usr/ read-only. Oct 30 13:26:31.467794 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 30 13:26:31.467811 systemd[1]: Detected virtualization kvm. Oct 30 13:26:31.467821 systemd[1]: Detected architecture x86-64. Oct 30 13:26:31.467830 systemd[1]: Running in initrd. Oct 30 13:26:31.467839 systemd[1]: No hostname configured, using default hostname. Oct 30 13:26:31.467849 systemd[1]: Hostname set to . Oct 30 13:26:31.467858 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 30 13:26:31.467868 systemd[1]: Queued start job for default target initrd.target. Oct 30 13:26:31.467884 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 30 13:26:31.467893 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 13:26:31.467903 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 30 13:26:31.467913 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 30 13:26:31.467923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 13:26:31.467933 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 30 13:26:31.467950 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 30 13:26:31.467959 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 13:26:31.467969 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 13:26:31.467978 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 30 13:26:31.467995 systemd[1]: Reached target paths.target - Path Units. Oct 30 13:26:31.468004 systemd[1]: Reached target slices.target - Slice Units. Oct 30 13:26:31.468022 systemd[1]: Reached target swap.target - Swaps. Oct 30 13:26:31.468031 systemd[1]: Reached target timers.target - Timer Units. Oct 30 13:26:31.468041 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 13:26:31.468051 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 30 13:26:31.468061 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 30 13:26:31.468070 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 30 13:26:31.468080 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 30 13:26:31.468096 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 13:26:31.468105 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 13:26:31.468115 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 13:26:31.468125 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 30 13:26:31.468134 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 30 13:26:31.468144 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 13:26:31.468153 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 30 13:26:31.468171 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 30 13:26:31.468180 systemd[1]: Starting systemd-fsck-usr.service... Oct 30 13:26:31.468189 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 13:26:31.468199 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 13:26:31.468208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 13:26:31.468225 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 30 13:26:31.468235 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 13:26:31.468245 systemd[1]: Finished systemd-fsck-usr.service. Oct 30 13:26:31.468255 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 30 13:26:31.468314 systemd-journald[314]: Collecting audit messages is disabled. 
Oct 30 13:26:31.468344 systemd-journald[314]: Journal started Oct 30 13:26:31.468364 systemd-journald[314]: Runtime Journal (/run/log/journal/3c05884345da460b8926b664334f5163) is 6M, max 48.1M, 42.1M free. Oct 30 13:26:31.474403 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 13:26:31.474457 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 30 13:26:31.475695 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 30 13:26:31.480552 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 13:26:31.485776 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 13:26:31.488072 kernel: Bridge firewalling registered Oct 30 13:26:31.486090 systemd-modules-load[317]: Inserted module 'br_netfilter' Oct 30 13:26:31.492092 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 13:26:31.502659 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:26:31.511437 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 30 13:26:31.515683 systemd-tmpfiles[334]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 30 13:26:31.518756 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 13:26:31.529600 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 13:26:31.533854 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 13:26:31.552921 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 13:26:31.558562 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 13:26:31.564867 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 30 13:26:31.567484 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 13:26:31.606493 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9059fc71bb508d9916e045ba086d15ed58da6c6a917da2fc328a48e57682d73b Oct 30 13:26:31.642863 systemd-resolved[357]: Positive Trust Anchors: Oct 30 13:26:31.642889 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 13:26:31.642902 systemd-resolved[357]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 30 13:26:31.642960 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 13:26:31.716778 systemd-resolved[357]: Defaulting to hostname 'linux'. Oct 30 13:26:31.718741 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 13:26:31.722241 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 13:26:31.791322 kernel: Loading iSCSI transport class v2.0-870. Oct 30 13:26:31.816352 kernel: iscsi: registered transport (tcp) Oct 30 13:26:31.847630 kernel: iscsi: registered transport (qla4xxx) Oct 30 13:26:31.847726 kernel: QLogic iSCSI HBA Driver Oct 30 13:26:31.877791 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 30 13:26:31.911766 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 13:26:31.915942 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 30 13:26:32.030553 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 30 13:26:32.034158 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 30 13:26:32.036533 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 30 13:26:32.080045 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 30 13:26:32.083741 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 13:26:32.116183 systemd-udevd[600]: Using default interface naming scheme 'v257'. Oct 30 13:26:32.130847 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 13:26:32.136420 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 30 13:26:32.170199 dracut-pre-trigger[669]: rd.md=0: removing MD RAID activation Oct 30 13:26:32.174359 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 13:26:32.177987 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 13:26:32.207189 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 13:26:32.212562 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 30 13:26:32.248939 systemd-networkd[711]: lo: Link UP Oct 30 13:26:32.248957 systemd-networkd[711]: lo: Gained carrier Oct 30 13:26:32.250160 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 13:26:32.252124 systemd[1]: Reached target network.target - Network. Oct 30 13:26:32.339352 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 13:26:32.344658 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 30 13:26:32.434995 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Oct 30 13:26:32.466586 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 30 13:26:32.484911 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 30 13:26:32.496937 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 30 13:26:32.535643 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 30 13:26:32.541313 kernel: cryptd: max_cpu_qlen set to 1000 Oct 30 13:26:32.552005 systemd-networkd[711]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 13:26:32.552713 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 13:26:32.554114 systemd-networkd[711]: eth0: Link UP Oct 30 13:26:32.555182 systemd-networkd[711]: eth0: Gained carrier Oct 30 13:26:32.555192 systemd-networkd[711]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 13:26:32.568468 disk-uuid[777]: Primary Header is updated. Oct 30 13:26:32.568468 disk-uuid[777]: Secondary Entries is updated. Oct 30 13:26:32.568468 disk-uuid[777]: Secondary Header is updated. Oct 30 13:26:32.577563 kernel: AES CTR mode by8 optimization enabled Oct 30 13:26:32.555365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 13:26:32.555491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:26:32.556699 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 13:26:32.558326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 13:26:32.587369 systemd-networkd[711]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 30 13:26:32.601927 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 30 13:26:32.594008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 13:26:32.594141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:26:32.622480 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 13:26:32.679433 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:26:32.704032 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 30 13:26:32.706584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 13:26:32.709491 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 13:26:32.711378 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 13:26:32.716186 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 30 13:26:32.754771 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 30 13:26:32.941759 systemd-resolved[357]: Detected conflict on linux IN A 10.0.0.124 Oct 30 13:26:32.941778 systemd-resolved[357]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Oct 30 13:26:33.652960 disk-uuid[779]: Warning: The kernel is still using the old partition table. 
Oct 30 13:26:33.652960 disk-uuid[779]: The new table will be used at the next reboot or after you Oct 30 13:26:33.652960 disk-uuid[779]: run partprobe(8) or kpartx(8) Oct 30 13:26:33.652960 disk-uuid[779]: The operation has completed successfully. Oct 30 13:26:33.672394 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 30 13:26:33.673094 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 30 13:26:33.676919 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 30 13:26:33.722326 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (866) Oct 30 13:26:33.725460 systemd-networkd[711]: eth0: Gained IPv6LL Oct 30 13:26:33.727478 kernel: BTRFS info (device vda6): first mount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57 Oct 30 13:26:33.727508 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 13:26:33.729879 kernel: BTRFS info (device vda6): turning on async discard Oct 30 13:26:33.729901 kernel: BTRFS info (device vda6): enabling free space tree Oct 30 13:26:33.738313 kernel: BTRFS info (device vda6): last unmount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57 Oct 30 13:26:33.738960 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 30 13:26:33.742108 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 30 13:26:34.058115 ignition[885]: Ignition 2.22.0 Oct 30 13:26:34.058131 ignition[885]: Stage: fetch-offline Oct 30 13:26:34.058183 ignition[885]: no configs at "/usr/lib/ignition/base.d" Oct 30 13:26:34.058196 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:26:34.058356 ignition[885]: parsed url from cmdline: "" Oct 30 13:26:34.058360 ignition[885]: no config URL provided Oct 30 13:26:34.058368 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Oct 30 13:26:34.058379 ignition[885]: no config at "/usr/lib/ignition/user.ign" Oct 30 13:26:34.058431 ignition[885]: op(1): [started] loading QEMU firmware config module Oct 30 13:26:34.058436 ignition[885]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 30 13:26:34.070916 ignition[885]: op(1): [finished] loading QEMU firmware config module Oct 30 13:26:34.154233 ignition[885]: parsing config with SHA512: 76b13397f18aa2d59384f75557d0418dad2c0f20189a639a33bb7089af770bc99bdd12bdbef7c3d9b3f6c4f40633cbe92c7a1a94b9bc27db976ba2240862c181 Oct 30 13:26:34.162736 unknown[885]: fetched base config from "system" Oct 30 13:26:34.162750 unknown[885]: fetched user config from "qemu" Oct 30 13:26:34.163181 ignition[885]: fetch-offline: fetch-offline passed Oct 30 13:26:34.165619 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 13:26:34.163246 ignition[885]: Ignition finished successfully Oct 30 13:26:34.168234 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 30 13:26:34.169266 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
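The disk-uuid warning above notes that the kernel is still using the old partition table and points at partprobe(8) or kpartx(8). A minimal sketch of forcing a re-read without a reboot, assuming the same /dev/vda disk seen in this log:

    # Ask the kernel to re-read the partition table on the boot disk
    partprobe /dev/vda
    # Or refresh device-mapper partition mappings instead
    kpartx -u /dev/vda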
Oct 30 13:26:34.370899 ignition[895]: Ignition 2.22.0 Oct 30 13:26:34.370925 ignition[895]: Stage: kargs Oct 30 13:26:34.371211 ignition[895]: no configs at "/usr/lib/ignition/base.d" Oct 30 13:26:34.371222 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:26:34.378209 ignition[895]: kargs: kargs passed Oct 30 13:26:34.378339 ignition[895]: Ignition finished successfully Oct 30 13:26:34.384399 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 30 13:26:34.387854 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 30 13:26:34.494516 ignition[903]: Ignition 2.22.0 Oct 30 13:26:34.494529 ignition[903]: Stage: disks Oct 30 13:26:34.494745 ignition[903]: no configs at "/usr/lib/ignition/base.d" Oct 30 13:26:34.494756 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:26:34.496030 ignition[903]: disks: disks passed Oct 30 13:26:34.496080 ignition[903]: Ignition finished successfully Oct 30 13:26:34.504761 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 30 13:26:34.506001 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 30 13:26:34.508743 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 30 13:26:34.512272 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 13:26:34.516555 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 13:26:34.519813 systemd[1]: Reached target basic.target - Basic System. Oct 30 13:26:34.526534 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 30 13:26:34.599575 systemd-fsck[913]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 30 13:26:34.841651 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 30 13:26:34.845214 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 30 13:26:34.964327 kernel: EXT4-fs (vda9): mounted filesystem 6e47eb19-ed37-4e0f-85fc-4a1fde834fe4 r/w with ordered data mode. Quota mode: none. Oct 30 13:26:34.965313 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 30 13:26:34.966929 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 30 13:26:34.972047 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 13:26:34.975000 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 30 13:26:34.977033 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 30 13:26:34.977100 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 30 13:26:34.994355 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (921) Oct 30 13:26:34.994390 kernel: BTRFS info (device vda6): first mount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57 Oct 30 13:26:34.994407 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 13:26:34.977159 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 13:26:35.001081 kernel: BTRFS info (device vda6): turning on async discard Oct 30 13:26:35.001424 kernel: BTRFS info (device vda6): enabling free space tree Oct 30 13:26:34.985158 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
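The BTRFS messages for /dev/vda6 above ("turning on async discard", "enabling free space tree") correspond to mount options. A hedged sketch of an equivalent manual mount; the option mapping is an assumption based on standard btrfs behaviour, not something stated in the log:

    # discard=async   -> "turning on async discard"
    # space_cache=v2  -> "enabling free space tree"
    mount -t btrfs -o discard=async,space_cache=v2 /dev/vda6 /sysroot/oem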
Oct 30 13:26:34.995738 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 30 13:26:35.002487 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 30 13:26:35.118387 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory Oct 30 13:26:35.123258 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory Oct 30 13:26:35.128923 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory Oct 30 13:26:35.133777 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory Oct 30 13:26:35.273268 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 30 13:26:35.276667 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 30 13:26:35.279011 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 30 13:26:35.300231 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 30 13:26:35.302789 kernel: BTRFS info (device vda6): last unmount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57 Oct 30 13:26:35.316515 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 30 13:26:35.371613 ignition[1035]: INFO : Ignition 2.22.0 Oct 30 13:26:35.371613 ignition[1035]: INFO : Stage: mount Oct 30 13:26:35.374852 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 13:26:35.374852 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:26:35.378673 ignition[1035]: INFO : mount: mount passed Oct 30 13:26:35.378673 ignition[1035]: INFO : Ignition finished successfully Oct 30 13:26:35.384744 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 30 13:26:35.388133 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 30 13:26:35.968188 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 13:26:35.997923 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1048) Oct 30 13:26:35.997985 kernel: BTRFS info (device vda6): first mount of filesystem 11cbc96a-f6fd-4ff3-b9c8-ab48038f6d57 Oct 30 13:26:35.998018 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 13:26:36.003181 kernel: BTRFS info (device vda6): turning on async discard Oct 30 13:26:36.003212 kernel: BTRFS info (device vda6): enabling free space tree Oct 30 13:26:36.005068 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 30 13:26:36.060442 ignition[1065]: INFO : Ignition 2.22.0 Oct 30 13:26:36.060442 ignition[1065]: INFO : Stage: files Oct 30 13:26:36.063224 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 13:26:36.063224 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:26:36.063224 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping Oct 30 13:26:36.069685 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 30 13:26:36.069685 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 30 13:26:36.078236 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 30 13:26:36.080554 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 30 13:26:36.082886 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 30 13:26:36.081082 unknown[1065]: wrote ssh authorized keys file for user: core Oct 30 13:26:36.087380 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 30 13:26:36.087380 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Oct 30 13:26:36.231121 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 30 13:26:36.296721 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 30 13:26:36.300346 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 30 13:26:36.300346 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 30 13:26:36.572781 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 30 13:26:36.704616 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 30 13:26:36.707909 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 30 13:26:36.707909 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 30 13:26:36.707909 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 30 13:26:36.707909 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 30 13:26:36.707909 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 13:26:36.707909 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 13:26:36.707909 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 13:26:36.707909 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 
13:26:36.796687 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 13:26:36.800300 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 13:26:36.800300 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 30 13:26:36.800300 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 30 13:26:36.800300 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 30 13:26:36.816318 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Oct 30 13:26:37.218091 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 30 13:26:37.904247 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 30 13:26:37.904247 ignition[1065]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 30 13:26:37.909987 ignition[1065]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 13:26:38.175086 ignition[1065]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 13:26:38.175086 ignition[1065]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 30 13:26:38.175086 ignition[1065]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 30 13:26:38.175086 ignition[1065]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 30 13:26:38.189756 ignition[1065]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 30 13:26:38.189756 ignition[1065]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 30 13:26:38.189756 ignition[1065]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 30 13:26:38.205223 ignition[1065]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 30 13:26:38.237072 ignition[1065]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 30 13:26:38.237072 ignition[1065]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 30 13:26:38.237072 ignition[1065]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 30 13:26:38.237072 ignition[1065]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 30 13:26:38.237072 ignition[1065]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 30 13:26:38.237072 ignition[1065]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 30 13:26:38.237072 ignition[1065]: INFO : files: files passed Oct 30 13:26:38.237072 ignition[1065]: INFO : Ignition finished successfully Oct 30 13:26:38.255193 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 30 13:26:38.260247 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 30 13:26:38.264139 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 30 13:26:38.279486 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 30 13:26:38.280316 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 30 13:26:38.286164 initrd-setup-root-after-ignition[1096]: grep: /sysroot/oem/oem-release: No such file or directory Oct 30 13:26:38.290925 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 30 13:26:38.290925 initrd-setup-root-after-ignition[1098]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 30 13:26:38.296137 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 30 13:26:38.298579 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 30 13:26:38.299862 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 30 13:26:38.306125 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 30 13:26:38.361112 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 30 13:26:38.361253 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 30 13:26:38.399485 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 30 13:26:38.399905 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 30 13:26:38.409890 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 30 13:26:38.411067 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 30 13:26:38.449147 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 30 13:26:38.451720 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 30 13:26:38.480361 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 30 13:26:38.480516 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 30 13:26:38.484730 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 13:26:38.488841 systemd[1]: Stopped target timers.target - Timer Units. Oct 30 13:26:38.500042 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 30 13:26:38.500163 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 30 13:26:38.504731 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 30 13:26:38.508025 systemd[1]: Stopped target basic.target - Basic System. Oct 30 13:26:38.510898 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 30 13:26:38.513886 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 13:26:38.517250 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
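The Ignition files stage above wrote SSH keys for the core user, files such as the Helm and Cilium archives, and unit presets for prepare-helm.service and coreos-metadata.service. As a rough illustration of the kind of user config that drives those operations (an Ignition 3.x sketch with placeholder values, not the config actually served over QEMU fw_cfg):

    cat > user.ign <<'EOF'
    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz" }
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\nDescription=placeholder\n" }
        ]
      }
    }
    EOF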
Oct 30 13:26:38.518118 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 30 13:26:38.522918 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 30 13:26:38.526254 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 13:26:38.529346 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 30 13:26:38.533066 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 30 13:26:38.536128 systemd[1]: Stopped target swap.target - Swaps. Oct 30 13:26:38.539105 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 30 13:26:38.539229 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 30 13:26:38.544054 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 30 13:26:38.547268 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 13:26:38.548115 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 30 13:26:38.548272 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 13:26:38.602237 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 30 13:26:38.602386 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 30 13:26:38.609506 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 30 13:26:38.609630 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 13:26:38.613166 systemd[1]: Stopped target paths.target - Path Units. Oct 30 13:26:38.614059 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 30 13:26:38.617370 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 13:26:38.618882 systemd[1]: Stopped target slices.target - Slice Units. Oct 30 13:26:38.622805 systemd[1]: Stopped target sockets.target - Socket Units. Oct 30 13:26:38.625801 systemd[1]: iscsid.socket: Deactivated successfully. Oct 30 13:26:38.625908 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 13:26:38.629333 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 30 13:26:38.629421 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 30 13:26:38.633969 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 30 13:26:38.634085 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 30 13:26:38.636890 systemd[1]: ignition-files.service: Deactivated successfully. Oct 30 13:26:38.637002 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 30 13:26:38.640850 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 30 13:26:38.643485 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 30 13:26:38.645679 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 30 13:26:38.645852 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 13:26:38.648794 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 30 13:26:38.648911 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 13:26:38.649357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 30 13:26:38.649456 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 13:26:38.746828 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Oct 30 13:26:38.751001 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 30 13:26:38.776502 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 30 13:26:38.950070 ignition[1122]: INFO : Ignition 2.22.0 Oct 30 13:26:38.950070 ignition[1122]: INFO : Stage: umount Oct 30 13:26:38.952775 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 13:26:38.952775 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:26:38.952775 ignition[1122]: INFO : umount: umount passed Oct 30 13:26:38.952775 ignition[1122]: INFO : Ignition finished successfully Oct 30 13:26:38.961455 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 30 13:26:38.963026 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 30 13:26:38.966635 systemd[1]: Stopped target network.target - Network. Oct 30 13:26:38.969683 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 30 13:26:38.969839 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 30 13:26:38.971093 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 30 13:26:38.971154 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 30 13:26:38.971623 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 30 13:26:38.971680 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 30 13:26:38.978662 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 30 13:26:38.978721 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 30 13:26:38.982928 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 30 13:26:38.986022 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 30 13:26:39.074104 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 30 13:26:39.074256 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 30 13:26:39.087726 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 30 13:26:39.087958 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 30 13:26:39.094664 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 30 13:26:39.095370 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 30 13:26:39.095420 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 30 13:26:39.099449 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 30 13:26:39.101814 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 30 13:26:39.101877 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 13:26:39.104907 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 30 13:26:39.104961 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 30 13:26:39.108394 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 30 13:26:39.108452 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 30 13:26:39.108875 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 13:26:39.109950 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 30 13:26:39.119513 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 30 13:26:39.122616 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 30 13:26:39.122733 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Oct 30 13:26:39.129985 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 30 13:26:39.130174 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 13:26:39.133360 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 30 13:26:39.133411 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 30 13:26:39.136372 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 30 13:26:39.136417 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 13:26:39.138863 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 30 13:26:39.138919 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 30 13:26:39.232570 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 30 13:26:39.232648 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 30 13:26:39.273035 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 30 13:26:39.273109 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 13:26:39.280001 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 30 13:26:39.283083 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 30 13:26:39.283144 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 13:26:39.284012 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 30 13:26:39.284059 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 13:26:39.284826 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 30 13:26:39.284872 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 30 13:26:39.291687 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 30 13:26:39.291739 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 13:26:39.295062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 13:26:39.295116 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:26:39.323646 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 30 13:26:39.323809 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 30 13:26:39.324910 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 30 13:26:39.325014 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 30 13:26:39.329210 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 30 13:26:39.334993 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 30 13:26:39.356472 systemd[1]: Switching root. Oct 30 13:26:39.391740 systemd-journald[314]: Journal stopped Oct 30 13:26:41.833553 systemd-journald[314]: Received SIGTERM from PID 1 (systemd). 
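After "Switching root" the initrd journal is handed off and flushed into the real root's journal later in the boot (see the systemd-journal-flush entries further down). The full log for this boot, including the pre-switch-root entries above, can be read back with journalctl; a sketch:

    # Current boot, high-resolution timestamps like the ones in this log
    journalctl -b -o short-precise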
Oct 30 13:26:41.833697 kernel: SELinux: policy capability network_peer_controls=1 Oct 30 13:26:41.833713 kernel: SELinux: policy capability open_perms=1 Oct 30 13:26:41.833729 kernel: SELinux: policy capability extended_socket_class=1 Oct 30 13:26:41.833749 kernel: SELinux: policy capability always_check_network=0 Oct 30 13:26:41.833762 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 30 13:26:41.833774 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 30 13:26:41.833794 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 30 13:26:41.833806 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 30 13:26:41.833818 kernel: SELinux: policy capability userspace_initial_context=0 Oct 30 13:26:41.833831 kernel: audit: type=1403 audit(1761830800.789:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 30 13:26:41.833844 systemd[1]: Successfully loaded SELinux policy in 78.025ms. Oct 30 13:26:41.833871 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.776ms. Oct 30 13:26:41.833886 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 30 13:26:41.833906 systemd[1]: Detected virtualization kvm. Oct 30 13:26:41.833919 systemd[1]: Detected architecture x86-64. Oct 30 13:26:41.833932 systemd[1]: Detected first boot. Oct 30 13:26:41.833945 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 30 13:26:41.833958 zram_generator::config[1168]: No configuration found. Oct 30 13:26:41.833972 kernel: Guest personality initialized and is inactive Oct 30 13:26:41.833985 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 30 13:26:41.834004 kernel: Initialized host personality Oct 30 13:26:41.834016 kernel: NET: Registered PF_VSOCK protocol family Oct 30 13:26:41.834033 systemd[1]: Populated /etc with preset unit settings. Oct 30 13:26:41.834057 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 30 13:26:41.834071 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 30 13:26:41.834084 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 30 13:26:41.834097 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 30 13:26:41.834117 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 30 13:26:41.834130 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 30 13:26:41.834143 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 30 13:26:41.834156 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 30 13:26:41.834170 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 30 13:26:41.834183 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 30 13:26:41.834196 systemd[1]: Created slice user.slice - User and Session Slice. Oct 30 13:26:41.834216 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 13:26:41.834232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 30 13:26:41.834245 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 30 13:26:41.834258 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 30 13:26:41.834271 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 30 13:26:41.834298 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 13:26:41.834320 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 30 13:26:41.834333 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 13:26:41.834346 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 13:26:41.834359 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 30 13:26:41.834375 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 30 13:26:41.834388 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 30 13:26:41.834401 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 30 13:26:41.834421 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 13:26:41.834434 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 13:26:41.834448 systemd[1]: Reached target slices.target - Slice Units. Oct 30 13:26:41.834461 systemd[1]: Reached target swap.target - Swaps. Oct 30 13:26:41.834474 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 30 13:26:41.834487 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 30 13:26:41.834501 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 30 13:26:41.834521 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 30 13:26:41.834534 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 13:26:41.834548 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 13:26:41.834560 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 30 13:26:41.834580 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 30 13:26:41.834594 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 30 13:26:41.834609 systemd[1]: Mounting media.mount - External Media Directory... Oct 30 13:26:41.834622 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 13:26:41.834642 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 30 13:26:41.834656 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 30 13:26:41.834669 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 30 13:26:41.834687 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 30 13:26:41.834702 systemd[1]: Reached target machines.target - Containers. Oct 30 13:26:41.834718 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 30 13:26:41.834746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 30 13:26:41.834762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 13:26:41.834775 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 30 13:26:41.834791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 13:26:41.834804 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 13:26:41.834818 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 13:26:41.834831 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 30 13:26:41.834851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 13:26:41.834864 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 30 13:26:41.834877 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 30 13:26:41.834893 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 30 13:26:41.834906 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 30 13:26:41.834919 systemd[1]: Stopped systemd-fsck-usr.service. Oct 30 13:26:41.834932 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 13:26:41.834952 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 13:26:41.834966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 13:26:41.834978 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 30 13:26:41.834991 kernel: fuse: init (API version 7.41) Oct 30 13:26:41.835007 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 30 13:26:41.835023 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 30 13:26:41.835036 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 30 13:26:41.835075 systemd-journald[1232]: Collecting audit messages is disabled. Oct 30 13:26:41.835099 systemd-journald[1232]: Journal started Oct 30 13:26:41.835129 systemd-journald[1232]: Runtime Journal (/run/log/journal/3c05884345da460b8926b664334f5163) is 6M, max 48.1M, 42.1M free. Oct 30 13:26:41.428649 systemd[1]: Queued start job for default target multi-user.target. Oct 30 13:26:41.455461 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 30 13:26:41.455998 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 30 13:26:41.844342 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 13:26:41.848465 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 13:26:41.851216 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 30 13:26:41.852990 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 30 13:26:41.855109 systemd[1]: Mounted media.mount - External Media Directory. Oct 30 13:26:41.864461 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 30 13:26:41.866350 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Oct 30 13:26:41.868236 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 30 13:26:41.870126 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 13:26:41.872433 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 30 13:26:41.872659 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 30 13:26:41.874840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 13:26:41.875062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 13:26:41.877173 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 13:26:41.877431 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 13:26:41.879695 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 30 13:26:41.879944 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 30 13:26:41.881962 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 13:26:41.882190 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 13:26:41.884419 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 13:26:41.887567 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 30 13:26:41.892188 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 30 13:26:41.904749 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 30 13:26:41.906927 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 30 13:26:41.910537 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 30 13:26:41.912307 kernel: ACPI: bus type drm_connector registered Oct 30 13:26:41.915585 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 30 13:26:41.917361 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 30 13:26:41.917395 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 13:26:41.920004 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 30 13:26:41.921071 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 13:26:41.929370 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 30 13:26:41.933133 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 30 13:26:41.935407 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 13:26:41.936754 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 30 13:26:41.938521 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 13:26:41.940183 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 30 13:26:41.944462 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 30 13:26:41.949124 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 13:26:41.949403 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
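The modprobe@configfs/dm_mod/efi_pstore/fuse/loop/drm entries above are all instances of one template unit that loads the kernel module named by the instance. Roughly equivalent shell, as a sketch rather than the exact ExecStart line:

    # What each modprobe@<module>.service instance boils down to
    modprobe -abq fuse
    # Or drive it through the template unit itself
    systemctl start modprobe@fuse.service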
Oct 30 13:26:41.950573 systemd-journald[1232]: Time spent on flushing to /var/log/journal/3c05884345da460b8926b664334f5163 is 15.089ms for 1057 entries. Oct 30 13:26:41.950573 systemd-journald[1232]: System Journal (/var/log/journal/3c05884345da460b8926b664334f5163) is 8M, max 163.5M, 155.5M free. Oct 30 13:26:41.971711 systemd-journald[1232]: Received client request to flush runtime journal. Oct 30 13:26:41.971773 kernel: loop1: detected capacity change from 0 to 111544 Oct 30 13:26:41.953265 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 13:26:41.956360 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 13:26:41.958650 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 30 13:26:41.960899 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 30 13:26:41.970947 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 13:26:41.973844 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 30 13:26:41.979476 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 30 13:26:41.982050 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 30 13:26:41.984911 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 30 13:26:41.994429 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 30 13:26:42.001317 kernel: loop2: detected capacity change from 0 to 224512 Oct 30 13:26:42.003210 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Oct 30 13:26:42.003230 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Oct 30 13:26:42.010019 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 30 13:26:42.014538 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 30 13:26:42.019446 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 13:26:42.030715 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 30 13:26:42.039403 kernel: loop3: detected capacity change from 0 to 128912 Oct 30 13:26:42.061680 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 30 13:26:42.065816 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 13:26:42.068472 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 13:26:42.074500 kernel: loop4: detected capacity change from 0 to 111544 Oct 30 13:26:42.084311 kernel: loop5: detected capacity change from 0 to 224512 Oct 30 13:26:42.085812 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 30 13:26:42.094315 kernel: loop6: detected capacity change from 0 to 128912 Oct 30 13:26:42.100081 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Oct 30 13:26:42.100101 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Oct 30 13:26:42.102004 (sd-merge)[1310]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 30 13:26:42.105981 (sd-merge)[1310]: Merged extensions into '/usr'. Oct 30 13:26:42.107366 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 13:26:42.112345 systemd[1]: Reload requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)... Oct 30 13:26:42.112366 systemd[1]: Reloading... 
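The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, followed by a daemon reload. The merge can be inspected or redone from a shell; a minimal sketch:

    # List known system extensions and whether they are currently merged
    systemd-sysext status
    # Re-merge after adding or removing *.raw images under /etc/extensions or /var/lib/extensions
    systemd-sysext refresh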
Oct 30 13:26:42.257049 zram_generator::config[1379]: No configuration found. Oct 30 13:26:42.334662 systemd-resolved[1308]: Positive Trust Anchors: Oct 30 13:26:42.334684 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 13:26:42.334691 systemd-resolved[1308]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 30 13:26:42.334722 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 13:26:42.338944 systemd-resolved[1308]: Defaulting to hostname 'linux'. Oct 30 13:26:42.437393 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 30 13:26:42.437943 systemd[1]: Reloading finished in 325 ms. Oct 30 13:26:42.470658 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 30 13:26:42.472873 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 13:26:42.475044 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 30 13:26:42.479857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 13:26:42.496936 systemd[1]: Starting ensure-sysext.service... Oct 30 13:26:42.499455 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 13:26:42.522536 systemd[1]: Reload requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Oct 30 13:26:42.522555 systemd[1]: Reloading... Oct 30 13:26:42.525133 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 30 13:26:42.525174 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 30 13:26:42.525616 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 30 13:26:42.526085 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 30 13:26:42.527051 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 30 13:26:42.527381 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Oct 30 13:26:42.527545 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Oct 30 13:26:42.535524 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 13:26:42.535537 systemd-tmpfiles[1383]: Skipping /boot Oct 30 13:26:42.546389 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 13:26:42.546468 systemd-tmpfiles[1383]: Skipping /boot Oct 30 13:26:42.615330 zram_generator::config[1425]: No configuration found. Oct 30 13:26:42.815785 systemd[1]: Reloading finished in 292 ms. Oct 30 13:26:42.839010 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 30 13:26:42.873668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
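The "Duplicate line for path ..., ignoring" warnings above mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first and skips the rest. The format being parsed is the usual tmpfiles.d(5) one; an illustrative fragment with placeholder owner and mode, not the actual Flatcar-shipped file:

    cat > /etc/tmpfiles.d/example.conf <<'EOF'
    # Type  Path             Mode  User  Group  Age  Argument
    d       /var/lib/nfs/sm  0700  root  root   -    -
    EOF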
Oct 30 13:26:42.885867 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 13:26:42.889540 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 30 13:26:42.892564 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 30 13:26:42.906266 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 30 13:26:42.910678 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 13:26:42.914897 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 30 13:26:42.921847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 13:26:42.922029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 13:26:42.923569 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 13:26:42.927892 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 13:26:42.932858 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 13:26:42.936557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 13:26:42.936730 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 13:26:42.936831 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 13:26:42.941850 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 13:26:42.942179 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 13:26:42.944970 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 13:26:42.945892 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 13:26:42.952064 systemd-udevd[1456]: Using default interface naming scheme 'v257'. Oct 30 13:26:42.959072 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 13:26:42.959317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 13:26:42.962253 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 13:26:42.970103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 13:26:42.971064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 13:26:42.971196 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 13:26:42.971333 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 13:26:42.982807 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Oct 30 13:26:42.987513 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 30 13:26:42.991103 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 13:26:42.992053 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 13:26:42.995590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 13:26:42.996249 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 13:26:42.999762 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 13:26:43.004121 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 13:26:43.005338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 13:26:43.022667 augenrules[1502]: No rules Oct 30 13:26:43.023956 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 13:26:43.025220 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 13:26:43.037057 systemd[1]: Finished ensure-sysext.service. Oct 30 13:26:43.040824 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 13:26:43.041013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 13:26:43.042521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 13:26:43.047922 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 13:26:43.056588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 13:26:43.087612 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 13:26:43.089683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 13:26:43.089751 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 13:26:43.094131 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 13:26:43.098588 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 30 13:26:43.100733 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 13:26:43.101871 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 13:26:43.102174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 13:26:43.107009 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 13:26:43.107303 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 13:26:43.109599 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 13:26:43.109835 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 13:26:43.112667 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 13:26:43.112932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 13:26:43.121278 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
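augenrules reporting "No rules" above means no persistent rules were found under /etc/audit/rules.d, so the kernel audit rule set stays empty. That can be confirmed from a shell; a sketch:

    # List audit rules currently loaded in the kernel (expected to print "No rules" here)
    auditctl -l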
Oct 30 13:26:43.134317 kernel: mousedev: PS/2 mouse device common for all mice Oct 30 13:26:43.137489 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 30 13:26:43.137658 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 13:26:43.137730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 13:26:43.137754 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 30 13:26:43.149338 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Oct 30 13:26:43.156323 kernel: ACPI: button: Power Button [PWRF] Oct 30 13:26:43.289340 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 30 13:26:43.295698 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 30 13:26:43.298076 systemd[1]: Reached target time-set.target - System Time Set. Oct 30 13:26:43.301215 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 30 13:26:43.308213 systemd-networkd[1525]: lo: Link UP Oct 30 13:26:43.308225 systemd-networkd[1525]: lo: Gained carrier Oct 30 13:26:43.311140 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 13:26:43.313424 systemd-networkd[1525]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 13:26:43.313448 systemd-networkd[1525]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 13:26:43.315136 systemd[1]: Reached target network.target - Network. Oct 30 13:26:43.319194 systemd-networkd[1525]: eth0: Link UP Oct 30 13:26:43.319518 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 30 13:26:43.323634 systemd-networkd[1525]: eth0: Gained carrier Oct 30 13:26:43.323661 systemd-networkd[1525]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 13:26:43.326530 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 30 13:26:43.331910 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 30 13:26:43.345928 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 30 13:26:43.346252 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 30 13:26:43.346492 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 30 13:26:43.349374 systemd-networkd[1525]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 30 13:26:43.350176 systemd-timesyncd[1526]: Network configuration changed, trying to establish connection. Oct 30 13:26:44.687242 systemd-timesyncd[1526]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 30 13:26:44.687373 systemd-timesyncd[1526]: Initial clock synchronization to Thu 2025-10-30 13:26:44.687064 UTC. Oct 30 13:26:44.687425 systemd-resolved[1308]: Clock change detected. Flushing caches. 
Oct 30 13:26:44.717897 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 30 13:26:44.760248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 13:26:44.913531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 13:26:44.915704 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:26:44.920095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 13:26:44.958285 kernel: kvm_amd: TSC scaling supported Oct 30 13:26:44.958444 kernel: kvm_amd: Nested Virtualization enabled Oct 30 13:26:44.958467 kernel: kvm_amd: Nested Paging enabled Oct 30 13:26:44.958492 kernel: kvm_amd: LBR virtualization supported Oct 30 13:26:44.958526 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 30 13:26:44.958544 kernel: kvm_amd: Virtual GIF supported Oct 30 13:26:44.994644 kernel: EDAC MC: Ver: 3.0.0 Oct 30 13:26:45.037769 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:26:45.087458 ldconfig[1454]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 30 13:26:45.197797 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 30 13:26:45.201718 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 30 13:26:45.237152 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 30 13:26:45.240414 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 13:26:45.244208 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 30 13:26:45.247203 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 30 13:26:45.250593 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 30 13:26:45.254335 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 30 13:26:45.256935 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 30 13:26:45.260650 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 30 13:26:45.264178 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 30 13:26:45.264235 systemd[1]: Reached target paths.target - Path Units. Oct 30 13:26:45.266589 systemd[1]: Reached target timers.target - Timer Units. Oct 30 13:26:45.271187 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 30 13:26:45.277762 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 30 13:26:45.284454 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 30 13:26:45.287437 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 30 13:26:45.290256 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 30 13:26:45.295455 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 30 13:26:45.297532 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 30 13:26:45.300275 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 30 13:26:45.302894 systemd[1]: Reached target sockets.target - Socket Units. 
Oct 30 13:26:45.304600 systemd[1]: Reached target basic.target - Basic System. Oct 30 13:26:45.306260 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 30 13:26:45.306290 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 30 13:26:45.307435 systemd[1]: Starting containerd.service - containerd container runtime... Oct 30 13:26:45.310343 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 30 13:26:45.313287 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 30 13:26:45.326545 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 30 13:26:45.331302 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 30 13:26:45.333221 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 30 13:26:45.335890 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 30 13:26:45.339472 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 30 13:26:45.342908 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 30 13:26:45.345164 jq[1580]: false Oct 30 13:26:45.350611 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 30 13:26:45.357301 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 30 13:26:45.362892 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing passwd entry cache Oct 30 13:26:45.364286 oslogin_cache_refresh[1582]: Refreshing passwd entry cache Oct 30 13:26:45.367560 extend-filesystems[1581]: Found /dev/vda6 Oct 30 13:26:45.367506 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 30 13:26:45.369330 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 30 13:26:45.370155 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 30 13:26:45.372283 systemd[1]: Starting update-engine.service - Update Engine... Oct 30 13:26:45.376929 extend-filesystems[1581]: Found /dev/vda9 Oct 30 13:26:45.379040 extend-filesystems[1581]: Checking size of /dev/vda9 Oct 30 13:26:45.381137 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 30 13:26:45.384506 oslogin_cache_refresh[1582]: Failure getting users, quitting Oct 30 13:26:45.386306 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting users, quitting Oct 30 13:26:45.386306 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 30 13:26:45.386306 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing group entry cache Oct 30 13:26:45.384532 oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 30 13:26:45.384595 oslogin_cache_refresh[1582]: Refreshing group entry cache Oct 30 13:26:45.387218 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 30 13:26:45.443449 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Oct 30 13:26:45.447786 jq[1600]: true Oct 30 13:26:45.444882 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 30 13:26:45.445715 systemd[1]: motdgen.service: Deactivated successfully. Oct 30 13:26:45.447111 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 30 13:26:45.469707 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting groups, quitting Oct 30 13:26:45.469707 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 30 13:26:45.468796 oslogin_cache_refresh[1582]: Failure getting groups, quitting Oct 30 13:26:45.468817 oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 30 13:26:45.470586 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 30 13:26:45.470882 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 30 13:26:45.474591 update_engine[1597]: I20251030 13:26:45.472946 1597 main.cc:92] Flatcar Update Engine starting Oct 30 13:26:45.473631 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 30 13:26:45.473942 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 30 13:26:45.479140 extend-filesystems[1581]: Resized partition /dev/vda9 Oct 30 13:26:45.489629 extend-filesystems[1619]: resize2fs 1.47.3 (8-Jul-2025) Oct 30 13:26:45.498391 jq[1612]: true Oct 30 13:26:45.554443 sshd_keygen[1605]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 30 13:26:45.582833 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 30 13:26:45.590571 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 30 13:26:45.619750 systemd[1]: issuegen.service: Deactivated successfully. Oct 30 13:26:45.620076 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 30 13:26:45.623279 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 30 13:26:45.676121 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 30 13:26:45.681713 systemd-logind[1593]: Watching system buttons on /dev/input/event2 (Power Button) Oct 30 13:26:45.681739 systemd-logind[1593]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 30 13:26:45.682322 systemd-logind[1593]: New seat seat0. Oct 30 13:26:45.684357 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 30 13:26:45.687460 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 30 13:26:45.689377 systemd[1]: Reached target getty.target - Login Prompts. Oct 30 13:26:45.691131 systemd[1]: Started systemd-logind.service - User Login Management. Oct 30 13:26:45.695280 tar[1607]: linux-amd64/LICENSE Oct 30 13:26:45.695280 tar[1607]: linux-amd64/helm Oct 30 13:26:45.742036 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 30 13:26:45.939553 dbus-daemon[1578]: [system] SELinux support is enabled Oct 30 13:26:45.939868 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 30 13:26:45.943613 systemd-networkd[1525]: eth0: Gained IPv6LL Oct 30 13:26:45.945546 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Oct 30 13:26:45.945580 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 30 13:26:45.948327 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 30 13:26:45.948353 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 30 13:26:45.951739 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 30 13:26:45.955273 update_engine[1597]: I20251030 13:26:45.954694 1597 update_check_scheduler.cc:74] Next update check in 5m44s Oct 30 13:26:45.955437 systemd[1]: Reached target network-online.target - Network is Online. Oct 30 13:26:45.957305 dbus-daemon[1578]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 30 13:26:45.959240 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 30 13:26:46.042690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:26:46.046021 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 30 13:26:46.049036 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 30 13:26:46.068576 systemd[1]: Started update-engine.service - Update Engine. Oct 30 13:26:46.073542 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 30 13:26:46.113623 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 30 13:26:46.113937 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 30 13:26:46.116445 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 30 13:26:46.132126 locksmithd[1669]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 30 13:26:46.202778 bash[1657]: Updated "/home/core/.ssh/authorized_keys" Oct 30 13:26:46.250147 extend-filesystems[1619]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 30 13:26:46.250147 extend-filesystems[1619]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 30 13:26:46.250147 extend-filesystems[1619]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 30 13:26:46.256256 extend-filesystems[1581]: Resized filesystem in /dev/vda9 Oct 30 13:26:46.255763 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 30 13:26:46.262449 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 30 13:26:46.262805 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 30 13:26:46.265985 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 30 13:26:46.275213 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Oct 30 13:26:46.465852 containerd[1614]: time="2025-10-30T13:26:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 30 13:26:46.470019 containerd[1614]: time="2025-10-30T13:26:46.467710422Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 30 13:26:46.483229 containerd[1614]: time="2025-10-30T13:26:46.483181320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.336µs" Oct 30 13:26:46.483229 containerd[1614]: time="2025-10-30T13:26:46.483222016Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 30 13:26:46.483306 containerd[1614]: time="2025-10-30T13:26:46.483248546Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 30 13:26:46.483466 containerd[1614]: time="2025-10-30T13:26:46.483443091Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 30 13:26:46.483527 containerd[1614]: time="2025-10-30T13:26:46.483465874Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 30 13:26:46.483527 containerd[1614]: time="2025-10-30T13:26:46.483502402Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 13:26:46.483613 containerd[1614]: time="2025-10-30T13:26:46.483591149Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 13:26:46.483613 containerd[1614]: time="2025-10-30T13:26:46.483611036Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 13:26:46.483888 containerd[1614]: time="2025-10-30T13:26:46.483851998Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 13:26:46.483888 containerd[1614]: time="2025-10-30T13:26:46.483873999Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 13:26:46.483941 containerd[1614]: time="2025-10-30T13:26:46.483888867Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 13:26:46.483941 containerd[1614]: time="2025-10-30T13:26:46.483898675Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 30 13:26:46.484072 containerd[1614]: time="2025-10-30T13:26:46.484048115Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 30 13:26:46.484366 containerd[1614]: time="2025-10-30T13:26:46.484338620Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 13:26:46.484418 containerd[1614]: time="2025-10-30T13:26:46.484394375Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 13:26:46.484418 containerd[1614]: time="2025-10-30T13:26:46.484413300Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 30 13:26:46.484468 containerd[1614]: time="2025-10-30T13:26:46.484458325Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 30 13:26:46.485246 containerd[1614]: time="2025-10-30T13:26:46.485214182Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 30 13:26:46.485739 containerd[1614]: time="2025-10-30T13:26:46.485705854Z" level=info msg="metadata content store policy set" policy=shared Oct 30 13:26:46.541279 tar[1607]: linux-amd64/README.md Oct 30 13:26:46.565939 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 30 13:26:46.792650 containerd[1614]: time="2025-10-30T13:26:46.792470527Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 30 13:26:46.792650 containerd[1614]: time="2025-10-30T13:26:46.792607755Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 30 13:26:46.792650 containerd[1614]: time="2025-10-30T13:26:46.792625718Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 30 13:26:46.792650 containerd[1614]: time="2025-10-30T13:26:46.792645556Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 30 13:26:46.792650 containerd[1614]: time="2025-10-30T13:26:46.792658309Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 30 13:26:46.792883 containerd[1614]: time="2025-10-30T13:26:46.792668208Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 30 13:26:46.792883 containerd[1614]: time="2025-10-30T13:26:46.792682875Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 30 13:26:46.792883 containerd[1614]: time="2025-10-30T13:26:46.792698415Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 30 13:26:46.792883 containerd[1614]: time="2025-10-30T13:26:46.792724704Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 30 13:26:46.792883 containerd[1614]: time="2025-10-30T13:26:46.792740443Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 30 13:26:46.792883 containerd[1614]: time="2025-10-30T13:26:46.792751143Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 30 13:26:46.792883 containerd[1614]: time="2025-10-30T13:26:46.792775760Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 30 13:26:46.793086 containerd[1614]: time="2025-10-30T13:26:46.793063890Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 30 13:26:46.793113 containerd[1614]: time="2025-10-30T13:26:46.793095379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 30 13:26:46.793144 containerd[1614]: time="2025-10-30T13:26:46.793121107Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 30 13:26:46.793144 containerd[1614]: time="2025-10-30T13:26:46.793139401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 30 13:26:46.793186 containerd[1614]: time="2025-10-30T13:26:46.793152025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 30 13:26:46.793186 containerd[1614]: time="2025-10-30T13:26:46.793163717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 30 13:26:46.793226 containerd[1614]: time="2025-10-30T13:26:46.793188924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 30 13:26:46.793226 containerd[1614]: time="2025-10-30T13:26:46.793212078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 30 13:26:46.793392 containerd[1614]: time="2025-10-30T13:26:46.793224541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 30 13:26:46.793392 containerd[1614]: time="2025-10-30T13:26:46.793235111Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 30 13:26:46.793392 containerd[1614]: time="2025-10-30T13:26:46.793264947Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 30 13:26:46.793459 containerd[1614]: time="2025-10-30T13:26:46.793416130Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 30 13:26:46.793459 containerd[1614]: time="2025-10-30T13:26:46.793440135Z" level=info msg="Start snapshots syncer" Oct 30 13:26:46.793500 containerd[1614]: time="2025-10-30T13:26:46.793492544Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 30 13:26:46.793886 containerd[1614]: time="2025-10-30T13:26:46.793798457Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 30 13:26:46.794038 containerd[1614]: time="2025-10-30T13:26:46.793897493Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 30 13:26:46.797021 containerd[1614]: time="2025-10-30T13:26:46.796976556Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 30 13:26:46.797157 containerd[1614]: time="2025-10-30T13:26:46.797125776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 30 13:26:46.797157 containerd[1614]: time="2025-10-30T13:26:46.797152236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 30 13:26:46.797200 containerd[1614]: time="2025-10-30T13:26:46.797165460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 30 13:26:46.797200 containerd[1614]: time="2025-10-30T13:26:46.797175469Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 30 13:26:46.797200 containerd[1614]: time="2025-10-30T13:26:46.797191189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 30 13:26:46.797269 containerd[1614]: time="2025-10-30T13:26:46.797206457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 30 13:26:46.797269 containerd[1614]: time="2025-10-30T13:26:46.797246082Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 30 13:26:46.797307 containerd[1614]: time="2025-10-30T13:26:46.797282350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 30 13:26:46.797307 containerd[1614]: 
time="2025-10-30T13:26:46.797298129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 30 13:26:46.797342 containerd[1614]: time="2025-10-30T13:26:46.797309090Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 30 13:26:46.797411 containerd[1614]: time="2025-10-30T13:26:46.797389671Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 13:26:46.797444 containerd[1614]: time="2025-10-30T13:26:46.797413245Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 13:26:46.797444 containerd[1614]: time="2025-10-30T13:26:46.797423093Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 13:26:46.797444 containerd[1614]: time="2025-10-30T13:26:46.797432391Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 13:26:46.797444 containerd[1614]: time="2025-10-30T13:26:46.797439935Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 30 13:26:46.797514 containerd[1614]: time="2025-10-30T13:26:46.797454542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 30 13:26:46.797514 containerd[1614]: time="2025-10-30T13:26:46.797465162Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 30 13:26:46.797514 containerd[1614]: time="2025-10-30T13:26:46.797502582Z" level=info msg="runtime interface created" Oct 30 13:26:46.797514 containerd[1614]: time="2025-10-30T13:26:46.797510778Z" level=info msg="created NRI interface" Oct 30 13:26:46.797589 containerd[1614]: time="2025-10-30T13:26:46.797521448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 30 13:26:46.797589 containerd[1614]: time="2025-10-30T13:26:46.797537508Z" level=info msg="Connect containerd service" Oct 30 13:26:46.797589 containerd[1614]: time="2025-10-30T13:26:46.797565731Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 30 13:26:46.798726 containerd[1614]: time="2025-10-30T13:26:46.798695349Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 13:26:47.052218 containerd[1614]: time="2025-10-30T13:26:47.052060228Z" level=info msg="Start subscribing containerd event" Oct 30 13:26:47.052359 containerd[1614]: time="2025-10-30T13:26:47.052181736Z" level=info msg="Start recovering state" Oct 30 13:26:47.052677 containerd[1614]: time="2025-10-30T13:26:47.052396689Z" level=info msg="Start event monitor" Oct 30 13:26:47.052677 containerd[1614]: time="2025-10-30T13:26:47.052427166Z" level=info msg="Start cni network conf syncer for default" Oct 30 13:26:47.052677 containerd[1614]: time="2025-10-30T13:26:47.052463554Z" level=info msg="Start streaming server" Oct 30 13:26:47.052677 containerd[1614]: time="2025-10-30T13:26:47.052492428Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 30 13:26:47.052677 containerd[1614]: 
time="2025-10-30T13:26:47.052501315Z" level=info msg="runtime interface starting up..." Oct 30 13:26:47.052677 containerd[1614]: time="2025-10-30T13:26:47.052511785Z" level=info msg="starting plugins..." Oct 30 13:26:47.052677 containerd[1614]: time="2025-10-30T13:26:47.052535970Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 30 13:26:47.053047 containerd[1614]: time="2025-10-30T13:26:47.053005360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 30 13:26:47.053123 containerd[1614]: time="2025-10-30T13:26:47.053104235Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 30 13:26:47.053215 containerd[1614]: time="2025-10-30T13:26:47.053190247Z" level=info msg="containerd successfully booted in 0.589483s" Oct 30 13:26:47.053406 systemd[1]: Started containerd.service - containerd container runtime. Oct 30 13:26:47.972040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:26:47.974654 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 30 13:26:47.975744 systemd[1]: Startup finished in 2.997s (kernel) + 9.717s (initrd) + 5.925s (userspace) = 18.640s. Oct 30 13:26:47.977178 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 13:26:48.683743 kubelet[1721]: E1030 13:26:48.683639 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 13:26:48.687638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 13:26:48.687843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 13:26:48.688257 systemd[1]: kubelet.service: Consumed 2.074s CPU time, 265.3M memory peak. Oct 30 13:26:54.904469 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 30 13:26:54.905974 systemd[1]: Started sshd@0-10.0.0.124:22-10.0.0.1:49656.service - OpenSSH per-connection server daemon (10.0.0.1:49656). Oct 30 13:26:54.991545 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 49656 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:26:54.993711 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:26:55.001112 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 30 13:26:55.002322 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 30 13:26:55.008407 systemd-logind[1593]: New session 1 of user core. Oct 30 13:26:55.032149 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 30 13:26:55.036063 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 30 13:26:55.050497 (systemd)[1739]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 30 13:26:55.052938 systemd-logind[1593]: New session c1 of user core. Oct 30 13:26:55.207579 systemd[1739]: Queued start job for default target default.target. Oct 30 13:26:55.231321 systemd[1739]: Created slice app.slice - User Application Slice. Oct 30 13:26:55.231349 systemd[1739]: Reached target paths.target - Paths. Oct 30 13:26:55.231391 systemd[1739]: Reached target timers.target - Timers. 
Oct 30 13:26:55.233035 systemd[1739]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 30 13:26:55.246503 systemd[1739]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 30 13:26:55.246678 systemd[1739]: Reached target sockets.target - Sockets. Oct 30 13:26:55.246755 systemd[1739]: Reached target basic.target - Basic System. Oct 30 13:26:55.246827 systemd[1739]: Reached target default.target - Main User Target. Oct 30 13:26:55.246893 systemd[1739]: Startup finished in 186ms. Oct 30 13:26:55.246978 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 30 13:26:55.248728 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 30 13:26:55.260836 systemd[1]: Started sshd@1-10.0.0.124:22-10.0.0.1:49660.service - OpenSSH per-connection server daemon (10.0.0.1:49660). Oct 30 13:26:55.318819 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 49660 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:26:55.320306 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:26:55.324619 systemd-logind[1593]: New session 2 of user core. Oct 30 13:26:55.338164 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 30 13:26:55.351826 sshd[1753]: Connection closed by 10.0.0.1 port 49660 Oct 30 13:26:55.352199 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Oct 30 13:26:55.366655 systemd[1]: sshd@1-10.0.0.124:22-10.0.0.1:49660.service: Deactivated successfully. Oct 30 13:26:55.368849 systemd[1]: session-2.scope: Deactivated successfully. Oct 30 13:26:55.369681 systemd-logind[1593]: Session 2 logged out. Waiting for processes to exit. Oct 30 13:26:55.372903 systemd[1]: Started sshd@2-10.0.0.124:22-10.0.0.1:49666.service - OpenSSH per-connection server daemon (10.0.0.1:49666). Oct 30 13:26:55.373687 systemd-logind[1593]: Removed session 2. Oct 30 13:26:55.429177 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 49666 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:26:55.430940 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:26:55.435675 systemd-logind[1593]: New session 3 of user core. Oct 30 13:26:55.445148 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 30 13:26:55.454333 sshd[1762]: Connection closed by 10.0.0.1 port 49666 Oct 30 13:26:55.454624 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Oct 30 13:26:55.471807 systemd[1]: sshd@2-10.0.0.124:22-10.0.0.1:49666.service: Deactivated successfully. Oct 30 13:26:55.473753 systemd[1]: session-3.scope: Deactivated successfully. Oct 30 13:26:55.474508 systemd-logind[1593]: Session 3 logged out. Waiting for processes to exit. Oct 30 13:26:55.477354 systemd[1]: Started sshd@3-10.0.0.124:22-10.0.0.1:49674.service - OpenSSH per-connection server daemon (10.0.0.1:49674). Oct 30 13:26:55.477964 systemd-logind[1593]: Removed session 3. Oct 30 13:26:55.538522 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 49674 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:26:55.540185 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:26:55.544660 systemd-logind[1593]: New session 4 of user core. Oct 30 13:26:55.554143 systemd[1]: Started session-4.scope - Session 4 of User core. 
Oct 30 13:26:55.568716 sshd[1771]: Connection closed by 10.0.0.1 port 49674 Oct 30 13:26:55.569102 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Oct 30 13:26:55.578476 systemd[1]: sshd@3-10.0.0.124:22-10.0.0.1:49674.service: Deactivated successfully. Oct 30 13:26:55.580564 systemd[1]: session-4.scope: Deactivated successfully. Oct 30 13:26:55.581324 systemd-logind[1593]: Session 4 logged out. Waiting for processes to exit. Oct 30 13:26:55.584530 systemd[1]: Started sshd@4-10.0.0.124:22-10.0.0.1:49678.service - OpenSSH per-connection server daemon (10.0.0.1:49678). Oct 30 13:26:55.585371 systemd-logind[1593]: Removed session 4. Oct 30 13:26:55.635350 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 49678 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:26:55.636990 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:26:55.642061 systemd-logind[1593]: New session 5 of user core. Oct 30 13:26:55.658202 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 30 13:26:55.684642 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 30 13:26:55.684963 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 13:26:55.703683 sudo[1781]: pam_unix(sudo:session): session closed for user root Oct 30 13:26:55.705794 sshd[1780]: Connection closed by 10.0.0.1 port 49678 Oct 30 13:26:55.706208 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Oct 30 13:26:55.725261 systemd[1]: sshd@4-10.0.0.124:22-10.0.0.1:49678.service: Deactivated successfully. Oct 30 13:26:55.727186 systemd[1]: session-5.scope: Deactivated successfully. Oct 30 13:26:55.727946 systemd-logind[1593]: Session 5 logged out. Waiting for processes to exit. Oct 30 13:26:55.730774 systemd[1]: Started sshd@5-10.0.0.124:22-10.0.0.1:49690.service - OpenSSH per-connection server daemon (10.0.0.1:49690). Oct 30 13:26:55.731520 systemd-logind[1593]: Removed session 5. Oct 30 13:26:55.790539 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 49690 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:26:55.792678 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:26:55.797275 systemd-logind[1593]: New session 6 of user core. Oct 30 13:26:55.808176 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 30 13:26:55.823236 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 30 13:26:55.823540 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 13:26:55.832916 sudo[1792]: pam_unix(sudo:session): session closed for user root Oct 30 13:26:55.840364 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 30 13:26:55.840667 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 13:26:55.851051 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 13:26:55.909604 augenrules[1814]: No rules Oct 30 13:26:55.911507 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 13:26:55.911796 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Oct 30 13:26:55.913080 sudo[1791]: pam_unix(sudo:session): session closed for user root Oct 30 13:26:55.915103 sshd[1790]: Connection closed by 10.0.0.1 port 49690 Oct 30 13:26:55.915433 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Oct 30 13:26:55.924538 systemd[1]: sshd@5-10.0.0.124:22-10.0.0.1:49690.service: Deactivated successfully. Oct 30 13:26:55.926391 systemd[1]: session-6.scope: Deactivated successfully. Oct 30 13:26:55.927219 systemd-logind[1593]: Session 6 logged out. Waiting for processes to exit. Oct 30 13:26:55.930043 systemd[1]: Started sshd@6-10.0.0.124:22-10.0.0.1:49698.service - OpenSSH per-connection server daemon (10.0.0.1:49698). Oct 30 13:26:55.930837 systemd-logind[1593]: Removed session 6. Oct 30 13:26:55.985146 sshd[1823]: Accepted publickey for core from 10.0.0.1 port 49698 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:26:55.986689 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:26:55.991308 systemd-logind[1593]: New session 7 of user core. Oct 30 13:26:56.001163 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 30 13:26:56.014785 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 30 13:26:56.015115 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 13:26:56.818553 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 30 13:26:56.839305 (dockerd)[1847]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 30 13:26:57.312859 dockerd[1847]: time="2025-10-30T13:26:57.312746161Z" level=info msg="Starting up" Oct 30 13:26:57.313909 dockerd[1847]: time="2025-10-30T13:26:57.313881851Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 30 13:26:57.341321 dockerd[1847]: time="2025-10-30T13:26:57.341252170Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 30 13:26:57.399480 dockerd[1847]: time="2025-10-30T13:26:57.399358079Z" level=info msg="Loading containers: start." Oct 30 13:26:57.412032 kernel: Initializing XFRM netlink socket Oct 30 13:26:57.683699 systemd-networkd[1525]: docker0: Link UP Oct 30 13:26:57.687955 dockerd[1847]: time="2025-10-30T13:26:57.687920313Z" level=info msg="Loading containers: done." Oct 30 13:26:57.729042 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2042920095-merged.mount: Deactivated successfully. 
Oct 30 13:26:57.730876 dockerd[1847]: time="2025-10-30T13:26:57.730801184Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 30 13:26:57.730974 dockerd[1847]: time="2025-10-30T13:26:57.730959091Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 30 13:26:57.731078 dockerd[1847]: time="2025-10-30T13:26:57.731063496Z" level=info msg="Initializing buildkit" Oct 30 13:26:57.773482 dockerd[1847]: time="2025-10-30T13:26:57.773435554Z" level=info msg="Completed buildkit initialization" Oct 30 13:26:57.782241 dockerd[1847]: time="2025-10-30T13:26:57.782209355Z" level=info msg="Daemon has completed initialization" Oct 30 13:26:57.782316 dockerd[1847]: time="2025-10-30T13:26:57.782262234Z" level=info msg="API listen on /run/docker.sock" Oct 30 13:26:57.782610 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 30 13:26:58.679277 containerd[1614]: time="2025-10-30T13:26:58.679212626Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 30 13:26:58.927941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 30 13:26:58.929691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:26:59.214625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:26:59.219783 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 13:26:59.439118 kubelet[2075]: E1030 13:26:59.439050 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 13:26:59.445966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 13:26:59.446274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 13:26:59.446839 systemd[1]: kubelet.service: Consumed 350ms CPU time, 111.2M memory peak. Oct 30 13:26:59.634438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4259611913.mount: Deactivated successfully. 
Oct 30 13:27:01.029104 containerd[1614]: time="2025-10-30T13:27:01.028992332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:01.029908 containerd[1614]: time="2025-10-30T13:27:01.029827959Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Oct 30 13:27:01.032523 containerd[1614]: time="2025-10-30T13:27:01.031539508Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:01.035817 containerd[1614]: time="2025-10-30T13:27:01.035785078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:01.036694 containerd[1614]: time="2025-10-30T13:27:01.036649489Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.357388413s" Oct 30 13:27:01.036694 containerd[1614]: time="2025-10-30T13:27:01.036691959Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 30 13:27:01.037362 containerd[1614]: time="2025-10-30T13:27:01.037328021Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 30 13:27:02.323808 containerd[1614]: time="2025-10-30T13:27:02.323736818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:02.324629 containerd[1614]: time="2025-10-30T13:27:02.324564600Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Oct 30 13:27:02.326117 containerd[1614]: time="2025-10-30T13:27:02.326076995Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:02.328960 containerd[1614]: time="2025-10-30T13:27:02.328917581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:02.329973 containerd[1614]: time="2025-10-30T13:27:02.329923137Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.292562204s" Oct 30 13:27:02.329973 containerd[1614]: time="2025-10-30T13:27:02.329960477Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 30 13:27:02.330552 
containerd[1614]: time="2025-10-30T13:27:02.330519826Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 30 13:27:03.862178 containerd[1614]: time="2025-10-30T13:27:03.862071294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:03.862989 containerd[1614]: time="2025-10-30T13:27:03.862882225Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Oct 30 13:27:03.864291 containerd[1614]: time="2025-10-30T13:27:03.864232246Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:03.867189 containerd[1614]: time="2025-10-30T13:27:03.867148644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:03.868542 containerd[1614]: time="2025-10-30T13:27:03.868480361Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.537925931s" Oct 30 13:27:03.868542 containerd[1614]: time="2025-10-30T13:27:03.868523642Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 30 13:27:03.869101 containerd[1614]: time="2025-10-30T13:27:03.869071420Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 30 13:27:04.806191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2030716635.mount: Deactivated successfully. 
Oct 30 13:27:05.471495 containerd[1614]: time="2025-10-30T13:27:05.471403732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:05.472298 containerd[1614]: time="2025-10-30T13:27:05.472252774Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Oct 30 13:27:05.473554 containerd[1614]: time="2025-10-30T13:27:05.473522124Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:05.475534 containerd[1614]: time="2025-10-30T13:27:05.475500444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:05.476097 containerd[1614]: time="2025-10-30T13:27:05.476049052Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.606944731s" Oct 30 13:27:05.476134 containerd[1614]: time="2025-10-30T13:27:05.476096932Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 30 13:27:05.476803 containerd[1614]: time="2025-10-30T13:27:05.476588323Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 30 13:27:06.310096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1863949416.mount: Deactivated successfully. 
Oct 30 13:27:06.965691 containerd[1614]: time="2025-10-30T13:27:06.965620729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:06.966498 containerd[1614]: time="2025-10-30T13:27:06.966443271Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Oct 30 13:27:06.967661 containerd[1614]: time="2025-10-30T13:27:06.967625157Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:06.970082 containerd[1614]: time="2025-10-30T13:27:06.970046788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:06.971054 containerd[1614]: time="2025-10-30T13:27:06.971018600Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.494395983s" Oct 30 13:27:06.971054 containerd[1614]: time="2025-10-30T13:27:06.971053144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 30 13:27:06.971660 containerd[1614]: time="2025-10-30T13:27:06.971616651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 30 13:27:07.405305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2806814688.mount: Deactivated successfully. 
Oct 30 13:27:07.412092 containerd[1614]: time="2025-10-30T13:27:07.412034364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 13:27:07.412852 containerd[1614]: time="2025-10-30T13:27:07.412808526Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 30 13:27:07.414171 containerd[1614]: time="2025-10-30T13:27:07.414125705Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 13:27:07.417126 containerd[1614]: time="2025-10-30T13:27:07.417091987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 13:27:07.418115 containerd[1614]: time="2025-10-30T13:27:07.418062577Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 446.414407ms" Oct 30 13:27:07.418159 containerd[1614]: time="2025-10-30T13:27:07.418114665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 30 13:27:07.418871 containerd[1614]: time="2025-10-30T13:27:07.418667872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 30 13:27:08.452376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876403986.mount: Deactivated successfully. Oct 30 13:27:09.678172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 30 13:27:09.682167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:27:09.958260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:27:09.968627 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 13:27:10.053436 kubelet[2274]: E1030 13:27:10.053354 2274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 13:27:10.057441 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 13:27:10.057642 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 13:27:10.058147 systemd[1]: kubelet.service: Consumed 320ms CPU time, 111.2M memory peak. 
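The kubelet exits here because /var/lib/kubelet/config.yaml has not been written yet (kubeadm creates it during init/join), so systemd keeps restarting the unit and the restart counter climbs. A minimal sketch of the same existence check, assuming only the path quoted in the error above:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// Path from the kubelet error above; until kubeadm writes it, the unit
	// keeps exiting with status 1 and systemd schedules another restart.
	const path = "/var/lib/kubelet/config.yaml"
	_, err := os.Stat(path)
	switch {
	case err == nil:
		fmt.Println("kubelet config present:", path)
	case errors.Is(err, fs.ErrNotExist):
		fmt.Println("kubelet config missing, expect kubelet.service to keep restarting:", path)
	default:
		fmt.Println("stat error:", err)
	}
}
```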
Oct 30 13:27:11.756991 containerd[1614]: time="2025-10-30T13:27:11.756902932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:11.757604 containerd[1614]: time="2025-10-30T13:27:11.757557138Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Oct 30 13:27:11.758811 containerd[1614]: time="2025-10-30T13:27:11.758754804Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:11.761550 containerd[1614]: time="2025-10-30T13:27:11.761498057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:11.762492 containerd[1614]: time="2025-10-30T13:27:11.762446015Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.343744099s" Oct 30 13:27:11.762492 containerd[1614]: time="2025-10-30T13:27:11.762483445Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 30 13:27:14.462972 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:27:14.463171 systemd[1]: kubelet.service: Consumed 320ms CPU time, 111.2M memory peak. Oct 30 13:27:14.465510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:27:14.572055 systemd[1]: Reload requested from client PID 2314 ('systemctl') (unit session-7.scope)... Oct 30 13:27:14.572081 systemd[1]: Reloading... Oct 30 13:27:14.662313 zram_generator::config[2358]: No configuration found. Oct 30 13:27:14.957959 systemd[1]: Reloading finished in 385 ms. Oct 30 13:27:15.032110 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 30 13:27:15.032219 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 30 13:27:15.032613 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:27:15.033075 systemd[1]: kubelet.service: Consumed 164ms CPU time, 98.2M memory peak. Oct 30 13:27:15.035300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:27:15.339125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:27:15.343546 (kubelet)[2406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 13:27:15.427491 kubelet[2406]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 13:27:15.427491 kubelet[2406]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 13:27:15.427491 kubelet[2406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 13:27:15.428119 kubelet[2406]: I1030 13:27:15.427543 2406 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 13:27:15.722321 kubelet[2406]: I1030 13:27:15.722279 2406 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 30 13:27:15.722321 kubelet[2406]: I1030 13:27:15.722307 2406 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 13:27:15.722566 kubelet[2406]: I1030 13:27:15.722542 2406 server.go:954] "Client rotation is on, will bootstrap in background" Oct 30 13:27:15.751896 kubelet[2406]: E1030 13:27:15.750699 2406 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:27:15.752322 kubelet[2406]: I1030 13:27:15.752295 2406 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 13:27:15.760272 kubelet[2406]: I1030 13:27:15.760229 2406 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 13:27:15.766834 kubelet[2406]: I1030 13:27:15.766796 2406 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 30 13:27:15.767903 kubelet[2406]: I1030 13:27:15.767845 2406 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 13:27:15.768113 kubelet[2406]: I1030 13:27:15.767886 2406 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 13:27:15.768276 kubelet[2406]: I1030 13:27:15.768120 
2406 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 13:27:15.768276 kubelet[2406]: I1030 13:27:15.768130 2406 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 13:27:15.768350 kubelet[2406]: I1030 13:27:15.768294 2406 state_mem.go:36] "Initialized new in-memory state store" Oct 30 13:27:15.771281 kubelet[2406]: I1030 13:27:15.771245 2406 kubelet.go:446] "Attempting to sync node with API server" Oct 30 13:27:15.771281 kubelet[2406]: I1030 13:27:15.771281 2406 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 13:27:15.771367 kubelet[2406]: I1030 13:27:15.771307 2406 kubelet.go:352] "Adding apiserver pod source" Oct 30 13:27:15.771367 kubelet[2406]: I1030 13:27:15.771323 2406 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 13:27:15.774431 kubelet[2406]: I1030 13:27:15.774385 2406 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 13:27:15.774834 kubelet[2406]: I1030 13:27:15.774790 2406 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 13:27:15.774901 kubelet[2406]: W1030 13:27:15.774838 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Oct 30 13:27:15.774945 kubelet[2406]: E1030 13:27:15.774900 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:27:15.775383 kubelet[2406]: W1030 13:27:15.775322 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Oct 30 13:27:15.775383 kubelet[2406]: E1030 13:27:15.775371 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:27:15.776201 kubelet[2406]: W1030 13:27:15.776178 2406 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 30 13:27:15.778345 kubelet[2406]: I1030 13:27:15.778307 2406 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 13:27:15.778847 kubelet[2406]: I1030 13:27:15.778822 2406 server.go:1287] "Started kubelet" Oct 30 13:27:15.778910 kubelet[2406]: I1030 13:27:15.778891 2406 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 13:27:15.782101 kubelet[2406]: I1030 13:27:15.782065 2406 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 13:27:15.785034 kubelet[2406]: E1030 13:27:15.784853 2406 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 13:27:15.785576 kubelet[2406]: I1030 13:27:15.785547 2406 server.go:479] "Adding debug handlers to kubelet server" Oct 30 13:27:15.786174 kubelet[2406]: I1030 13:27:15.785754 2406 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 13:27:15.788716 kubelet[2406]: I1030 13:27:15.788507 2406 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 13:27:15.788769 kubelet[2406]: E1030 13:27:15.788750 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:15.788891 kubelet[2406]: E1030 13:27:15.787596 2406 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187347cf80ecbd15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-30 13:27:15.778796821 +0000 UTC m=+0.396617980,LastTimestamp:2025-10-30 13:27:15.778796821 +0000 UTC m=+0.396617980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 30 13:27:15.789341 kubelet[2406]: I1030 13:27:15.789312 2406 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 13:27:15.789402 kubelet[2406]: I1030 13:27:15.789383 2406 reconciler.go:26] "Reconciler: start to sync state" Oct 30 13:27:15.789746 kubelet[2406]: W1030 13:27:15.789691 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Oct 30 13:27:15.789746 kubelet[2406]: E1030 13:27:15.789742 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:27:15.789840 kubelet[2406]: E1030 13:27:15.789823 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="200ms" Oct 30 13:27:15.790344 kubelet[2406]: I1030 13:27:15.790086 2406 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 13:27:15.790549 kubelet[2406]: I1030 13:27:15.790524 2406 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 13:27:15.791638 kubelet[2406]: I1030 13:27:15.791600 2406 factory.go:221] Registration of the containerd container factory successfully Oct 30 13:27:15.791638 kubelet[2406]: I1030 13:27:15.791620 2406 factory.go:221] Registration of the systemd container factory successfully Oct 30 13:27:15.791747 kubelet[2406]: 
I1030 13:27:15.791724 2406 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 13:27:15.804028 kubelet[2406]: I1030 13:27:15.803930 2406 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 13:27:15.805426 kubelet[2406]: I1030 13:27:15.805377 2406 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 13:27:15.805426 kubelet[2406]: I1030 13:27:15.805416 2406 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 13:27:15.805592 kubelet[2406]: I1030 13:27:15.805451 2406 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 13:27:15.805592 kubelet[2406]: I1030 13:27:15.805463 2406 kubelet.go:2382] "Starting kubelet main sync loop" Oct 30 13:27:15.805592 kubelet[2406]: E1030 13:27:15.805518 2406 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 13:27:15.807629 kubelet[2406]: W1030 13:27:15.807586 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Oct 30 13:27:15.807728 kubelet[2406]: E1030 13:27:15.807639 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:27:15.808416 kubelet[2406]: I1030 13:27:15.808398 2406 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 13:27:15.808455 kubelet[2406]: I1030 13:27:15.808435 2406 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 13:27:15.808486 kubelet[2406]: I1030 13:27:15.808455 2406 state_mem.go:36] "Initialized new in-memory state store" Oct 30 13:27:15.889105 kubelet[2406]: E1030 13:27:15.889068 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:15.906447 kubelet[2406]: E1030 13:27:15.906401 2406 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 13:27:15.989900 kubelet[2406]: E1030 13:27:15.989718 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:15.991512 kubelet[2406]: E1030 13:27:15.991469 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="400ms" Oct 30 13:27:16.090704 kubelet[2406]: E1030 13:27:16.090631 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:16.106967 kubelet[2406]: E1030 13:27:16.106923 2406 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 13:27:16.191285 kubelet[2406]: E1030 13:27:16.191223 2406 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"localhost\" not found" Oct 30 13:27:16.292391 kubelet[2406]: E1030 13:27:16.292294 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:16.392354 kubelet[2406]: E1030 13:27:16.392298 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="800ms" Oct 30 13:27:16.392433 kubelet[2406]: E1030 13:27:16.392375 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:16.492811 kubelet[2406]: E1030 13:27:16.492733 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:16.508053 kubelet[2406]: E1030 13:27:16.507939 2406 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 13:27:16.593797 kubelet[2406]: E1030 13:27:16.593563 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:16.694380 kubelet[2406]: E1030 13:27:16.694275 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:16.739258 kubelet[2406]: W1030 13:27:16.739186 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Oct 30 13:27:16.739392 kubelet[2406]: E1030 13:27:16.739263 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:27:16.795018 kubelet[2406]: E1030 13:27:16.794938 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:16.806684 kubelet[2406]: W1030 13:27:16.806609 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Oct 30 13:27:16.806745 kubelet[2406]: E1030 13:27:16.806710 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:27:16.895503 kubelet[2406]: E1030 13:27:16.895308 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:16.995891 kubelet[2406]: E1030 13:27:16.995821 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:17.004512 kubelet[2406]: W1030 13:27:17.004424 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Oct 30 13:27:17.004512 kubelet[2406]: E1030 13:27:17.004498 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:27:17.037129 kubelet[2406]: I1030 13:27:17.037095 2406 policy_none.go:49] "None policy: Start" Oct 30 13:27:17.037129 kubelet[2406]: I1030 13:27:17.037129 2406 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 13:27:17.037257 kubelet[2406]: I1030 13:27:17.037148 2406 state_mem.go:35] "Initializing new in-memory state store" Oct 30 13:27:17.090076 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 13:27:17.096858 kubelet[2406]: E1030 13:27:17.096819 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:17.105438 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 13:27:17.109082 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 30 13:27:17.123134 kubelet[2406]: I1030 13:27:17.123048 2406 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 30 13:27:17.123431 kubelet[2406]: I1030 13:27:17.123349 2406 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 13:27:17.123431 kubelet[2406]: I1030 13:27:17.123373 2406 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 13:27:17.124950 kubelet[2406]: E1030 13:27:17.124573 2406 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 30 13:27:17.124950 kubelet[2406]: E1030 13:27:17.124684 2406 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 30 13:27:17.126470 kubelet[2406]: I1030 13:27:17.126160 2406 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 13:27:17.193805 kubelet[2406]: E1030 13:27:17.193755 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="1.6s" Oct 30 13:27:17.228263 kubelet[2406]: I1030 13:27:17.228231 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:27:17.228638 kubelet[2406]: E1030 13:27:17.228602 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Oct 30 13:27:17.276428 kubelet[2406]: W1030 13:27:17.276385 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Oct 30 13:27:17.276489 kubelet[2406]: E1030 13:27:17.276425 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:27:17.317857 systemd[1]: Created slice kubepods-burstable-pod48db659d74dcefe8108370ae2da460a3.slice - libcontainer container kubepods-burstable-pod48db659d74dcefe8108370ae2da460a3.slice. Oct 30 13:27:17.332868 kubelet[2406]: E1030 13:27:17.332828 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:27:17.334774 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 30 13:27:17.354387 kubelet[2406]: E1030 13:27:17.354356 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:27:17.357371 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
Oct 30 13:27:17.359310 kubelet[2406]: E1030 13:27:17.359289 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:27:17.398698 kubelet[2406]: I1030 13:27:17.398649 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48db659d74dcefe8108370ae2da460a3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"48db659d74dcefe8108370ae2da460a3\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:27:17.398698 kubelet[2406]: I1030 13:27:17.398692 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:17.398791 kubelet[2406]: I1030 13:27:17.398712 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:17.398791 kubelet[2406]: I1030 13:27:17.398762 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 30 13:27:17.398841 kubelet[2406]: I1030 13:27:17.398809 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48db659d74dcefe8108370ae2da460a3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"48db659d74dcefe8108370ae2da460a3\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:27:17.398866 kubelet[2406]: I1030 13:27:17.398839 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48db659d74dcefe8108370ae2da460a3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"48db659d74dcefe8108370ae2da460a3\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:27:17.398866 kubelet[2406]: I1030 13:27:17.398864 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:17.398915 kubelet[2406]: I1030 13:27:17.398882 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:17.398915 kubelet[2406]: I1030 13:27:17.398899 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:17.429690 kubelet[2406]: I1030 13:27:17.429638 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:27:17.429957 kubelet[2406]: E1030 13:27:17.429924 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Oct 30 13:27:17.634340 kubelet[2406]: E1030 13:27:17.634188 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:17.635065 containerd[1614]: time="2025-10-30T13:27:17.634936063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:48db659d74dcefe8108370ae2da460a3,Namespace:kube-system,Attempt:0,}" Oct 30 13:27:17.655355 kubelet[2406]: E1030 13:27:17.655309 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:17.655924 containerd[1614]: time="2025-10-30T13:27:17.655892154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 30 13:27:17.660261 kubelet[2406]: E1030 13:27:17.660166 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:17.660574 containerd[1614]: time="2025-10-30T13:27:17.660539501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 30 13:27:17.671715 containerd[1614]: time="2025-10-30T13:27:17.671663647Z" level=info msg="connecting to shim e7a919ef76351718eb8b0ef31fdec77a2ad8f870e94c5ee63217ab5f0a90c657" address="unix:///run/containerd/s/6b3269aa46ecccc80a2a70d1732d8522c5e9730b49e1a0f59235ceb1f958b570" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:27:17.690032 containerd[1614]: time="2025-10-30T13:27:17.689958522Z" level=info msg="connecting to shim 1f69cb98106bdfc5297b850df68033102ce1bee2339ef10595720e12afad609c" address="unix:///run/containerd/s/d17bc4306a128df7ac9bc899dd2c5737616deb2c38df9c75e022f88d084c78ef" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:27:17.706270 containerd[1614]: time="2025-10-30T13:27:17.706204034Z" level=info msg="connecting to shim 9b0c61eb7a8e7de39ab376515dd210dc8a8510b8bd12f0a6e0a3b3c8f0d4cf37" address="unix:///run/containerd/s/af5420a218e400dc128c85f2d6307fa0a660f26622807c7dfaebde8be9b9173e" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:27:17.726769 systemd[1]: Started cri-containerd-e7a919ef76351718eb8b0ef31fdec77a2ad8f870e94c5ee63217ab5f0a90c657.scope - libcontainer container e7a919ef76351718eb8b0ef31fdec77a2ad8f870e94c5ee63217ab5f0a90c657. Oct 30 13:27:17.734180 systemd[1]: Started cri-containerd-1f69cb98106bdfc5297b850df68033102ce1bee2339ef10595720e12afad609c.scope - libcontainer container 1f69cb98106bdfc5297b850df68033102ce1bee2339ef10595720e12afad609c. 
Oct 30 13:27:17.774259 systemd[1]: Started cri-containerd-9b0c61eb7a8e7de39ab376515dd210dc8a8510b8bd12f0a6e0a3b3c8f0d4cf37.scope - libcontainer container 9b0c61eb7a8e7de39ab376515dd210dc8a8510b8bd12f0a6e0a3b3c8f0d4cf37. Oct 30 13:27:17.814036 containerd[1614]: time="2025-10-30T13:27:17.813970679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:48db659d74dcefe8108370ae2da460a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7a919ef76351718eb8b0ef31fdec77a2ad8f870e94c5ee63217ab5f0a90c657\"" Oct 30 13:27:17.815469 kubelet[2406]: E1030 13:27:17.815396 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:17.818112 containerd[1614]: time="2025-10-30T13:27:17.818083451Z" level=info msg="CreateContainer within sandbox \"e7a919ef76351718eb8b0ef31fdec77a2ad8f870e94c5ee63217ab5f0a90c657\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 13:27:17.822409 containerd[1614]: time="2025-10-30T13:27:17.822371329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f69cb98106bdfc5297b850df68033102ce1bee2339ef10595720e12afad609c\"" Oct 30 13:27:17.824557 kubelet[2406]: E1030 13:27:17.824524 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:17.827872 containerd[1614]: time="2025-10-30T13:27:17.827832488Z" level=info msg="CreateContainer within sandbox \"1f69cb98106bdfc5297b850df68033102ce1bee2339ef10595720e12afad609c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 13:27:17.833099 containerd[1614]: time="2025-10-30T13:27:17.833063365Z" level=info msg="Container f5cb3a2dbba8d67231a78846b3066977e257d4065b40eccaa3aee49f5c5ba77b: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:27:17.834496 containerd[1614]: time="2025-10-30T13:27:17.834458953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b0c61eb7a8e7de39ab376515dd210dc8a8510b8bd12f0a6e0a3b3c8f0d4cf37\"" Oct 30 13:27:17.834759 kubelet[2406]: I1030 13:27:17.834727 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:27:17.835249 kubelet[2406]: E1030 13:27:17.835211 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:17.835578 kubelet[2406]: E1030 13:27:17.835538 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Oct 30 13:27:17.837418 containerd[1614]: time="2025-10-30T13:27:17.837379407Z" level=info msg="CreateContainer within sandbox \"9b0c61eb7a8e7de39ab376515dd210dc8a8510b8bd12f0a6e0a3b3c8f0d4cf37\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 13:27:17.841451 kubelet[2406]: E1030 13:27:17.841419 2406 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot 
create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Oct 30 13:27:17.842676 containerd[1614]: time="2025-10-30T13:27:17.842635542Z" level=info msg="Container 059f575460fe06ccdc8011c1ab0421cc132f4b5fd2b90bccb631fb5029a8a4f3: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:27:17.846440 containerd[1614]: time="2025-10-30T13:27:17.846393634Z" level=info msg="CreateContainer within sandbox \"e7a919ef76351718eb8b0ef31fdec77a2ad8f870e94c5ee63217ab5f0a90c657\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f5cb3a2dbba8d67231a78846b3066977e257d4065b40eccaa3aee49f5c5ba77b\"" Oct 30 13:27:17.847039 containerd[1614]: time="2025-10-30T13:27:17.846978536Z" level=info msg="StartContainer for \"f5cb3a2dbba8d67231a78846b3066977e257d4065b40eccaa3aee49f5c5ba77b\"" Oct 30 13:27:17.848304 containerd[1614]: time="2025-10-30T13:27:17.848274162Z" level=info msg="connecting to shim f5cb3a2dbba8d67231a78846b3066977e257d4065b40eccaa3aee49f5c5ba77b" address="unix:///run/containerd/s/6b3269aa46ecccc80a2a70d1732d8522c5e9730b49e1a0f59235ceb1f958b570" protocol=ttrpc version=3 Oct 30 13:27:17.849478 containerd[1614]: time="2025-10-30T13:27:17.849445379Z" level=info msg="CreateContainer within sandbox \"1f69cb98106bdfc5297b850df68033102ce1bee2339ef10595720e12afad609c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"059f575460fe06ccdc8011c1ab0421cc132f4b5fd2b90bccb631fb5029a8a4f3\"" Oct 30 13:27:17.850794 containerd[1614]: time="2025-10-30T13:27:17.850758750Z" level=info msg="StartContainer for \"059f575460fe06ccdc8011c1ab0421cc132f4b5fd2b90bccb631fb5029a8a4f3\"" Oct 30 13:27:17.855022 containerd[1614]: time="2025-10-30T13:27:17.854552719Z" level=info msg="connecting to shim 059f575460fe06ccdc8011c1ab0421cc132f4b5fd2b90bccb631fb5029a8a4f3" address="unix:///run/containerd/s/d17bc4306a128df7ac9bc899dd2c5737616deb2c38df9c75e022f88d084c78ef" protocol=ttrpc version=3 Oct 30 13:27:17.857715 containerd[1614]: time="2025-10-30T13:27:17.857689268Z" level=info msg="Container 655e5c086b47fb870d5e5613032ef42bc0dd3b51f9e8817b75274b3a4a48a236: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:27:17.869534 containerd[1614]: time="2025-10-30T13:27:17.869493799Z" level=info msg="CreateContainer within sandbox \"9b0c61eb7a8e7de39ab376515dd210dc8a8510b8bd12f0a6e0a3b3c8f0d4cf37\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"655e5c086b47fb870d5e5613032ef42bc0dd3b51f9e8817b75274b3a4a48a236\"" Oct 30 13:27:17.870106 containerd[1614]: time="2025-10-30T13:27:17.870076397Z" level=info msg="StartContainer for \"655e5c086b47fb870d5e5613032ef42bc0dd3b51f9e8817b75274b3a4a48a236\"" Oct 30 13:27:17.871244 containerd[1614]: time="2025-10-30T13:27:17.871208179Z" level=info msg="connecting to shim 655e5c086b47fb870d5e5613032ef42bc0dd3b51f9e8817b75274b3a4a48a236" address="unix:///run/containerd/s/af5420a218e400dc128c85f2d6307fa0a660f26622807c7dfaebde8be9b9173e" protocol=ttrpc version=3 Oct 30 13:27:17.874177 systemd[1]: Started cri-containerd-f5cb3a2dbba8d67231a78846b3066977e257d4065b40eccaa3aee49f5c5ba77b.scope - libcontainer container f5cb3a2dbba8d67231a78846b3066977e257d4065b40eccaa3aee49f5c5ba77b. 
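Every client-go watch and the CSR post above fail with "connection refused" against 10.0.0.124:6443 because the kube-apiserver static pod is only now being started. A minimal reachability sketch against that address as it appears in the log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the kubelet errors above; until the kube-apiserver
	// container is up, every dial fails with "connection refused".
	addr := "10.0.0.124:6443"
	for i := 0; i < 3; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable yet:", err)
			time.Sleep(time.Second)
			continue
		}
		conn.Close()
		fmt.Println("apiserver reachable at", addr)
		return
	}
}
```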
Oct 30 13:27:17.880037 systemd[1]: Started cri-containerd-059f575460fe06ccdc8011c1ab0421cc132f4b5fd2b90bccb631fb5029a8a4f3.scope - libcontainer container 059f575460fe06ccdc8011c1ab0421cc132f4b5fd2b90bccb631fb5029a8a4f3. Oct 30 13:27:17.893132 systemd[1]: Started cri-containerd-655e5c086b47fb870d5e5613032ef42bc0dd3b51f9e8817b75274b3a4a48a236.scope - libcontainer container 655e5c086b47fb870d5e5613032ef42bc0dd3b51f9e8817b75274b3a4a48a236. Oct 30 13:27:17.944876 containerd[1614]: time="2025-10-30T13:27:17.944812423Z" level=info msg="StartContainer for \"f5cb3a2dbba8d67231a78846b3066977e257d4065b40eccaa3aee49f5c5ba77b\" returns successfully" Oct 30 13:27:17.948140 containerd[1614]: time="2025-10-30T13:27:17.948089190Z" level=info msg="StartContainer for \"059f575460fe06ccdc8011c1ab0421cc132f4b5fd2b90bccb631fb5029a8a4f3\" returns successfully" Oct 30 13:27:17.963506 containerd[1614]: time="2025-10-30T13:27:17.962464895Z" level=info msg="StartContainer for \"655e5c086b47fb870d5e5613032ef42bc0dd3b51f9e8817b75274b3a4a48a236\" returns successfully" Oct 30 13:27:18.638135 kubelet[2406]: I1030 13:27:18.637680 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:27:18.927092 kubelet[2406]: E1030 13:27:18.926930 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:27:18.927214 kubelet[2406]: E1030 13:27:18.927129 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:18.930019 kubelet[2406]: E1030 13:27:18.929963 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:27:18.930502 kubelet[2406]: E1030 13:27:18.930308 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:18.931863 kubelet[2406]: E1030 13:27:18.931844 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:27:18.932089 kubelet[2406]: E1030 13:27:18.932031 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:19.589417 kubelet[2406]: E1030 13:27:19.589349 2406 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 30 13:27:19.643885 kubelet[2406]: I1030 13:27:19.643839 2406 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 13:27:19.689232 kubelet[2406]: I1030 13:27:19.689161 2406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 13:27:19.693203 kubelet[2406]: E1030 13:27:19.693147 2406 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 30 13:27:19.693203 kubelet[2406]: I1030 13:27:19.693193 2406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:19.694790 kubelet[2406]: E1030 
13:27:19.694749 2406 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:19.694790 kubelet[2406]: I1030 13:27:19.694780 2406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 13:27:19.696084 kubelet[2406]: E1030 13:27:19.696062 2406 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 30 13:27:19.921806 kubelet[2406]: I1030 13:27:19.921650 2406 apiserver.go:52] "Watching apiserver" Oct 30 13:27:19.933091 kubelet[2406]: I1030 13:27:19.933069 2406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 13:27:19.933198 kubelet[2406]: I1030 13:27:19.933169 2406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 13:27:19.936461 kubelet[2406]: E1030 13:27:19.936424 2406 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 30 13:27:19.936661 kubelet[2406]: E1030 13:27:19.936559 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:19.936850 kubelet[2406]: E1030 13:27:19.936811 2406 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 30 13:27:19.937042 kubelet[2406]: E1030 13:27:19.936918 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:19.989656 kubelet[2406]: I1030 13:27:19.989625 2406 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 13:27:20.934403 kubelet[2406]: I1030 13:27:20.934366 2406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 13:27:20.991267 kubelet[2406]: E1030 13:27:20.991174 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:21.520777 kubelet[2406]: I1030 13:27:21.520738 2406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:21.525699 kubelet[2406]: E1030 13:27:21.525673 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:21.791106 systemd[1]: Reload requested from client PID 2683 ('systemctl') (unit session-7.scope)... Oct 30 13:27:21.791126 systemd[1]: Reloading... Oct 30 13:27:21.879355 zram_generator::config[2730]: No configuration found. 
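The repeated "Nameserver limits exceeded" warnings mean the host resolv.conf listed more nameservers than the resolver limit of three, so only the first three were applied (1.1.1.1 1.0.0.1 8.8.8.8). A sketch of that truncation; the fourth entry below is a made-up placeholder, since the original host list is not in the log:

```go
package main

import "fmt"

func main() {
	// The applied line in the log has exactly three entries; anything beyond
	// the resolver limit is dropped. The last entry here is a placeholder.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}
	const maxNameservers = 3
	applied := host
	if len(applied) > maxNameservers {
		applied = applied[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", applied)
}
```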
Oct 30 13:27:21.936651 kubelet[2406]: E1030 13:27:21.936611 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:21.937180 kubelet[2406]: E1030 13:27:21.936854 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:22.113202 systemd[1]: Reloading finished in 321 ms. Oct 30 13:27:22.136323 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:27:22.148248 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 13:27:22.148585 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:27:22.148647 systemd[1]: kubelet.service: Consumed 980ms CPU time, 131M memory peak. Oct 30 13:27:22.150810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:27:22.384039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:27:22.389488 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 13:27:22.445178 kubelet[2773]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 13:27:22.445178 kubelet[2773]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 13:27:22.445178 kubelet[2773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 13:27:22.445640 kubelet[2773]: I1030 13:27:22.445249 2773 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 13:27:22.452521 kubelet[2773]: I1030 13:27:22.452466 2773 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 30 13:27:22.452521 kubelet[2773]: I1030 13:27:22.452511 2773 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 13:27:22.453053 kubelet[2773]: I1030 13:27:22.453025 2773 server.go:954] "Client rotation is on, will bootstrap in background" Oct 30 13:27:22.454599 kubelet[2773]: I1030 13:27:22.454565 2773 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 30 13:27:22.457461 kubelet[2773]: I1030 13:27:22.457404 2773 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 13:27:22.465210 kubelet[2773]: I1030 13:27:22.465174 2773 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 13:27:22.471566 kubelet[2773]: I1030 13:27:22.471478 2773 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 13:27:22.473419 kubelet[2773]: I1030 13:27:22.473353 2773 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 13:27:22.473619 kubelet[2773]: I1030 13:27:22.473409 2773 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 13:27:22.474070 kubelet[2773]: I1030 13:27:22.474038 2773 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 13:27:22.474070 kubelet[2773]: I1030 13:27:22.474058 2773 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 13:27:22.474171 kubelet[2773]: I1030 13:27:22.474126 2773 state_mem.go:36] "Initialized new in-memory state store" Oct 30 13:27:22.474360 kubelet[2773]: I1030 13:27:22.474339 2773 kubelet.go:446] "Attempting to sync node with API server" Oct 30 13:27:22.474401 kubelet[2773]: I1030 13:27:22.474368 2773 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 13:27:22.474401 kubelet[2773]: I1030 13:27:22.474397 2773 kubelet.go:352] "Adding apiserver pod source" Oct 30 13:27:22.474467 kubelet[2773]: I1030 13:27:22.474411 2773 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 13:27:22.475877 kubelet[2773]: I1030 13:27:22.475851 2773 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 13:27:22.476271 kubelet[2773]: I1030 13:27:22.476245 2773 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 13:27:22.476769 kubelet[2773]: I1030 13:27:22.476719 2773 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 13:27:22.476769 kubelet[2773]: I1030 13:27:22.476764 2773 server.go:1287] "Started kubelet" Oct 30 13:27:22.476989 kubelet[2773]: I1030 13:27:22.476930 2773 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 13:27:22.478354 kubelet[2773]: I1030 13:27:22.478332 2773 server.go:479] "Adding debug 
handlers to kubelet server" Oct 30 13:27:22.478530 kubelet[2773]: I1030 13:27:22.478472 2773 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 13:27:22.479693 kubelet[2773]: I1030 13:27:22.479667 2773 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 13:27:22.506306 kubelet[2773]: I1030 13:27:22.506270 2773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 13:27:22.511422 kubelet[2773]: I1030 13:27:22.506818 2773 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 13:27:22.511813 kubelet[2773]: I1030 13:27:22.511800 2773 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 13:27:22.513071 kubelet[2773]: E1030 13:27:22.513052 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:27:22.514914 kubelet[2773]: I1030 13:27:22.514857 2773 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 13:27:22.515087 kubelet[2773]: I1030 13:27:22.515056 2773 reconciler.go:26] "Reconciler: start to sync state" Oct 30 13:27:22.516541 kubelet[2773]: I1030 13:27:22.516469 2773 factory.go:221] Registration of the systemd container factory successfully Oct 30 13:27:22.516641 kubelet[2773]: I1030 13:27:22.516609 2773 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 13:27:22.518775 kubelet[2773]: E1030 13:27:22.518756 2773 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 13:27:22.523372 kubelet[2773]: I1030 13:27:22.523283 2773 factory.go:221] Registration of the containerd container factory successfully Oct 30 13:27:22.532385 kubelet[2773]: I1030 13:27:22.532323 2773 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 13:27:22.533721 kubelet[2773]: I1030 13:27:22.533688 2773 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 13:27:22.533775 kubelet[2773]: I1030 13:27:22.533727 2773 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 13:27:22.533775 kubelet[2773]: I1030 13:27:22.533759 2773 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
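The container-manager line above dumps the hard-eviction thresholds as JSON inside nodeConfig. A sketch that unmarshals two of those entries into minimal Go structs (not the full kubelet types) to read back, for example, the memory.available 100Mi threshold:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// threshold matches only the fields visible in the nodeConfig dump above.
type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"`
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
	GracePeriod int64 `json:"GracePeriod"`
}

func main() {
	// Two entries copied from the HardEvictionThresholds fragment in the log.
	raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0},
	         {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0}]`
	var ths []threshold
	if err := json.Unmarshal([]byte(raw), &ths); err != nil {
		panic(err)
	}
	for _, t := range ths {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```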
Oct 30 13:27:22.533775 kubelet[2773]: I1030 13:27:22.533767 2773 kubelet.go:2382] "Starting kubelet main sync loop" Oct 30 13:27:22.533851 kubelet[2773]: E1030 13:27:22.533817 2773 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 13:27:22.564092 kubelet[2773]: I1030 13:27:22.562946 2773 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 13:27:22.564092 kubelet[2773]: I1030 13:27:22.562971 2773 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 13:27:22.564092 kubelet[2773]: I1030 13:27:22.562991 2773 state_mem.go:36] "Initialized new in-memory state store" Oct 30 13:27:22.564092 kubelet[2773]: I1030 13:27:22.563201 2773 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 30 13:27:22.564092 kubelet[2773]: I1030 13:27:22.563212 2773 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 30 13:27:22.564092 kubelet[2773]: I1030 13:27:22.563233 2773 policy_none.go:49] "None policy: Start" Oct 30 13:27:22.564092 kubelet[2773]: I1030 13:27:22.563244 2773 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 13:27:22.564092 kubelet[2773]: I1030 13:27:22.563254 2773 state_mem.go:35] "Initializing new in-memory state store" Oct 30 13:27:22.564092 kubelet[2773]: I1030 13:27:22.563387 2773 state_mem.go:75] "Updated machine memory state" Oct 30 13:27:22.568970 kubelet[2773]: I1030 13:27:22.568924 2773 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 30 13:27:22.569220 kubelet[2773]: I1030 13:27:22.569176 2773 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 13:27:22.569220 kubelet[2773]: I1030 13:27:22.569195 2773 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 13:27:22.569459 kubelet[2773]: I1030 13:27:22.569428 2773 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 13:27:22.570305 kubelet[2773]: E1030 13:27:22.570283 2773 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 30 13:27:22.634926 kubelet[2773]: I1030 13:27:22.634787 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:22.635257 kubelet[2773]: I1030 13:27:22.635061 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 13:27:22.635257 kubelet[2773]: I1030 13:27:22.635230 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 13:27:22.642445 kubelet[2773]: E1030 13:27:22.642397 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 30 13:27:22.643103 kubelet[2773]: E1030 13:27:22.643064 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:22.675045 kubelet[2773]: I1030 13:27:22.675007 2773 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:27:22.682553 kubelet[2773]: I1030 13:27:22.682504 2773 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 30 13:27:22.682643 kubelet[2773]: I1030 13:27:22.682619 2773 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 13:27:22.716599 kubelet[2773]: I1030 13:27:22.716549 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:22.716684 kubelet[2773]: I1030 13:27:22.716606 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:22.716684 kubelet[2773]: I1030 13:27:22.716645 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:22.716763 kubelet[2773]: I1030 13:27:22.716682 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:22.716763 kubelet[2773]: I1030 13:27:22.716721 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48db659d74dcefe8108370ae2da460a3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"48db659d74dcefe8108370ae2da460a3\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:27:22.716825 kubelet[2773]: I1030 13:27:22.716770 2773 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48db659d74dcefe8108370ae2da460a3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"48db659d74dcefe8108370ae2da460a3\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:27:22.716825 kubelet[2773]: I1030 13:27:22.716810 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:27:22.716868 kubelet[2773]: I1030 13:27:22.716847 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 30 13:27:22.716894 kubelet[2773]: I1030 13:27:22.716880 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48db659d74dcefe8108370ae2da460a3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"48db659d74dcefe8108370ae2da460a3\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:27:22.806387 sudo[2810]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 30 13:27:22.806798 sudo[2810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 30 13:27:22.943308 kubelet[2773]: E1030 13:27:22.943083 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:22.943308 kubelet[2773]: E1030 13:27:22.943163 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:22.943308 kubelet[2773]: E1030 13:27:22.943260 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:23.130855 sudo[2810]: pam_unix(sudo:session): session closed for user root Oct 30 13:27:23.475946 kubelet[2773]: I1030 13:27:23.475880 2773 apiserver.go:52] "Watching apiserver" Oct 30 13:27:23.515201 kubelet[2773]: I1030 13:27:23.515139 2773 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 13:27:23.549716 kubelet[2773]: E1030 13:27:23.549305 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:23.549716 kubelet[2773]: E1030 13:27:23.549577 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:23.550117 kubelet[2773]: E1030 13:27:23.550073 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:23.554322 kubelet[2773]: I1030 
13:27:23.554261 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5542341290000001 podStartE2EDuration="1.554234129s" podCreationTimestamp="2025-10-30 13:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:27:23.545948647 +0000 UTC m=+1.152493296" watchObservedRunningTime="2025-10-30 13:27:23.554234129 +0000 UTC m=+1.160778778" Oct 30 13:27:23.561149 kubelet[2773]: I1030 13:27:23.561015 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.56098656 podStartE2EDuration="2.56098656s" podCreationTimestamp="2025-10-30 13:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:27:23.554777333 +0000 UTC m=+1.161321982" watchObservedRunningTime="2025-10-30 13:27:23.56098656 +0000 UTC m=+1.167531209" Oct 30 13:27:23.561149 kubelet[2773]: I1030 13:27:23.561083 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.5610797979999997 podStartE2EDuration="3.561079798s" podCreationTimestamp="2025-10-30 13:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:27:23.560909754 +0000 UTC m=+1.167454413" watchObservedRunningTime="2025-10-30 13:27:23.561079798 +0000 UTC m=+1.167624447" Oct 30 13:27:24.550375 kubelet[2773]: E1030 13:27:24.550325 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:24.550802 kubelet[2773]: E1030 13:27:24.550454 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:24.839169 sudo[1827]: pam_unix(sudo:session): session closed for user root Oct 30 13:27:24.841110 sshd[1826]: Connection closed by 10.0.0.1 port 49698 Oct 30 13:27:24.841679 sshd-session[1823]: pam_unix(sshd:session): session closed for user core Oct 30 13:27:24.846326 systemd[1]: sshd@6-10.0.0.124:22-10.0.0.1:49698.service: Deactivated successfully. Oct 30 13:27:24.848848 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 13:27:24.849117 systemd[1]: session-7.scope: Consumed 5.061s CPU time, 251M memory peak. Oct 30 13:27:24.850430 systemd-logind[1593]: Session 7 logged out. Waiting for processes to exit. Oct 30 13:27:24.851900 systemd-logind[1593]: Removed session 7. Oct 30 13:27:26.490085 kubelet[2773]: I1030 13:27:26.490042 2773 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 30 13:27:26.490538 containerd[1614]: time="2025-10-30T13:27:26.490327234Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
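
The kuberuntime_manager entry just above shows the kubelet pushing pod CIDR 192.168.0.0/24 to containerd while containerd still waits for a CNI config to appear (Cilium drops one in later in this log). A quick, hedged Go sketch for sanity-checking what that per-node CIDR provides:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// CIDR taken from the "Updating runtime config through cri with podcidr" entry above.
	prefix := netip.MustParsePrefix("192.168.0.0/24")
	fmt.Println("pod CIDR:     ", prefix)
	fmt.Println("first address:", prefix.Addr())
	// 2^(32-24) = 256 addresses; the IPAM typically reserves the network and broadcast addresses.
	fmt.Println("addresses:    ", 1<<(32-prefix.Bits()))
}
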
Oct 30 13:27:26.490813 kubelet[2773]: I1030 13:27:26.490575 2773 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 13:27:26.549027 kubelet[2773]: W1030 13:27:26.547468 2773 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 30 13:27:26.548071 systemd[1]: Created slice kubepods-besteffort-pod66dab483_cb59_4ef4_9e14_c6f312e5e5e8.slice - libcontainer container kubepods-besteffort-pod66dab483_cb59_4ef4_9e14_c6f312e5e5e8.slice. Oct 30 13:27:26.549888 kubelet[2773]: E1030 13:27:26.549814 2773 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Oct 30 13:27:26.549888 kubelet[2773]: W1030 13:27:26.548919 2773 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 30 13:27:26.549888 kubelet[2773]: E1030 13:27:26.549859 2773 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Oct 30 13:27:26.549888 kubelet[2773]: W1030 13:27:26.548966 2773 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 30 13:27:26.550053 kubelet[2773]: E1030 13:27:26.549911 2773 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Oct 30 13:27:26.564358 systemd[1]: Created slice kubepods-burstable-pod9651b8d4_a317_45bc_9edd_c5b34ed9b061.slice - libcontainer container kubepods-burstable-pod9651b8d4_a317_45bc_9edd_c5b34ed9b061.slice. 
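
The reflector warnings in this span ("no relationship found between node 'localhost' and this object") look like the node authorizer refusing secret and configmap reads until the just-created cilium pod is actually bound to this node; they stop once the sandbox is set up a couple of seconds later. The accompanying systemd entries also show the cgroup layout the kubelet asks for with the systemd driver: one slice per pod, named from the QoS class and the pod UID with dashes mapped to underscores. A small Go illustration of that visible naming pattern, not the kubelet's actual cgroup manager:

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the pattern visible in the "Created slice" entries above:
// kubepods-<qos>-pod<uid with dashes replaced by underscores>.slice
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "66dab483-cb59-4ef4-9e14-c6f312e5e5e8")) // kube-proxy pod
	fmt.Println(sliceName("burstable", "9651b8d4-a317-45bc-9edd-c5b34ed9b061"))  // cilium pod
}
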
Oct 30 13:27:26.636448 kubelet[2773]: I1030 13:27:26.636402 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-hostproc\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636448 kubelet[2773]: I1030 13:27:26.636441 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9651b8d4-a317-45bc-9edd-c5b34ed9b061-clustermesh-secrets\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636448 kubelet[2773]: I1030 13:27:26.636470 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cilium-config-path\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636646 kubelet[2773]: I1030 13:27:26.636487 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm8qk\" (UniqueName: \"kubernetes.io/projected/66dab483-cb59-4ef4-9e14-c6f312e5e5e8-kube-api-access-lm8qk\") pod \"kube-proxy-jcmsp\" (UID: \"66dab483-cb59-4ef4-9e14-c6f312e5e5e8\") " pod="kube-system/kube-proxy-jcmsp" Oct 30 13:27:26.636646 kubelet[2773]: I1030 13:27:26.636507 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cilium-cgroup\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636646 kubelet[2773]: I1030 13:27:26.636557 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-etc-cni-netd\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636646 kubelet[2773]: I1030 13:27:26.636614 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cni-path\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636736 kubelet[2773]: I1030 13:27:26.636651 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66dab483-cb59-4ef4-9e14-c6f312e5e5e8-xtables-lock\") pod \"kube-proxy-jcmsp\" (UID: \"66dab483-cb59-4ef4-9e14-c6f312e5e5e8\") " pod="kube-system/kube-proxy-jcmsp" Oct 30 13:27:26.636736 kubelet[2773]: I1030 13:27:26.636667 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9651b8d4-a317-45bc-9edd-c5b34ed9b061-hubble-tls\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636736 kubelet[2773]: I1030 13:27:26.636682 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-host-proc-sys-net\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636736 kubelet[2773]: I1030 13:27:26.636697 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfdpv\" (UniqueName: \"kubernetes.io/projected/9651b8d4-a317-45bc-9edd-c5b34ed9b061-kube-api-access-jfdpv\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636736 kubelet[2773]: I1030 13:27:26.636718 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-host-proc-sys-kernel\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636854 kubelet[2773]: I1030 13:27:26.636760 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-bpf-maps\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636854 kubelet[2773]: I1030 13:27:26.636784 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-xtables-lock\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636854 kubelet[2773]: I1030 13:27:26.636802 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66dab483-cb59-4ef4-9e14-c6f312e5e5e8-lib-modules\") pod \"kube-proxy-jcmsp\" (UID: \"66dab483-cb59-4ef4-9e14-c6f312e5e5e8\") " pod="kube-system/kube-proxy-jcmsp" Oct 30 13:27:26.636929 kubelet[2773]: I1030 13:27:26.636847 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cilium-run\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636929 kubelet[2773]: I1030 13:27:26.636879 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-lib-modules\") pod \"cilium-7k944\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " pod="kube-system/cilium-7k944" Oct 30 13:27:26.636929 kubelet[2773]: I1030 13:27:26.636908 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66dab483-cb59-4ef4-9e14-c6f312e5e5e8-kube-proxy\") pod \"kube-proxy-jcmsp\" (UID: \"66dab483-cb59-4ef4-9e14-c6f312e5e5e8\") " pod="kube-system/kube-proxy-jcmsp" Oct 30 13:27:26.746109 kubelet[2773]: E1030 13:27:26.745682 2773 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 30 13:27:26.746109 kubelet[2773]: E1030 13:27:26.745725 2773 projected.go:194] Error preparing data for projected volume kube-api-access-lm8qk for pod 
kube-system/kube-proxy-jcmsp: configmap "kube-root-ca.crt" not found Oct 30 13:27:26.746109 kubelet[2773]: E1030 13:27:26.745799 2773 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/66dab483-cb59-4ef4-9e14-c6f312e5e5e8-kube-api-access-lm8qk podName:66dab483-cb59-4ef4-9e14-c6f312e5e5e8 nodeName:}" failed. No retries permitted until 2025-10-30 13:27:27.245777277 +0000 UTC m=+4.852321926 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lm8qk" (UniqueName: "kubernetes.io/projected/66dab483-cb59-4ef4-9e14-c6f312e5e5e8-kube-api-access-lm8qk") pod "kube-proxy-jcmsp" (UID: "66dab483-cb59-4ef4-9e14-c6f312e5e5e8") : configmap "kube-root-ca.crt" not found Oct 30 13:27:26.746109 kubelet[2773]: E1030 13:27:26.746054 2773 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 30 13:27:26.746109 kubelet[2773]: E1030 13:27:26.746085 2773 projected.go:194] Error preparing data for projected volume kube-api-access-jfdpv for pod kube-system/cilium-7k944: configmap "kube-root-ca.crt" not found Oct 30 13:27:26.746377 kubelet[2773]: E1030 13:27:26.746147 2773 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9651b8d4-a317-45bc-9edd-c5b34ed9b061-kube-api-access-jfdpv podName:9651b8d4-a317-45bc-9edd-c5b34ed9b061 nodeName:}" failed. No retries permitted until 2025-10-30 13:27:27.246127453 +0000 UTC m=+4.852672102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jfdpv" (UniqueName: "kubernetes.io/projected/9651b8d4-a317-45bc-9edd-c5b34ed9b061-kube-api-access-jfdpv") pod "cilium-7k944" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061") : configmap "kube-root-ca.crt" not found Oct 30 13:27:27.461257 kubelet[2773]: E1030 13:27:27.461209 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:27.462035 containerd[1614]: time="2025-10-30T13:27:27.461871907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jcmsp,Uid:66dab483-cb59-4ef4-9e14-c6f312e5e5e8,Namespace:kube-system,Attempt:0,}" Oct 30 13:27:27.683313 kubelet[2773]: E1030 13:27:27.683254 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:27.739822 kubelet[2773]: E1030 13:27:27.739694 2773 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Oct 30 13:27:27.739822 kubelet[2773]: E1030 13:27:27.739823 2773 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9651b8d4-a317-45bc-9edd-c5b34ed9b061-clustermesh-secrets podName:9651b8d4-a317-45bc-9edd-c5b34ed9b061 nodeName:}" failed. No retries permitted until 2025-10-30 13:27:28.239800642 +0000 UTC m=+5.846345291 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/9651b8d4-a317-45bc-9edd-c5b34ed9b061-clustermesh-secrets") pod "cilium-7k944" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061") : failed to sync secret cache: timed out waiting for the condition Oct 30 13:27:27.964039 systemd[1]: Created slice kubepods-besteffort-pod1a75c713_d702_44fd_9684_11c60d913520.slice - libcontainer container kubepods-besteffort-pod1a75c713_d702_44fd_9684_11c60d913520.slice. Oct 30 13:27:27.982673 containerd[1614]: time="2025-10-30T13:27:27.982596926Z" level=info msg="connecting to shim a2006801b1939368c2e90479b44655304c484ed4f6cfd0cbc0509e8447a256d0" address="unix:///run/containerd/s/147eb479ccbb60d9525821c624a83862e712d3d1e3c7b87f970d8e09c2c0fa11" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:27:28.047720 kubelet[2773]: I1030 13:27:28.047573 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fq9w\" (UniqueName: \"kubernetes.io/projected/1a75c713-d702-44fd-9684-11c60d913520-kube-api-access-5fq9w\") pod \"cilium-operator-6c4d7847fc-knfrh\" (UID: \"1a75c713-d702-44fd-9684-11c60d913520\") " pod="kube-system/cilium-operator-6c4d7847fc-knfrh" Oct 30 13:27:28.047720 kubelet[2773]: I1030 13:27:28.047644 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a75c713-d702-44fd-9684-11c60d913520-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-knfrh\" (UID: \"1a75c713-d702-44fd-9684-11c60d913520\") " pod="kube-system/cilium-operator-6c4d7847fc-knfrh" Oct 30 13:27:28.049139 systemd[1]: Started cri-containerd-a2006801b1939368c2e90479b44655304c484ed4f6cfd0cbc0509e8447a256d0.scope - libcontainer container a2006801b1939368c2e90479b44655304c484ed4f6cfd0cbc0509e8447a256d0. 
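
Each "connecting to shim ... protocol=ttrpc" entry above names a per-sandbox unix socket under /run/containerd/s/. The real client speaks ttrpc over that socket; the sketch below only checks that such a socket is dialable, which can still be a handy liveness probe when debugging shim problems (the path is copied from the log and would need adjusting on another host).

package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	// Shim address format from the "connecting to shim" entry above.
	addr := "unix:///run/containerd/s/147eb479ccbb60d9525821c624a83862e712d3d1e3c7b87f970d8e09c2c0fa11"
	path := strings.TrimPrefix(addr, "unix://")
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", path)
}
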
Oct 30 13:27:28.077653 containerd[1614]: time="2025-10-30T13:27:28.077591344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jcmsp,Uid:66dab483-cb59-4ef4-9e14-c6f312e5e5e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2006801b1939368c2e90479b44655304c484ed4f6cfd0cbc0509e8447a256d0\"" Oct 30 13:27:28.078568 kubelet[2773]: E1030 13:27:28.078543 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:28.080454 containerd[1614]: time="2025-10-30T13:27:28.080419849Z" level=info msg="CreateContainer within sandbox \"a2006801b1939368c2e90479b44655304c484ed4f6cfd0cbc0509e8447a256d0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 13:27:28.091588 containerd[1614]: time="2025-10-30T13:27:28.091525191Z" level=info msg="Container a403a9f8491982c8ef2a2334dd867ef1820ccb00a4a5a61a7fa5f9c44917445c: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:27:28.100653 containerd[1614]: time="2025-10-30T13:27:28.100594351Z" level=info msg="CreateContainer within sandbox \"a2006801b1939368c2e90479b44655304c484ed4f6cfd0cbc0509e8447a256d0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a403a9f8491982c8ef2a2334dd867ef1820ccb00a4a5a61a7fa5f9c44917445c\"" Oct 30 13:27:28.101180 containerd[1614]: time="2025-10-30T13:27:28.101137991Z" level=info msg="StartContainer for \"a403a9f8491982c8ef2a2334dd867ef1820ccb00a4a5a61a7fa5f9c44917445c\"" Oct 30 13:27:28.102507 containerd[1614]: time="2025-10-30T13:27:28.102478243Z" level=info msg="connecting to shim a403a9f8491982c8ef2a2334dd867ef1820ccb00a4a5a61a7fa5f9c44917445c" address="unix:///run/containerd/s/147eb479ccbb60d9525821c624a83862e712d3d1e3c7b87f970d8e09c2c0fa11" protocol=ttrpc version=3 Oct 30 13:27:28.129177 systemd[1]: Started cri-containerd-a403a9f8491982c8ef2a2334dd867ef1820ccb00a4a5a61a7fa5f9c44917445c.scope - libcontainer container a403a9f8491982c8ef2a2334dd867ef1820ccb00a4a5a61a7fa5f9c44917445c. 
Oct 30 13:27:28.182248 containerd[1614]: time="2025-10-30T13:27:28.182206472Z" level=info msg="StartContainer for \"a403a9f8491982c8ef2a2334dd867ef1820ccb00a4a5a61a7fa5f9c44917445c\" returns successfully" Oct 30 13:27:28.273940 kubelet[2773]: E1030 13:27:28.273867 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:28.274527 containerd[1614]: time="2025-10-30T13:27:28.274472790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-knfrh,Uid:1a75c713-d702-44fd-9684-11c60d913520,Namespace:kube-system,Attempt:0,}" Oct 30 13:27:28.367412 kubelet[2773]: E1030 13:27:28.367283 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:28.367957 containerd[1614]: time="2025-10-30T13:27:28.367905660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7k944,Uid:9651b8d4-a317-45bc-9edd-c5b34ed9b061,Namespace:kube-system,Attempt:0,}" Oct 30 13:27:28.558521 kubelet[2773]: E1030 13:27:28.558483 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:28.558859 kubelet[2773]: E1030 13:27:28.558826 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:28.585809 containerd[1614]: time="2025-10-30T13:27:28.585722691Z" level=info msg="connecting to shim d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb" address="unix:///run/containerd/s/254d471d7a49f24f5d4f6592f0bf82a6d2de9ec8c78dd18ab0350f271ea022ad" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:27:28.654518 kubelet[2773]: I1030 13:27:28.650586 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jcmsp" podStartSLOduration=2.650562102 podStartE2EDuration="2.650562102s" podCreationTimestamp="2025-10-30 13:27:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:27:28.577240905 +0000 UTC m=+6.183785554" watchObservedRunningTime="2025-10-30 13:27:28.650562102 +0000 UTC m=+6.257106751" Oct 30 13:27:28.661332 systemd[1]: Started cri-containerd-d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb.scope - libcontainer container d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb. Oct 30 13:27:28.664351 containerd[1614]: time="2025-10-30T13:27:28.664239241Z" level=info msg="connecting to shim 95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4" address="unix:///run/containerd/s/2acdd8f33c429d0ef600722d34a9fafeb18dc6d0cd76212e7dbeed6b898ca9de" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:27:28.696168 systemd[1]: Started cri-containerd-95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4.scope - libcontainer container 95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4. 
Oct 30 13:27:28.727021 containerd[1614]: time="2025-10-30T13:27:28.726954273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7k944,Uid:9651b8d4-a317-45bc-9edd-c5b34ed9b061,Namespace:kube-system,Attempt:0,} returns sandbox id \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\"" Oct 30 13:27:28.728490 kubelet[2773]: E1030 13:27:28.728432 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:28.732249 containerd[1614]: time="2025-10-30T13:27:28.731431295Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 30 13:27:28.742790 containerd[1614]: time="2025-10-30T13:27:28.742730464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-knfrh,Uid:1a75c713-d702-44fd-9684-11c60d913520,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb\"" Oct 30 13:27:28.743582 kubelet[2773]: E1030 13:27:28.743553 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:29.316152 kubelet[2773]: E1030 13:27:29.316093 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:29.562082 kubelet[2773]: E1030 13:27:29.561864 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:29.562417 kubelet[2773]: E1030 13:27:29.562391 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:30.564463 kubelet[2773]: E1030 13:27:30.564412 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:31.114603 kubelet[2773]: E1030 13:27:31.114560 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:31.337800 update_engine[1597]: I20251030 13:27:31.337697 1597 update_attempter.cc:509] Updating boot flags... Oct 30 13:27:31.567068 kubelet[2773]: E1030 13:27:31.567038 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:41.957104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294207492.mount: Deactivated successfully. 
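
The recurring dns.go:153 "Nameserver limits exceeded" errors mean the host resolv.conf lists more nameservers than the kubelet will propagate to pods; only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied and the rest are dropped. A minimal Go check in the same spirit, assuming the conventional three-server cap the warning refers to:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	fmt.Println("nameservers:", servers)
	if len(servers) > 3 {
		fmt.Printf("%d nameservers configured; only the first 3 would be applied to pods\n", len(servers))
	}
}
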
Oct 30 13:27:44.447982 containerd[1614]: time="2025-10-30T13:27:44.447916239Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:44.448933 containerd[1614]: time="2025-10-30T13:27:44.448890112Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Oct 30 13:27:44.450295 containerd[1614]: time="2025-10-30T13:27:44.450228252Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:44.451873 containerd[1614]: time="2025-10-30T13:27:44.451828815Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.719619525s" Oct 30 13:27:44.451873 containerd[1614]: time="2025-10-30T13:27:44.451867708Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 30 13:27:44.456829 containerd[1614]: time="2025-10-30T13:27:44.456806205Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 30 13:27:44.462544 containerd[1614]: time="2025-10-30T13:27:44.462503422Z" level=info msg="CreateContainer within sandbox \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 30 13:27:44.471191 containerd[1614]: time="2025-10-30T13:27:44.471145282Z" level=info msg="Container 233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:27:44.475610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164971339.mount: Deactivated successfully. Oct 30 13:27:44.479601 containerd[1614]: time="2025-10-30T13:27:44.479535758Z" level=info msg="CreateContainer within sandbox \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\"" Oct 30 13:27:44.480162 containerd[1614]: time="2025-10-30T13:27:44.480113024Z" level=info msg="StartContainer for \"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\"" Oct 30 13:27:44.481443 containerd[1614]: time="2025-10-30T13:27:44.481413633Z" level=info msg="connecting to shim 233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427" address="unix:///run/containerd/s/2acdd8f33c429d0ef600722d34a9fafeb18dc6d0cd76212e7dbeed6b898ca9de" protocol=ttrpc version=3 Oct 30 13:27:44.505231 systemd[1]: Started cri-containerd-233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427.scope - libcontainer container 233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427. 
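
From the pull entries above, the cilium image transfer read 166730503 bytes and the pull completed in 15.719619525s, so the average rate works out to roughly 10 MiB/s. A one-liner to reproduce the arithmetic:

package main

import "fmt"

func main() {
	// Figures from the "stop pulling image" and "Pulled image ... in 15.719619525s" entries above.
	const bytesRead = 166730503.0
	const seconds = 15.719619525
	fmt.Printf("average pull rate: %.1f MiB/s\n", bytesRead/seconds/(1<<20)) // about 10.1 MiB/s
}
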
Oct 30 13:27:44.543906 containerd[1614]: time="2025-10-30T13:27:44.543849419Z" level=info msg="StartContainer for \"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\" returns successfully" Oct 30 13:27:44.557303 systemd[1]: cri-containerd-233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427.scope: Deactivated successfully. Oct 30 13:27:44.560427 containerd[1614]: time="2025-10-30T13:27:44.560386033Z" level=info msg="received exit event container_id:\"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\" id:\"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\" pid:3216 exited_at:{seconds:1761830864 nanos:559962074}" Oct 30 13:27:44.560565 containerd[1614]: time="2025-10-30T13:27:44.560470692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\" id:\"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\" pid:3216 exited_at:{seconds:1761830864 nanos:559962074}" Oct 30 13:27:44.587570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427-rootfs.mount: Deactivated successfully. Oct 30 13:27:44.591207 kubelet[2773]: E1030 13:27:44.590592 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:45.593319 kubelet[2773]: E1030 13:27:45.593265 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:45.595689 containerd[1614]: time="2025-10-30T13:27:45.595636085Z" level=info msg="CreateContainer within sandbox \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 30 13:27:45.607253 containerd[1614]: time="2025-10-30T13:27:45.607203016Z" level=info msg="Container c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:27:45.609734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2025390758.mount: Deactivated successfully. Oct 30 13:27:45.614252 containerd[1614]: time="2025-10-30T13:27:45.614209674Z" level=info msg="CreateContainer within sandbox \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\"" Oct 30 13:27:45.614814 containerd[1614]: time="2025-10-30T13:27:45.614770800Z" level=info msg="StartContainer for \"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\"" Oct 30 13:27:45.615681 containerd[1614]: time="2025-10-30T13:27:45.615647671Z" level=info msg="connecting to shim c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd" address="unix:///run/containerd/s/2acdd8f33c429d0ef600722d34a9fafeb18dc6d0cd76212e7dbeed6b898ca9de" protocol=ttrpc version=3 Oct 30 13:27:45.638186 systemd[1]: Started cri-containerd-c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd.scope - libcontainer container c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd. 
Oct 30 13:27:45.669832 containerd[1614]: time="2025-10-30T13:27:45.669790819Z" level=info msg="StartContainer for \"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\" returns successfully" Oct 30 13:27:45.687135 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 30 13:27:45.687657 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 30 13:27:45.687727 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 30 13:27:45.689838 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 13:27:45.692039 systemd[1]: cri-containerd-c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd.scope: Deactivated successfully. Oct 30 13:27:45.692529 systemd[1]: cri-containerd-c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd.scope: Consumed 28ms CPU time, 7.8M memory peak, 8K read from disk, 2.2M written to disk. Oct 30 13:27:45.692898 containerd[1614]: time="2025-10-30T13:27:45.692690731Z" level=info msg="received exit event container_id:\"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\" id:\"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\" pid:3261 exited_at:{seconds:1761830865 nanos:692330393}" Oct 30 13:27:45.694279 containerd[1614]: time="2025-10-30T13:27:45.694212586Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\" id:\"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\" pid:3261 exited_at:{seconds:1761830865 nanos:692330393}" Oct 30 13:27:45.718516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd-rootfs.mount: Deactivated successfully. Oct 30 13:27:45.729857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
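
The exited_at fields in the exit events above carry raw Unix seconds (1761830864 and 1761830865). Converting them back to wall-clock time shows they line up with the journal timestamps of the same entries, a quick way to confirm which init container an exit event belongs to:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at values from the two exit events above.
	for _, sec := range []int64{1761830864, 1761830865} {
		fmt.Println(sec, "=", time.Unix(sec, 0).UTC().Format(time.RFC3339))
	}
	// Prints 2025-10-30T13:27:44Z and 2025-10-30T13:27:45Z, matching the
	// mount-cgroup and apply-sysctl-overwrites entries in this log.
}
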
Oct 30 13:27:46.597568 kubelet[2773]: E1030 13:27:46.597500 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:46.599519 containerd[1614]: time="2025-10-30T13:27:46.599389154Z" level=info msg="CreateContainer within sandbox \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 30 13:27:46.620714 containerd[1614]: time="2025-10-30T13:27:46.619146793Z" level=info msg="Container 3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:27:46.629570 containerd[1614]: time="2025-10-30T13:27:46.629505215Z" level=info msg="CreateContainer within sandbox \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\"" Oct 30 13:27:46.631038 containerd[1614]: time="2025-10-30T13:27:46.630157192Z" level=info msg="StartContainer for \"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\"" Oct 30 13:27:46.631504 containerd[1614]: time="2025-10-30T13:27:46.631479119Z" level=info msg="connecting to shim 3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6" address="unix:///run/containerd/s/2acdd8f33c429d0ef600722d34a9fafeb18dc6d0cd76212e7dbeed6b898ca9de" protocol=ttrpc version=3 Oct 30 13:27:46.656187 systemd[1]: Started cri-containerd-3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6.scope - libcontainer container 3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6. Oct 30 13:27:46.705707 containerd[1614]: time="2025-10-30T13:27:46.705646493Z" level=info msg="StartContainer for \"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\" returns successfully" Oct 30 13:27:46.708956 systemd[1]: cri-containerd-3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6.scope: Deactivated successfully. Oct 30 13:27:46.711698 containerd[1614]: time="2025-10-30T13:27:46.711661091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\" id:\"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\" pid:3309 exited_at:{seconds:1761830866 nanos:711319399}" Oct 30 13:27:46.711882 containerd[1614]: time="2025-10-30T13:27:46.711807036Z" level=info msg="received exit event container_id:\"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\" id:\"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\" pid:3309 exited_at:{seconds:1761830866 nanos:711319399}" Oct 30 13:27:46.739153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6-rootfs.mount: Deactivated successfully. 
Oct 30 13:27:47.423975 containerd[1614]: time="2025-10-30T13:27:47.423901277Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:47.424721 containerd[1614]: time="2025-10-30T13:27:47.424660335Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Oct 30 13:27:47.425897 containerd[1614]: time="2025-10-30T13:27:47.425856264Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:27:47.427337 containerd[1614]: time="2025-10-30T13:27:47.427299911Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.970372728s" Oct 30 13:27:47.427337 containerd[1614]: time="2025-10-30T13:27:47.427334696Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 30 13:27:47.429468 containerd[1614]: time="2025-10-30T13:27:47.429414158Z" level=info msg="CreateContainer within sandbox \"d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 30 13:27:47.436944 containerd[1614]: time="2025-10-30T13:27:47.436869524Z" level=info msg="Container 9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:27:47.443873 containerd[1614]: time="2025-10-30T13:27:47.443804922Z" level=info msg="CreateContainer within sandbox \"d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\"" Oct 30 13:27:47.444560 containerd[1614]: time="2025-10-30T13:27:47.444501062Z" level=info msg="StartContainer for \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\"" Oct 30 13:27:47.445699 containerd[1614]: time="2025-10-30T13:27:47.445669229Z" level=info msg="connecting to shim 9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28" address="unix:///run/containerd/s/254d471d7a49f24f5d4f6592f0bf82a6d2de9ec8c78dd18ab0350f271ea022ad" protocol=ttrpc version=3 Oct 30 13:27:47.474202 systemd[1]: Started cri-containerd-9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28.scope - libcontainer container 9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28. 
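
The PullImage entries above use pinned references of the form repository:tag@sha256:digest; after the pull, containerd records an empty repo tag and keeps only the digest, which is why the "Pulled image" line shows repo tag "". A naive split of such a reference, for illustration only (a real parser, such as a distribution reference library, also handles registries with ports and other corner cases):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Reference format as it appears in the PullImage entries above.
	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
	name, digest, _ := strings.Cut(ref, "@")
	repo, tag, _ := strings.Cut(name, ":")
	fmt.Println("repository:", repo)
	fmt.Println("tag:       ", tag)
	fmt.Println("digest:    ", digest)
}
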
Oct 30 13:27:47.510211 containerd[1614]: time="2025-10-30T13:27:47.510155769Z" level=info msg="StartContainer for \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\" returns successfully" Oct 30 13:27:47.604037 kubelet[2773]: E1030 13:27:47.603960 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:47.610802 kubelet[2773]: E1030 13:27:47.609462 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:47.614024 containerd[1614]: time="2025-10-30T13:27:47.612133062Z" level=info msg="CreateContainer within sandbox \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 30 13:27:47.626033 containerd[1614]: time="2025-10-30T13:27:47.623638126Z" level=info msg="Container e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:27:47.635424 containerd[1614]: time="2025-10-30T13:27:47.635359979Z" level=info msg="CreateContainer within sandbox \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\"" Oct 30 13:27:47.635922 containerd[1614]: time="2025-10-30T13:27:47.635890256Z" level=info msg="StartContainer for \"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\"" Oct 30 13:27:47.636800 containerd[1614]: time="2025-10-30T13:27:47.636762758Z" level=info msg="connecting to shim e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee" address="unix:///run/containerd/s/2acdd8f33c429d0ef600722d34a9fafeb18dc6d0cd76212e7dbeed6b898ca9de" protocol=ttrpc version=3 Oct 30 13:27:47.677204 systemd[1]: Started cri-containerd-e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee.scope - libcontainer container e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee. Oct 30 13:27:47.743917 systemd[1]: cri-containerd-e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee.scope: Deactivated successfully. Oct 30 13:27:47.748209 containerd[1614]: time="2025-10-30T13:27:47.747823830Z" level=info msg="received exit event container_id:\"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\" id:\"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\" pid:3399 exited_at:{seconds:1761830867 nanos:746270036}" Oct 30 13:27:47.749510 containerd[1614]: time="2025-10-30T13:27:47.748224434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\" id:\"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\" pid:3399 exited_at:{seconds:1761830867 nanos:746270036}" Oct 30 13:27:47.750784 containerd[1614]: time="2025-10-30T13:27:47.750759783Z" level=info msg="StartContainer for \"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\" returns successfully" Oct 30 13:27:47.778078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee-rootfs.mount: Deactivated successfully. 
Oct 30 13:27:48.615105 kubelet[2773]: E1030 13:27:48.615066 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:48.615761 kubelet[2773]: E1030 13:27:48.615518 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:48.617494 containerd[1614]: time="2025-10-30T13:27:48.617447850Z" level=info msg="CreateContainer within sandbox \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 30 13:27:48.633911 kubelet[2773]: I1030 13:27:48.633834 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-knfrh" podStartSLOduration=2.949963392 podStartE2EDuration="21.633788926s" podCreationTimestamp="2025-10-30 13:27:27 +0000 UTC" firstStartedPulling="2025-10-30 13:27:28.744238685 +0000 UTC m=+6.350783334" lastFinishedPulling="2025-10-30 13:27:47.428064219 +0000 UTC m=+25.034608868" observedRunningTime="2025-10-30 13:27:47.722074417 +0000 UTC m=+25.328619056" watchObservedRunningTime="2025-10-30 13:27:48.633788926 +0000 UTC m=+26.240333575" Oct 30 13:27:48.635780 containerd[1614]: time="2025-10-30T13:27:48.635708115Z" level=info msg="Container 3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:27:48.644107 containerd[1614]: time="2025-10-30T13:27:48.644053343Z" level=info msg="CreateContainer within sandbox \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\"" Oct 30 13:27:48.644616 containerd[1614]: time="2025-10-30T13:27:48.644555045Z" level=info msg="StartContainer for \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\"" Oct 30 13:27:48.645866 containerd[1614]: time="2025-10-30T13:27:48.645835674Z" level=info msg="connecting to shim 3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8" address="unix:///run/containerd/s/2acdd8f33c429d0ef600722d34a9fafeb18dc6d0cd76212e7dbeed6b898ca9de" protocol=ttrpc version=3 Oct 30 13:27:48.671141 systemd[1]: Started cri-containerd-3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8.scope - libcontainer container 3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8. Oct 30 13:27:48.713655 containerd[1614]: time="2025-10-30T13:27:48.713591388Z" level=info msg="StartContainer for \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" returns successfully" Oct 30 13:27:48.822564 containerd[1614]: time="2025-10-30T13:27:48.822520505Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" id:\"4daf4eeb6899a08413cc4a254b942a88007bc92286c9f2cff1fff62595173d8b\" pid:3472 exited_at:{seconds:1761830868 nanos:822189914}" Oct 30 13:27:48.898694 kubelet[2773]: I1030 13:27:48.898558 2773 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 30 13:27:48.938663 systemd[1]: Created slice kubepods-burstable-podf9e078f6_9853_47b0_aec1_d0e3624d966e.slice - libcontainer container kubepods-burstable-podf9e078f6_9853_47b0_aec1_d0e3624d966e.slice. 
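
The pod_startup_latency_tracker entry for cilium-operator above reports a podStartE2EDuration of 21.633788926s but a podStartSLOduration of only about 2.95s; the numbers reconcile if the SLO figure excludes the image-pull window bounded by firstStartedPulling and lastFinishedPulling. Re-deriving them from the logged timestamps:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // matches Go's time.Time String() format used in the log

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-10-30 13:27:27 +0000 UTC")
	running := mustParse("2025-10-30 13:27:48.633788926 +0000 UTC")
	pullStart := mustParse("2025-10-30 13:27:28.744238685 +0000 UTC")
	pullEnd := mustParse("2025-10-30 13:27:47.428064219 +0000 UTC")

	e2e := running.Sub(created)    // 21.633788926s, the logged podStartE2EDuration
	pull := pullEnd.Sub(pullStart) // 18.683825534s spent pulling the operator image
	fmt.Println("E2E:", e2e, "pull:", pull, "excluding pull:", e2e-pull) // 2.949963392s, the logged podStartSLOduration
}
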
Oct 30 13:27:48.948313 systemd[1]: Created slice kubepods-burstable-pod5ad43097_f514_401e_a002_c22bb6a670d4.slice - libcontainer container kubepods-burstable-pod5ad43097_f514_401e_a002_c22bb6a670d4.slice. Oct 30 13:27:48.985281 kubelet[2773]: I1030 13:27:48.985204 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9e078f6-9853-47b0-aec1-d0e3624d966e-config-volume\") pod \"coredns-668d6bf9bc-gcjk4\" (UID: \"f9e078f6-9853-47b0-aec1-d0e3624d966e\") " pod="kube-system/coredns-668d6bf9bc-gcjk4" Oct 30 13:27:48.985281 kubelet[2773]: I1030 13:27:48.985270 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2md6l\" (UniqueName: \"kubernetes.io/projected/5ad43097-f514-401e-a002-c22bb6a670d4-kube-api-access-2md6l\") pod \"coredns-668d6bf9bc-49c6x\" (UID: \"5ad43097-f514-401e-a002-c22bb6a670d4\") " pod="kube-system/coredns-668d6bf9bc-49c6x" Oct 30 13:27:48.985281 kubelet[2773]: I1030 13:27:48.985297 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ad43097-f514-401e-a002-c22bb6a670d4-config-volume\") pod \"coredns-668d6bf9bc-49c6x\" (UID: \"5ad43097-f514-401e-a002-c22bb6a670d4\") " pod="kube-system/coredns-668d6bf9bc-49c6x" Oct 30 13:27:48.985575 kubelet[2773]: I1030 13:27:48.985321 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxj67\" (UniqueName: \"kubernetes.io/projected/f9e078f6-9853-47b0-aec1-d0e3624d966e-kube-api-access-bxj67\") pod \"coredns-668d6bf9bc-gcjk4\" (UID: \"f9e078f6-9853-47b0-aec1-d0e3624d966e\") " pod="kube-system/coredns-668d6bf9bc-gcjk4" Oct 30 13:27:49.246871 kubelet[2773]: E1030 13:27:49.246794 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:49.247914 containerd[1614]: time="2025-10-30T13:27:49.247828939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gcjk4,Uid:f9e078f6-9853-47b0-aec1-d0e3624d966e,Namespace:kube-system,Attempt:0,}" Oct 30 13:27:49.257927 kubelet[2773]: E1030 13:27:49.257848 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:49.260173 containerd[1614]: time="2025-10-30T13:27:49.260123529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-49c6x,Uid:5ad43097-f514-401e-a002-c22bb6a670d4,Namespace:kube-system,Attempt:0,}" Oct 30 13:27:49.621904 kubelet[2773]: E1030 13:27:49.621741 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:49.653035 kubelet[2773]: I1030 13:27:49.651672 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7k944" podStartSLOduration=7.925369676 podStartE2EDuration="23.651649166s" podCreationTimestamp="2025-10-30 13:27:26 +0000 UTC" firstStartedPulling="2025-10-30 13:27:28.730266726 +0000 UTC m=+6.336811365" lastFinishedPulling="2025-10-30 13:27:44.456546216 +0000 UTC m=+22.063090855" observedRunningTime="2025-10-30 13:27:49.649517477 +0000 UTC m=+27.256062126" 
watchObservedRunningTime="2025-10-30 13:27:49.651649166 +0000 UTC m=+27.258193815" Oct 30 13:27:50.623822 kubelet[2773]: E1030 13:27:50.623782 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:50.929805 systemd-networkd[1525]: cilium_host: Link UP Oct 30 13:27:50.930583 systemd-networkd[1525]: cilium_net: Link UP Oct 30 13:27:50.931353 systemd-networkd[1525]: cilium_net: Gained carrier Oct 30 13:27:50.932197 systemd-networkd[1525]: cilium_host: Gained carrier Oct 30 13:27:51.040641 systemd-networkd[1525]: cilium_vxlan: Link UP Oct 30 13:27:51.040654 systemd-networkd[1525]: cilium_vxlan: Gained carrier Oct 30 13:27:51.259132 kernel: NET: Registered PF_ALG protocol family Oct 30 13:27:51.373325 systemd-networkd[1525]: cilium_host: Gained IPv6LL Oct 30 13:27:51.605270 systemd-networkd[1525]: cilium_net: Gained IPv6LL Oct 30 13:27:51.626600 kubelet[2773]: E1030 13:27:51.626401 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:52.006701 systemd-networkd[1525]: lxc_health: Link UP Oct 30 13:27:52.008993 systemd-networkd[1525]: lxc_health: Gained carrier Oct 30 13:27:52.305822 systemd-networkd[1525]: lxc809be9500fe6: Link UP Oct 30 13:27:52.318634 kernel: eth0: renamed from tmpb94ba Oct 30 13:27:52.318744 kernel: eth0: renamed from tmp3d733 Oct 30 13:27:52.323761 systemd-networkd[1525]: lxc65b2f153e79a: Link UP Oct 30 13:27:52.325117 systemd-networkd[1525]: cilium_vxlan: Gained IPv6LL Oct 30 13:27:52.325803 systemd-networkd[1525]: lxc809be9500fe6: Gained carrier Oct 30 13:27:52.327868 systemd-networkd[1525]: lxc65b2f153e79a: Gained carrier Oct 30 13:27:52.631735 kubelet[2773]: E1030 13:27:52.631581 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:53.631816 kubelet[2773]: E1030 13:27:53.631765 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:53.653277 systemd-networkd[1525]: lxc809be9500fe6: Gained IPv6LL Oct 30 13:27:53.654413 systemd-networkd[1525]: lxc65b2f153e79a: Gained IPv6LL Oct 30 13:27:53.654783 systemd-networkd[1525]: lxc_health: Gained IPv6LL Oct 30 13:27:55.841599 containerd[1614]: time="2025-10-30T13:27:55.841529593Z" level=info msg="connecting to shim 3d73323d04eb4a041b694d740fc8f8eb78f24dd550ee1a07573935622b6684a3" address="unix:///run/containerd/s/82e3ec2b2641574470e80fcbcd0211e7b850f97862c8bc11465005cd98352a8c" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:27:55.844357 containerd[1614]: time="2025-10-30T13:27:55.844304888Z" level=info msg="connecting to shim b94ba44504e54b41bfcc90078c31e8070e1bc90a8a8a4bbeff5f76c9755e4809" address="unix:///run/containerd/s/ea76aa6e40f271d73e53a62aba188607808497a5796de2d60b0ebf0bd1003a6b" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:27:55.879255 systemd[1]: Started cri-containerd-3d73323d04eb4a041b694d740fc8f8eb78f24dd550ee1a07573935622b6684a3.scope - libcontainer container 3d73323d04eb4a041b694d740fc8f8eb78f24dd550ee1a07573935622b6684a3. 
Oct 30 13:27:55.882798 systemd[1]: Started cri-containerd-b94ba44504e54b41bfcc90078c31e8070e1bc90a8a8a4bbeff5f76c9755e4809.scope - libcontainer container b94ba44504e54b41bfcc90078c31e8070e1bc90a8a8a4bbeff5f76c9755e4809. Oct 30 13:27:55.898345 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:27:55.899440 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:27:55.940864 containerd[1614]: time="2025-10-30T13:27:55.940817583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-49c6x,Uid:5ad43097-f514-401e-a002-c22bb6a670d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b94ba44504e54b41bfcc90078c31e8070e1bc90a8a8a4bbeff5f76c9755e4809\"" Oct 30 13:27:55.941921 containerd[1614]: time="2025-10-30T13:27:55.941878476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gcjk4,Uid:f9e078f6-9853-47b0-aec1-d0e3624d966e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d73323d04eb4a041b694d740fc8f8eb78f24dd550ee1a07573935622b6684a3\"" Oct 30 13:27:55.942677 kubelet[2773]: E1030 13:27:55.942651 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:55.944524 kubelet[2773]: E1030 13:27:55.944496 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:55.949953 containerd[1614]: time="2025-10-30T13:27:55.949908162Z" level=info msg="CreateContainer within sandbox \"b94ba44504e54b41bfcc90078c31e8070e1bc90a8a8a4bbeff5f76c9755e4809\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 13:27:55.950093 containerd[1614]: time="2025-10-30T13:27:55.949928460Z" level=info msg="CreateContainer within sandbox \"3d73323d04eb4a041b694d740fc8f8eb78f24dd550ee1a07573935622b6684a3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 13:27:55.963211 containerd[1614]: time="2025-10-30T13:27:55.963159719Z" level=info msg="Container 9ffc68702a063fcbca0b59472279a58243a961e56ca1230559aba1db26324dca: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:27:55.970660 containerd[1614]: time="2025-10-30T13:27:55.970613502Z" level=info msg="CreateContainer within sandbox \"3d73323d04eb4a041b694d740fc8f8eb78f24dd550ee1a07573935622b6684a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ffc68702a063fcbca0b59472279a58243a961e56ca1230559aba1db26324dca\"" Oct 30 13:27:55.971232 containerd[1614]: time="2025-10-30T13:27:55.971200605Z" level=info msg="StartContainer for \"9ffc68702a063fcbca0b59472279a58243a961e56ca1230559aba1db26324dca\"" Oct 30 13:27:55.972496 containerd[1614]: time="2025-10-30T13:27:55.972462647Z" level=info msg="connecting to shim 9ffc68702a063fcbca0b59472279a58243a961e56ca1230559aba1db26324dca" address="unix:///run/containerd/s/82e3ec2b2641574470e80fcbcd0211e7b850f97862c8bc11465005cd98352a8c" protocol=ttrpc version=3 Oct 30 13:27:55.999254 systemd[1]: Started cri-containerd-9ffc68702a063fcbca0b59472279a58243a961e56ca1230559aba1db26324dca.scope - libcontainer container 9ffc68702a063fcbca0b59472279a58243a961e56ca1230559aba1db26324dca. 
Oct 30 13:27:56.011008 containerd[1614]: time="2025-10-30T13:27:56.010940826Z" level=info msg="Container 832689b4131649530ea942180ceff2cd08482376f42f2f813043504b65247c67: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:27:56.018123 containerd[1614]: time="2025-10-30T13:27:56.018079216Z" level=info msg="CreateContainer within sandbox \"b94ba44504e54b41bfcc90078c31e8070e1bc90a8a8a4bbeff5f76c9755e4809\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"832689b4131649530ea942180ceff2cd08482376f42f2f813043504b65247c67\"" Oct 30 13:27:56.019780 containerd[1614]: time="2025-10-30T13:27:56.019740697Z" level=info msg="StartContainer for \"832689b4131649530ea942180ceff2cd08482376f42f2f813043504b65247c67\"" Oct 30 13:27:56.020817 containerd[1614]: time="2025-10-30T13:27:56.020781452Z" level=info msg="connecting to shim 832689b4131649530ea942180ceff2cd08482376f42f2f813043504b65247c67" address="unix:///run/containerd/s/ea76aa6e40f271d73e53a62aba188607808497a5796de2d60b0ebf0bd1003a6b" protocol=ttrpc version=3 Oct 30 13:27:56.047528 systemd[1]: Started cri-containerd-832689b4131649530ea942180ceff2cd08482376f42f2f813043504b65247c67.scope - libcontainer container 832689b4131649530ea942180ceff2cd08482376f42f2f813043504b65247c67. Oct 30 13:27:56.053856 containerd[1614]: time="2025-10-30T13:27:56.053812702Z" level=info msg="StartContainer for \"9ffc68702a063fcbca0b59472279a58243a961e56ca1230559aba1db26324dca\" returns successfully" Oct 30 13:27:56.093101 containerd[1614]: time="2025-10-30T13:27:56.092842402Z" level=info msg="StartContainer for \"832689b4131649530ea942180ceff2cd08482376f42f2f813043504b65247c67\" returns successfully" Oct 30 13:27:56.643871 kubelet[2773]: E1030 13:27:56.643806 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:56.646423 kubelet[2773]: E1030 13:27:56.646351 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:56.678588 kubelet[2773]: I1030 13:27:56.678318 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-49c6x" podStartSLOduration=29.678296589 podStartE2EDuration="29.678296589s" podCreationTimestamp="2025-10-30 13:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:27:56.663816547 +0000 UTC m=+34.270361196" watchObservedRunningTime="2025-10-30 13:27:56.678296589 +0000 UTC m=+34.284841238" Oct 30 13:27:57.647908 kubelet[2773]: E1030 13:27:57.647849 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:57.647908 kubelet[2773]: E1030 13:27:57.647884 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:58.649365 kubelet[2773]: E1030 13:27:58.649303 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:27:58.649984 kubelet[2773]: E1030 13:27:58.649378 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:28:00.028602 systemd[1]: Started sshd@7-10.0.0.124:22-10.0.0.1:57856.service - OpenSSH per-connection server daemon (10.0.0.1:57856). Oct 30 13:28:00.101517 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 57856 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:00.103613 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:00.108574 systemd-logind[1593]: New session 8 of user core. Oct 30 13:28:00.117232 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 30 13:28:00.256365 sshd[4122]: Connection closed by 10.0.0.1 port 57856 Oct 30 13:28:00.256660 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:00.261290 systemd[1]: sshd@7-10.0.0.124:22-10.0.0.1:57856.service: Deactivated successfully. Oct 30 13:28:00.263597 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 13:28:00.264503 systemd-logind[1593]: Session 8 logged out. Waiting for processes to exit. Oct 30 13:28:00.265735 systemd-logind[1593]: Removed session 8. Oct 30 13:28:05.269506 systemd[1]: Started sshd@8-10.0.0.124:22-10.0.0.1:54562.service - OpenSSH per-connection server daemon (10.0.0.1:54562). Oct 30 13:28:05.323494 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 54562 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:05.324793 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:05.329245 systemd-logind[1593]: New session 9 of user core. Oct 30 13:28:05.346273 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 30 13:28:05.453318 sshd[4139]: Connection closed by 10.0.0.1 port 54562 Oct 30 13:28:05.453733 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:05.458555 systemd[1]: sshd@8-10.0.0.124:22-10.0.0.1:54562.service: Deactivated successfully. Oct 30 13:28:05.460624 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 13:28:05.461659 systemd-logind[1593]: Session 9 logged out. Waiting for processes to exit. Oct 30 13:28:05.463225 systemd-logind[1593]: Removed session 9. Oct 30 13:28:10.476477 systemd[1]: Started sshd@9-10.0.0.124:22-10.0.0.1:54564.service - OpenSSH per-connection server daemon (10.0.0.1:54564). Oct 30 13:28:10.540931 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 54564 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:10.542705 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:10.547960 systemd-logind[1593]: New session 10 of user core. Oct 30 13:28:10.559233 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 30 13:28:10.632889 sshd[4156]: Connection closed by 10.0.0.1 port 54564 Oct 30 13:28:10.633215 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:10.638645 systemd[1]: sshd@9-10.0.0.124:22-10.0.0.1:54564.service: Deactivated successfully. Oct 30 13:28:10.641153 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 13:28:10.641983 systemd-logind[1593]: Session 10 logged out. Waiting for processes to exit. Oct 30 13:28:10.643448 systemd-logind[1593]: Removed session 10. Oct 30 13:28:15.652247 systemd[1]: Started sshd@10-10.0.0.124:22-10.0.0.1:53902.service - OpenSSH per-connection server daemon (10.0.0.1:53902). 
Oct 30 13:28:15.710143 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 53902 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:15.711497 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:15.716346 systemd-logind[1593]: New session 11 of user core. Oct 30 13:28:15.729168 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 30 13:28:15.867351 sshd[4173]: Connection closed by 10.0.0.1 port 53902 Oct 30 13:28:15.867727 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:15.883166 systemd[1]: sshd@10-10.0.0.124:22-10.0.0.1:53902.service: Deactivated successfully. Oct 30 13:28:15.885135 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 13:28:15.886027 systemd-logind[1593]: Session 11 logged out. Waiting for processes to exit. Oct 30 13:28:15.889055 systemd[1]: Started sshd@11-10.0.0.124:22-10.0.0.1:53906.service - OpenSSH per-connection server daemon (10.0.0.1:53906). Oct 30 13:28:15.889966 systemd-logind[1593]: Removed session 11. Oct 30 13:28:15.944274 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 53906 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:15.946203 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:15.951326 systemd-logind[1593]: New session 12 of user core. Oct 30 13:28:15.958173 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 30 13:28:16.188469 sshd[4190]: Connection closed by 10.0.0.1 port 53906 Oct 30 13:28:16.188835 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:16.198453 systemd[1]: sshd@11-10.0.0.124:22-10.0.0.1:53906.service: Deactivated successfully. Oct 30 13:28:16.201904 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 13:28:16.203363 systemd-logind[1593]: Session 12 logged out. Waiting for processes to exit. Oct 30 13:28:16.206886 systemd-logind[1593]: Removed session 12. Oct 30 13:28:16.209485 systemd[1]: Started sshd@12-10.0.0.124:22-10.0.0.1:53920.service - OpenSSH per-connection server daemon (10.0.0.1:53920). Oct 30 13:28:16.260731 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 53920 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:16.262090 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:16.266668 systemd-logind[1593]: New session 13 of user core. Oct 30 13:28:16.282165 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 13:28:16.356967 sshd[4204]: Connection closed by 10.0.0.1 port 53920 Oct 30 13:28:16.357304 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:16.361564 systemd[1]: sshd@12-10.0.0.124:22-10.0.0.1:53920.service: Deactivated successfully. Oct 30 13:28:16.363841 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 13:28:16.364748 systemd-logind[1593]: Session 13 logged out. Waiting for processes to exit. Oct 30 13:28:16.366116 systemd-logind[1593]: Removed session 13. Oct 30 13:28:21.373870 systemd[1]: Started sshd@13-10.0.0.124:22-10.0.0.1:53932.service - OpenSSH per-connection server daemon (10.0.0.1:53932). 
Oct 30 13:28:21.431842 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 53932 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:21.433513 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:21.439770 systemd-logind[1593]: New session 14 of user core. Oct 30 13:28:21.445135 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 13:28:21.511599 sshd[4221]: Connection closed by 10.0.0.1 port 53932 Oct 30 13:28:21.511932 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:21.515904 systemd[1]: sshd@13-10.0.0.124:22-10.0.0.1:53932.service: Deactivated successfully. Oct 30 13:28:21.517943 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 13:28:21.518916 systemd-logind[1593]: Session 14 logged out. Waiting for processes to exit. Oct 30 13:28:21.520142 systemd-logind[1593]: Removed session 14. Oct 30 13:28:26.526403 systemd[1]: Started sshd@14-10.0.0.124:22-10.0.0.1:53820.service - OpenSSH per-connection server daemon (10.0.0.1:53820). Oct 30 13:28:26.583440 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 53820 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:26.585360 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:26.590498 systemd-logind[1593]: New session 15 of user core. Oct 30 13:28:26.600244 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 30 13:28:26.671856 sshd[4240]: Connection closed by 10.0.0.1 port 53820 Oct 30 13:28:26.672268 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:26.685847 systemd[1]: sshd@14-10.0.0.124:22-10.0.0.1:53820.service: Deactivated successfully. Oct 30 13:28:26.687726 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 13:28:26.688630 systemd-logind[1593]: Session 15 logged out. Waiting for processes to exit. Oct 30 13:28:26.691324 systemd[1]: Started sshd@15-10.0.0.124:22-10.0.0.1:53836.service - OpenSSH per-connection server daemon (10.0.0.1:53836). Oct 30 13:28:26.692023 systemd-logind[1593]: Removed session 15. Oct 30 13:28:26.748646 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 53836 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:26.749922 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:26.754391 systemd-logind[1593]: New session 16 of user core. Oct 30 13:28:26.764177 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 30 13:28:27.024033 sshd[4257]: Connection closed by 10.0.0.1 port 53836 Oct 30 13:28:27.024568 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:27.036845 systemd[1]: sshd@15-10.0.0.124:22-10.0.0.1:53836.service: Deactivated successfully. Oct 30 13:28:27.038928 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 13:28:27.039853 systemd-logind[1593]: Session 16 logged out. Waiting for processes to exit. Oct 30 13:28:27.042839 systemd[1]: Started sshd@16-10.0.0.124:22-10.0.0.1:53848.service - OpenSSH per-connection server daemon (10.0.0.1:53848). Oct 30 13:28:27.043881 systemd-logind[1593]: Removed session 16. 
Oct 30 13:28:27.115251 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 53848 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:27.117170 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:27.121762 systemd-logind[1593]: New session 17 of user core. Oct 30 13:28:27.130129 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 30 13:28:27.614495 sshd[4272]: Connection closed by 10.0.0.1 port 53848 Oct 30 13:28:27.614884 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:27.626833 systemd[1]: sshd@16-10.0.0.124:22-10.0.0.1:53848.service: Deactivated successfully. Oct 30 13:28:27.629219 systemd[1]: session-17.scope: Deactivated successfully. Oct 30 13:28:27.631510 systemd-logind[1593]: Session 17 logged out. Waiting for processes to exit. Oct 30 13:28:27.634902 systemd[1]: Started sshd@17-10.0.0.124:22-10.0.0.1:53852.service - OpenSSH per-connection server daemon (10.0.0.1:53852). Oct 30 13:28:27.636558 systemd-logind[1593]: Removed session 17. Oct 30 13:28:27.688815 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 53852 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:27.690585 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:27.695525 systemd-logind[1593]: New session 18 of user core. Oct 30 13:28:27.705140 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 30 13:28:27.938430 sshd[4295]: Connection closed by 10.0.0.1 port 53852 Oct 30 13:28:27.938773 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:27.949850 systemd[1]: sshd@17-10.0.0.124:22-10.0.0.1:53852.service: Deactivated successfully. Oct 30 13:28:27.952196 systemd[1]: session-18.scope: Deactivated successfully. Oct 30 13:28:27.954473 systemd-logind[1593]: Session 18 logged out. Waiting for processes to exit. Oct 30 13:28:27.956881 systemd[1]: Started sshd@18-10.0.0.124:22-10.0.0.1:53862.service - OpenSSH per-connection server daemon (10.0.0.1:53862). Oct 30 13:28:27.957702 systemd-logind[1593]: Removed session 18. Oct 30 13:28:28.019328 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 53862 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:28.021245 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:28.026903 systemd-logind[1593]: New session 19 of user core. Oct 30 13:28:28.038265 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 30 13:28:28.110935 sshd[4310]: Connection closed by 10.0.0.1 port 53862 Oct 30 13:28:28.111272 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:28.116446 systemd[1]: sshd@18-10.0.0.124:22-10.0.0.1:53862.service: Deactivated successfully. Oct 30 13:28:28.119211 systemd[1]: session-19.scope: Deactivated successfully. Oct 30 13:28:28.120112 systemd-logind[1593]: Session 19 logged out. Waiting for processes to exit. Oct 30 13:28:28.121833 systemd-logind[1593]: Removed session 19. Oct 30 13:28:33.136136 systemd[1]: Started sshd@19-10.0.0.124:22-10.0.0.1:53398.service - OpenSSH per-connection server daemon (10.0.0.1:53398). 
Oct 30 13:28:33.200030 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 53398 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:33.201719 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:33.206903 systemd-logind[1593]: New session 20 of user core. Oct 30 13:28:33.217209 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 30 13:28:33.294950 sshd[4329]: Connection closed by 10.0.0.1 port 53398 Oct 30 13:28:33.295377 sshd-session[4326]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:33.300758 systemd[1]: sshd@19-10.0.0.124:22-10.0.0.1:53398.service: Deactivated successfully. Oct 30 13:28:33.303141 systemd[1]: session-20.scope: Deactivated successfully. Oct 30 13:28:33.304168 systemd-logind[1593]: Session 20 logged out. Waiting for processes to exit. Oct 30 13:28:33.305390 systemd-logind[1593]: Removed session 20. Oct 30 13:28:38.312614 systemd[1]: Started sshd@20-10.0.0.124:22-10.0.0.1:53402.service - OpenSSH per-connection server daemon (10.0.0.1:53402). Oct 30 13:28:38.371163 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 53402 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:38.372795 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:38.377650 systemd-logind[1593]: New session 21 of user core. Oct 30 13:28:38.387180 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 30 13:28:38.457795 sshd[4347]: Connection closed by 10.0.0.1 port 53402 Oct 30 13:28:38.458101 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:38.462123 systemd[1]: sshd@20-10.0.0.124:22-10.0.0.1:53402.service: Deactivated successfully. Oct 30 13:28:38.464162 systemd[1]: session-21.scope: Deactivated successfully. Oct 30 13:28:38.466332 systemd-logind[1593]: Session 21 logged out. Waiting for processes to exit. Oct 30 13:28:38.467267 systemd-logind[1593]: Removed session 21. Oct 30 13:28:43.470732 systemd[1]: Started sshd@21-10.0.0.124:22-10.0.0.1:37064.service - OpenSSH per-connection server daemon (10.0.0.1:37064). Oct 30 13:28:43.525853 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 37064 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:43.527575 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:43.532036 systemd-logind[1593]: New session 22 of user core. Oct 30 13:28:43.542138 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 30 13:28:43.607653 sshd[4363]: Connection closed by 10.0.0.1 port 37064 Oct 30 13:28:43.607941 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:43.612739 systemd[1]: sshd@21-10.0.0.124:22-10.0.0.1:37064.service: Deactivated successfully. Oct 30 13:28:43.614724 systemd[1]: session-22.scope: Deactivated successfully. Oct 30 13:28:43.615631 systemd-logind[1593]: Session 22 logged out. Waiting for processes to exit. Oct 30 13:28:43.616786 systemd-logind[1593]: Removed session 22. Oct 30 13:28:46.535256 kubelet[2773]: E1030 13:28:46.535191 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:28:48.625890 systemd[1]: Started sshd@22-10.0.0.124:22-10.0.0.1:37076.service - OpenSSH per-connection server daemon (10.0.0.1:37076). 
Oct 30 13:28:48.681650 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 37076 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:48.682947 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:48.687076 systemd-logind[1593]: New session 23 of user core. Oct 30 13:28:48.696123 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 30 13:28:48.758566 sshd[4379]: Connection closed by 10.0.0.1 port 37076 Oct 30 13:28:48.759088 sshd-session[4376]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:48.778646 systemd[1]: sshd@22-10.0.0.124:22-10.0.0.1:37076.service: Deactivated successfully. Oct 30 13:28:48.780506 systemd[1]: session-23.scope: Deactivated successfully. Oct 30 13:28:48.781335 systemd-logind[1593]: Session 23 logged out. Waiting for processes to exit. Oct 30 13:28:48.784142 systemd[1]: Started sshd@23-10.0.0.124:22-10.0.0.1:37086.service - OpenSSH per-connection server daemon (10.0.0.1:37086). Oct 30 13:28:48.784856 systemd-logind[1593]: Removed session 23. Oct 30 13:28:48.842697 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 37086 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:48.844020 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:48.848471 systemd-logind[1593]: New session 24 of user core. Oct 30 13:28:48.864150 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 30 13:28:50.252074 kubelet[2773]: I1030 13:28:50.251982 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gcjk4" podStartSLOduration=83.251959596 podStartE2EDuration="1m23.251959596s" podCreationTimestamp="2025-10-30 13:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:27:56.697916115 +0000 UTC m=+34.304460764" watchObservedRunningTime="2025-10-30 13:28:50.251959596 +0000 UTC m=+87.858504245" Oct 30 13:28:50.260120 containerd[1614]: time="2025-10-30T13:28:50.259887219Z" level=info msg="StopContainer for \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\" with timeout 30 (s)" Oct 30 13:28:50.262356 containerd[1614]: time="2025-10-30T13:28:50.262060901Z" level=info msg="Stop container \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\" with signal terminated" Oct 30 13:28:50.278332 systemd[1]: cri-containerd-9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28.scope: Deactivated successfully. 
Oct 30 13:28:50.281441 containerd[1614]: time="2025-10-30T13:28:50.281298608Z" level=info msg="received exit event container_id:\"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\" id:\"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\" pid:3364 exited_at:{seconds:1761830930 nanos:280601674}" Oct 30 13:28:50.281707 containerd[1614]: time="2025-10-30T13:28:50.281575234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\" id:\"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\" pid:3364 exited_at:{seconds:1761830930 nanos:280601674}" Oct 30 13:28:50.307937 containerd[1614]: time="2025-10-30T13:28:50.307898603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" id:\"81672ef90bb482f765b9f7f5b4b15165d530428fbb85789575dca5dadaec1d47\" pid:4424 exited_at:{seconds:1761830930 nanos:307642586}" Oct 30 13:28:50.308969 containerd[1614]: time="2025-10-30T13:28:50.308938019Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 13:28:50.309633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28-rootfs.mount: Deactivated successfully. Oct 30 13:28:50.320526 containerd[1614]: time="2025-10-30T13:28:50.320485634Z" level=info msg="StopContainer for \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\" returns successfully" Oct 30 13:28:50.320713 containerd[1614]: time="2025-10-30T13:28:50.320615722Z" level=info msg="StopContainer for \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" with timeout 2 (s)" Oct 30 13:28:50.320883 containerd[1614]: time="2025-10-30T13:28:50.320857521Z" level=info msg="Stop container \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" with signal terminated" Oct 30 13:28:50.321045 containerd[1614]: time="2025-10-30T13:28:50.320937804Z" level=info msg="StopPodSandbox for \"d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb\"" Oct 30 13:28:50.321124 containerd[1614]: time="2025-10-30T13:28:50.321073843Z" level=info msg="Container to stop \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 30 13:28:50.328853 systemd[1]: cri-containerd-d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb.scope: Deactivated successfully. Oct 30 13:28:50.332529 systemd-networkd[1525]: lxc_health: Link DOWN Oct 30 13:28:50.332890 systemd-networkd[1525]: lxc_health: Lost carrier Oct 30 13:28:50.336574 containerd[1614]: time="2025-10-30T13:28:50.336528697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb\" id:\"d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb\" pid:3102 exit_status:137 exited_at:{seconds:1761830930 nanos:336195653}" Oct 30 13:28:50.353802 systemd[1]: cri-containerd-3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8.scope: Deactivated successfully. 
Oct 30 13:28:50.354304 systemd[1]: cri-containerd-3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8.scope: Consumed 6.851s CPU time, 123.4M memory peak, 2.4M read from disk, 13.3M written to disk. Oct 30 13:28:50.356833 containerd[1614]: time="2025-10-30T13:28:50.356697133Z" level=info msg="received exit event container_id:\"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" id:\"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" pid:3436 exited_at:{seconds:1761830930 nanos:356497073}" Oct 30 13:28:50.370631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb-rootfs.mount: Deactivated successfully. Oct 30 13:28:50.378778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8-rootfs.mount: Deactivated successfully. Oct 30 13:28:50.399527 containerd[1614]: time="2025-10-30T13:28:50.399487199Z" level=info msg="shim disconnected" id=d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb namespace=k8s.io Oct 30 13:28:50.399527 containerd[1614]: time="2025-10-30T13:28:50.399524209Z" level=warning msg="cleaning up after shim disconnected" id=d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb namespace=k8s.io Oct 30 13:28:50.409688 containerd[1614]: time="2025-10-30T13:28:50.399536734Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 30 13:28:50.437099 containerd[1614]: time="2025-10-30T13:28:50.436362731Z" level=info msg="TearDown network for sandbox \"d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb\" successfully" Oct 30 13:28:50.437099 containerd[1614]: time="2025-10-30T13:28:50.436400643Z" level=info msg="StopPodSandbox for \"d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb\" returns successfully" Oct 30 13:28:50.437099 containerd[1614]: time="2025-10-30T13:28:50.436545217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" id:\"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" pid:3436 exited_at:{seconds:1761830930 nanos:356497073}" Oct 30 13:28:50.438550 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb-shm.mount: Deactivated successfully. 
Oct 30 13:28:50.439922 containerd[1614]: time="2025-10-30T13:28:50.439885949Z" level=info msg="StopContainer for \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" returns successfully" Oct 30 13:28:50.440544 containerd[1614]: time="2025-10-30T13:28:50.440307471Z" level=info msg="StopPodSandbox for \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\"" Oct 30 13:28:50.440544 containerd[1614]: time="2025-10-30T13:28:50.440365781Z" level=info msg="Container to stop \"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 30 13:28:50.440544 containerd[1614]: time="2025-10-30T13:28:50.440376281Z" level=info msg="Container to stop \"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 30 13:28:50.440544 containerd[1614]: time="2025-10-30T13:28:50.440385890Z" level=info msg="Container to stop \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 30 13:28:50.440544 containerd[1614]: time="2025-10-30T13:28:50.440394496Z" level=info msg="Container to stop \"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 30 13:28:50.440544 containerd[1614]: time="2025-10-30T13:28:50.440404014Z" level=info msg="Container to stop \"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 30 13:28:50.447701 containerd[1614]: time="2025-10-30T13:28:50.447658326Z" level=info msg="received exit event sandbox_id:\"d9875f86344162fd851a04a61d762091c4a508c4aeac4865c6bd5b37819fffdb\" exit_status:137 exited_at:{seconds:1761830930 nanos:336195653}" Oct 30 13:28:50.449651 systemd[1]: cri-containerd-95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4.scope: Deactivated successfully. Oct 30 13:28:50.451334 containerd[1614]: time="2025-10-30T13:28:50.451296975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" id:\"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" pid:3138 exit_status:137 exited_at:{seconds:1761830930 nanos:450808336}" Oct 30 13:28:50.479487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4-rootfs.mount: Deactivated successfully. 
Oct 30 13:28:50.485569 containerd[1614]: time="2025-10-30T13:28:50.485336554Z" level=info msg="shim disconnected" id=95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4 namespace=k8s.io Oct 30 13:28:50.485569 containerd[1614]: time="2025-10-30T13:28:50.485376089Z" level=warning msg="cleaning up after shim disconnected" id=95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4 namespace=k8s.io Oct 30 13:28:50.485569 containerd[1614]: time="2025-10-30T13:28:50.485386319Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 30 13:28:50.501050 containerd[1614]: time="2025-10-30T13:28:50.500632536Z" level=info msg="received exit event sandbox_id:\"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" exit_status:137 exited_at:{seconds:1761830930 nanos:450808336}" Oct 30 13:28:50.501050 containerd[1614]: time="2025-10-30T13:28:50.500865038Z" level=info msg="TearDown network for sandbox \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" successfully" Oct 30 13:28:50.501050 containerd[1614]: time="2025-10-30T13:28:50.500888843Z" level=info msg="StopPodSandbox for \"95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4\" returns successfully" Oct 30 13:28:50.558087 kubelet[2773]: I1030 13:28:50.557958 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 30 13:28:50.558087 kubelet[2773]: I1030 13:28:50.557899 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-host-proc-sys-kernel\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558087 kubelet[2773]: I1030 13:28:50.558075 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cilium-cgroup\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558340 kubelet[2773]: I1030 13:28:50.558102 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-host-proc-sys-net\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558340 kubelet[2773]: I1030 13:28:50.558129 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-hostproc\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558340 kubelet[2773]: I1030 13:28:50.558133 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 30 13:28:50.558340 kubelet[2773]: I1030 13:28:50.558148 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cni-path\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558340 kubelet[2773]: I1030 13:28:50.558167 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-lib-modules\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558340 kubelet[2773]: I1030 13:28:50.558174 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-hostproc" (OuterVolumeSpecName: "hostproc") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 30 13:28:50.558543 kubelet[2773]: I1030 13:28:50.558192 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-bpf-maps\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558543 kubelet[2773]: I1030 13:28:50.558196 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 30 13:28:50.558543 kubelet[2773]: I1030 13:28:50.558231 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9651b8d4-a317-45bc-9edd-c5b34ed9b061-clustermesh-secrets\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558543 kubelet[2773]: I1030 13:28:50.558254 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-etc-cni-netd\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558543 kubelet[2773]: I1030 13:28:50.558276 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9651b8d4-a317-45bc-9edd-c5b34ed9b061-hubble-tls\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558543 kubelet[2773]: I1030 13:28:50.558230 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 30 13:28:50.558737 kubelet[2773]: I1030 13:28:50.558295 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-xtables-lock\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558737 kubelet[2773]: I1030 13:28:50.558319 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a75c713-d702-44fd-9684-11c60d913520-cilium-config-path\") pod \"1a75c713-d702-44fd-9684-11c60d913520\" (UID: \"1a75c713-d702-44fd-9684-11c60d913520\") " Oct 30 13:28:50.558737 kubelet[2773]: I1030 13:28:50.558341 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fq9w\" (UniqueName: \"kubernetes.io/projected/1a75c713-d702-44fd-9684-11c60d913520-kube-api-access-5fq9w\") pod \"1a75c713-d702-44fd-9684-11c60d913520\" (UID: \"1a75c713-d702-44fd-9684-11c60d913520\") " Oct 30 13:28:50.558737 kubelet[2773]: I1030 13:28:50.558364 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfdpv\" (UniqueName: \"kubernetes.io/projected/9651b8d4-a317-45bc-9edd-c5b34ed9b061-kube-api-access-jfdpv\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558737 kubelet[2773]: I1030 13:28:50.558384 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cilium-run\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558737 kubelet[2773]: I1030 13:28:50.558406 2773 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cilium-config-path\") pod \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\" (UID: \"9651b8d4-a317-45bc-9edd-c5b34ed9b061\") " Oct 30 13:28:50.558925 kubelet[2773]: I1030 13:28:50.558452 2773 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.558925 kubelet[2773]: I1030 13:28:50.558465 2773 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.558925 kubelet[2773]: I1030 13:28:50.558478 2773 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.558925 kubelet[2773]: I1030 13:28:50.558489 2773 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.558925 kubelet[2773]: I1030 13:28:50.558499 2773 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 30 
13:28:50.558925 kubelet[2773]: I1030 13:28:50.558245 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cni-path" (OuterVolumeSpecName: "cni-path") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 30 13:28:50.558925 kubelet[2773]: I1030 13:28:50.558281 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 30 13:28:50.559209 kubelet[2773]: I1030 13:28:50.558902 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 30 13:28:50.559209 kubelet[2773]: I1030 13:28:50.559054 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 30 13:28:50.562287 kubelet[2773]: I1030 13:28:50.562084 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 30 13:28:50.565444 kubelet[2773]: I1030 13:28:50.565403 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9651b8d4-a317-45bc-9edd-c5b34ed9b061-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 13:28:50.565965 kubelet[2773]: I1030 13:28:50.565939 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9651b8d4-a317-45bc-9edd-c5b34ed9b061-kube-api-access-jfdpv" (OuterVolumeSpecName: "kube-api-access-jfdpv") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "kube-api-access-jfdpv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 13:28:50.566340 kubelet[2773]: I1030 13:28:50.566297 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a75c713-d702-44fd-9684-11c60d913520-kube-api-access-5fq9w" (OuterVolumeSpecName: "kube-api-access-5fq9w") pod "1a75c713-d702-44fd-9684-11c60d913520" (UID: "1a75c713-d702-44fd-9684-11c60d913520"). InnerVolumeSpecName "kube-api-access-5fq9w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 13:28:50.566694 kubelet[2773]: I1030 13:28:50.566662 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 13:28:50.566805 kubelet[2773]: I1030 13:28:50.566771 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9651b8d4-a317-45bc-9edd-c5b34ed9b061-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9651b8d4-a317-45bc-9edd-c5b34ed9b061" (UID: "9651b8d4-a317-45bc-9edd-c5b34ed9b061"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 13:28:50.569345 kubelet[2773]: I1030 13:28:50.569301 2773 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a75c713-d702-44fd-9684-11c60d913520-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a75c713-d702-44fd-9684-11c60d913520" (UID: "1a75c713-d702-44fd-9684-11c60d913520"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 13:28:50.659148 kubelet[2773]: I1030 13:28:50.658950 2773 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.659148 kubelet[2773]: I1030 13:28:50.658987 2773 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.659148 kubelet[2773]: I1030 13:28:50.659027 2773 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a75c713-d702-44fd-9684-11c60d913520-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.659148 kubelet[2773]: I1030 13:28:50.659042 2773 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5fq9w\" (UniqueName: \"kubernetes.io/projected/1a75c713-d702-44fd-9684-11c60d913520-kube-api-access-5fq9w\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.659148 kubelet[2773]: I1030 13:28:50.659055 2773 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9651b8d4-a317-45bc-9edd-c5b34ed9b061-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.659148 kubelet[2773]: I1030 13:28:50.659066 2773 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.659148 kubelet[2773]: I1030 13:28:50.659077 2773 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9651b8d4-a317-45bc-9edd-c5b34ed9b061-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.659148 kubelet[2773]: I1030 13:28:50.659090 2773 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 30 
13:28:50.659480 kubelet[2773]: I1030 13:28:50.659103 2773 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jfdpv\" (UniqueName: \"kubernetes.io/projected/9651b8d4-a317-45bc-9edd-c5b34ed9b061-kube-api-access-jfdpv\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.659480 kubelet[2773]: I1030 13:28:50.659116 2773 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.659480 kubelet[2773]: I1030 13:28:50.659127 2773 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9651b8d4-a317-45bc-9edd-c5b34ed9b061-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 30 13:28:50.786729 kubelet[2773]: I1030 13:28:50.786698 2773 scope.go:117] "RemoveContainer" containerID="3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8" Oct 30 13:28:50.788846 containerd[1614]: time="2025-10-30T13:28:50.788787712Z" level=info msg="RemoveContainer for \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\"" Oct 30 13:28:50.795607 systemd[1]: Removed slice kubepods-burstable-pod9651b8d4_a317_45bc_9edd_c5b34ed9b061.slice - libcontainer container kubepods-burstable-pod9651b8d4_a317_45bc_9edd_c5b34ed9b061.slice. Oct 30 13:28:50.795745 systemd[1]: kubepods-burstable-pod9651b8d4_a317_45bc_9edd_c5b34ed9b061.slice: Consumed 6.974s CPU time, 123.8M memory peak, 2.4M read from disk, 15.6M written to disk. Oct 30 13:28:50.797277 containerd[1614]: time="2025-10-30T13:28:50.797224173Z" level=info msg="RemoveContainer for \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" returns successfully" Oct 30 13:28:50.797467 systemd[1]: Removed slice kubepods-besteffort-pod1a75c713_d702_44fd_9684_11c60d913520.slice - libcontainer container kubepods-besteffort-pod1a75c713_d702_44fd_9684_11c60d913520.slice. 
Oct 30 13:28:50.797571 kubelet[2773]: I1030 13:28:50.797538 2773 scope.go:117] "RemoveContainer" containerID="e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee" Oct 30 13:28:50.799669 containerd[1614]: time="2025-10-30T13:28:50.799134034Z" level=info msg="RemoveContainer for \"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\"" Oct 30 13:28:50.804807 containerd[1614]: time="2025-10-30T13:28:50.804763216Z" level=info msg="RemoveContainer for \"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\" returns successfully" Oct 30 13:28:50.805053 kubelet[2773]: I1030 13:28:50.805017 2773 scope.go:117] "RemoveContainer" containerID="3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6" Oct 30 13:28:50.809127 containerd[1614]: time="2025-10-30T13:28:50.808912246Z" level=info msg="RemoveContainer for \"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\"" Oct 30 13:28:50.825972 containerd[1614]: time="2025-10-30T13:28:50.825911986Z" level=info msg="RemoveContainer for \"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\" returns successfully" Oct 30 13:28:50.826185 kubelet[2773]: I1030 13:28:50.826153 2773 scope.go:117] "RemoveContainer" containerID="c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd" Oct 30 13:28:50.827619 containerd[1614]: time="2025-10-30T13:28:50.827589286Z" level=info msg="RemoveContainer for \"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\"" Oct 30 13:28:50.837375 containerd[1614]: time="2025-10-30T13:28:50.837327912Z" level=info msg="RemoveContainer for \"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\" returns successfully" Oct 30 13:28:50.837653 kubelet[2773]: I1030 13:28:50.837590 2773 scope.go:117] "RemoveContainer" containerID="233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427" Oct 30 13:28:50.839087 containerd[1614]: time="2025-10-30T13:28:50.839059494Z" level=info msg="RemoveContainer for \"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\"" Oct 30 13:28:50.843700 containerd[1614]: time="2025-10-30T13:28:50.843654992Z" level=info msg="RemoveContainer for \"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\" returns successfully" Oct 30 13:28:50.843948 kubelet[2773]: I1030 13:28:50.843920 2773 scope.go:117] "RemoveContainer" containerID="3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8" Oct 30 13:28:50.850186 containerd[1614]: time="2025-10-30T13:28:50.850135974Z" level=error msg="ContainerStatus for \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\": not found" Oct 30 13:28:50.850343 kubelet[2773]: E1030 13:28:50.850305 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\": not found" containerID="3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8" Oct 30 13:28:50.850456 kubelet[2773]: I1030 13:28:50.850355 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8"} err="failed to get container status \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"3da267113c2649808b6e05cc065204fb1b10e284f349c59ccf7b34fabeeaefc8\": not found" Oct 30 13:28:50.850456 kubelet[2773]: I1030 13:28:50.850444 2773 scope.go:117] "RemoveContainer" containerID="e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee" Oct 30 13:28:50.850651 containerd[1614]: time="2025-10-30T13:28:50.850613682Z" level=error msg="ContainerStatus for \"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\": not found" Oct 30 13:28:50.850776 kubelet[2773]: E1030 13:28:50.850735 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\": not found" containerID="e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee" Oct 30 13:28:50.850776 kubelet[2773]: I1030 13:28:50.850762 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee"} err="failed to get container status \"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"e71c04a38ed3d41887c33f80a9fd7a547f4da2224328bb81a0c657629db727ee\": not found" Oct 30 13:28:50.850885 kubelet[2773]: I1030 13:28:50.850787 2773 scope.go:117] "RemoveContainer" containerID="3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6" Oct 30 13:28:50.850972 containerd[1614]: time="2025-10-30T13:28:50.850940974Z" level=error msg="ContainerStatus for \"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\": not found" Oct 30 13:28:50.851104 kubelet[2773]: E1030 13:28:50.851077 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\": not found" containerID="3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6" Oct 30 13:28:50.851154 kubelet[2773]: I1030 13:28:50.851106 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6"} err="failed to get container status \"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"3dd7d7f0fd8b3b532635d62284f235875b4d0589e29a8006f07d9ebac820e5f6\": not found" Oct 30 13:28:50.851154 kubelet[2773]: I1030 13:28:50.851128 2773 scope.go:117] "RemoveContainer" containerID="c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd" Oct 30 13:28:50.851356 containerd[1614]: time="2025-10-30T13:28:50.851324053Z" level=error msg="ContainerStatus for \"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\": not found" Oct 30 13:28:50.851503 kubelet[2773]: E1030 13:28:50.851445 2773 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\": not found" containerID="c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd" Oct 30 13:28:50.851542 kubelet[2773]: I1030 13:28:50.851508 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd"} err="failed to get container status \"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5cda568011f45146fbf0b6725925ce5d6c406cd3c88fdc7c3c035185c6043fd\": not found" Oct 30 13:28:50.851542 kubelet[2773]: I1030 13:28:50.851524 2773 scope.go:117] "RemoveContainer" containerID="233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427" Oct 30 13:28:50.851697 containerd[1614]: time="2025-10-30T13:28:50.851662366Z" level=error msg="ContainerStatus for \"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\": not found" Oct 30 13:28:50.851820 kubelet[2773]: E1030 13:28:50.851796 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\": not found" containerID="233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427" Oct 30 13:28:50.851869 kubelet[2773]: I1030 13:28:50.851820 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427"} err="failed to get container status \"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\": rpc error: code = NotFound desc = an error occurred when try to find container \"233a44242b3014d22424d3de53f4b86654c5ffac2d13ad7cbe29de7772784427\": not found" Oct 30 13:28:50.851869 kubelet[2773]: I1030 13:28:50.851836 2773 scope.go:117] "RemoveContainer" containerID="9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28" Oct 30 13:28:50.853264 containerd[1614]: time="2025-10-30T13:28:50.853231629Z" level=info msg="RemoveContainer for \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\"" Oct 30 13:28:50.859614 containerd[1614]: time="2025-10-30T13:28:50.859551806Z" level=info msg="RemoveContainer for \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\" returns successfully" Oct 30 13:28:50.859985 kubelet[2773]: I1030 13:28:50.859942 2773 scope.go:117] "RemoveContainer" containerID="9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28" Oct 30 13:28:50.860262 containerd[1614]: time="2025-10-30T13:28:50.860209967Z" level=error msg="ContainerStatus for \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\": not found" Oct 30 13:28:50.860419 kubelet[2773]: E1030 13:28:50.860366 2773 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\": not 
found" containerID="9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28" Oct 30 13:28:50.860472 kubelet[2773]: I1030 13:28:50.860407 2773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28"} err="failed to get container status \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b0e01f68c2d12db8d89ab8ce71975b00ca571fd29a1d28c5b32a5b1c088ba28\": not found" Oct 30 13:28:51.308966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95d079594ad70245e02e23497946500de389303900fad49d19079d7b265198a4-shm.mount: Deactivated successfully. Oct 30 13:28:51.309111 systemd[1]: var-lib-kubelet-pods-9651b8d4\x2da317\x2d45bc\x2d9edd\x2dc5b34ed9b061-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 30 13:28:51.309193 systemd[1]: var-lib-kubelet-pods-1a75c713\x2dd702\x2d44fd\x2d9684\x2d11c60d913520-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5fq9w.mount: Deactivated successfully. Oct 30 13:28:51.309281 systemd[1]: var-lib-kubelet-pods-9651b8d4\x2da317\x2d45bc\x2d9edd\x2dc5b34ed9b061-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 30 13:28:51.309353 systemd[1]: var-lib-kubelet-pods-9651b8d4\x2da317\x2d45bc\x2d9edd\x2dc5b34ed9b061-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djfdpv.mount: Deactivated successfully. Oct 30 13:28:51.534963 kubelet[2773]: E1030 13:28:51.534919 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:28:52.210818 sshd[4396]: Connection closed by 10.0.0.1 port 37086 Oct 30 13:28:52.211463 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:52.224868 systemd[1]: sshd@23-10.0.0.124:22-10.0.0.1:37086.service: Deactivated successfully. Oct 30 13:28:52.226823 systemd[1]: session-24.scope: Deactivated successfully. Oct 30 13:28:52.227612 systemd-logind[1593]: Session 24 logged out. Waiting for processes to exit. Oct 30 13:28:52.230489 systemd[1]: Started sshd@24-10.0.0.124:22-10.0.0.1:37090.service - OpenSSH per-connection server daemon (10.0.0.1:37090). Oct 30 13:28:52.231651 systemd-logind[1593]: Removed session 24. Oct 30 13:28:52.306673 sshd[4555]: Accepted publickey for core from 10.0.0.1 port 37090 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:52.307991 sshd-session[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:52.312573 systemd-logind[1593]: New session 25 of user core. Oct 30 13:28:52.329138 systemd[1]: Started session-25.scope - Session 25 of User core. 
Oct 30 13:28:52.536905 kubelet[2773]: I1030 13:28:52.536776 2773 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a75c713-d702-44fd-9684-11c60d913520" path="/var/lib/kubelet/pods/1a75c713-d702-44fd-9684-11c60d913520/volumes" Oct 30 13:28:52.537503 kubelet[2773]: I1030 13:28:52.537483 2773 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9651b8d4-a317-45bc-9edd-c5b34ed9b061" path="/var/lib/kubelet/pods/9651b8d4-a317-45bc-9edd-c5b34ed9b061/volumes" Oct 30 13:28:52.599597 kubelet[2773]: E1030 13:28:52.599533 2773 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 30 13:28:52.651537 sshd[4558]: Connection closed by 10.0.0.1 port 37090 Oct 30 13:28:52.652261 sshd-session[4555]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:52.664678 kubelet[2773]: I1030 13:28:52.664637 2773 memory_manager.go:355] "RemoveStaleState removing state" podUID="9651b8d4-a317-45bc-9edd-c5b34ed9b061" containerName="cilium-agent" Oct 30 13:28:52.664678 kubelet[2773]: I1030 13:28:52.664663 2773 memory_manager.go:355] "RemoveStaleState removing state" podUID="1a75c713-d702-44fd-9684-11c60d913520" containerName="cilium-operator" Oct 30 13:28:52.665764 systemd[1]: sshd@24-10.0.0.124:22-10.0.0.1:37090.service: Deactivated successfully. Oct 30 13:28:52.671103 systemd[1]: session-25.scope: Deactivated successfully. Oct 30 13:28:52.672205 systemd-logind[1593]: Session 25 logged out. Waiting for processes to exit. Oct 30 13:28:52.677253 systemd[1]: Started sshd@25-10.0.0.124:22-10.0.0.1:37092.service - OpenSSH per-connection server daemon (10.0.0.1:37092). Oct 30 13:28:52.681242 systemd-logind[1593]: Removed session 25. Oct 30 13:28:52.694142 systemd[1]: Created slice kubepods-burstable-pod3b5006f5_6823_4a8f_a0e5_ad2209adce75.slice - libcontainer container kubepods-burstable-pod3b5006f5_6823_4a8f_a0e5_ad2209adce75.slice. Oct 30 13:28:52.735077 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 37092 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:52.736356 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:52.744447 systemd-logind[1593]: New session 26 of user core. Oct 30 13:28:52.754138 systemd[1]: Started session-26.scope - Session 26 of User core. 
Oct 30 13:28:52.764236 sshd[4572]: Connection closed by 10.0.0.1 port 37092 Oct 30 13:28:52.764530 sshd-session[4569]: pam_unix(sshd:session): session closed for user core Oct 30 13:28:52.771039 kubelet[2773]: I1030 13:28:52.770987 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b5006f5-6823-4a8f-a0e5-ad2209adce75-hostproc\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771099 kubelet[2773]: I1030 13:28:52.771047 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b5006f5-6823-4a8f-a0e5-ad2209adce75-cni-path\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771099 kubelet[2773]: I1030 13:28:52.771067 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b5006f5-6823-4a8f-a0e5-ad2209adce75-cilium-config-path\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771099 kubelet[2773]: I1030 13:28:52.771083 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3b5006f5-6823-4a8f-a0e5-ad2209adce75-cilium-ipsec-secrets\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771099 kubelet[2773]: I1030 13:28:52.771097 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9685\" (UniqueName: \"kubernetes.io/projected/3b5006f5-6823-4a8f-a0e5-ad2209adce75-kube-api-access-l9685\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771208 kubelet[2773]: I1030 13:28:52.771113 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b5006f5-6823-4a8f-a0e5-ad2209adce75-lib-modules\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771208 kubelet[2773]: I1030 13:28:52.771126 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b5006f5-6823-4a8f-a0e5-ad2209adce75-bpf-maps\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771208 kubelet[2773]: I1030 13:28:52.771140 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b5006f5-6823-4a8f-a0e5-ad2209adce75-host-proc-sys-kernel\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771208 kubelet[2773]: I1030 13:28:52.771155 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b5006f5-6823-4a8f-a0e5-ad2209adce75-xtables-lock\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 
13:28:52.771308 kubelet[2773]: I1030 13:28:52.771257 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b5006f5-6823-4a8f-a0e5-ad2209adce75-host-proc-sys-net\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771335 kubelet[2773]: I1030 13:28:52.771314 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b5006f5-6823-4a8f-a0e5-ad2209adce75-cilium-cgroup\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771358 kubelet[2773]: I1030 13:28:52.771342 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b5006f5-6823-4a8f-a0e5-ad2209adce75-etc-cni-netd\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771380 kubelet[2773]: I1030 13:28:52.771361 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b5006f5-6823-4a8f-a0e5-ad2209adce75-clustermesh-secrets\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771403 kubelet[2773]: I1030 13:28:52.771381 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b5006f5-6823-4a8f-a0e5-ad2209adce75-cilium-run\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.771403 kubelet[2773]: I1030 13:28:52.771396 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b5006f5-6823-4a8f-a0e5-ad2209adce75-hubble-tls\") pod \"cilium-swx66\" (UID: \"3b5006f5-6823-4a8f-a0e5-ad2209adce75\") " pod="kube-system/cilium-swx66" Oct 30 13:28:52.776559 systemd[1]: sshd@25-10.0.0.124:22-10.0.0.1:37092.service: Deactivated successfully. Oct 30 13:28:52.778514 systemd[1]: session-26.scope: Deactivated successfully. Oct 30 13:28:52.779215 systemd-logind[1593]: Session 26 logged out. Waiting for processes to exit. Oct 30 13:28:52.781963 systemd[1]: Started sshd@26-10.0.0.124:22-10.0.0.1:37096.service - OpenSSH per-connection server daemon (10.0.0.1:37096). Oct 30 13:28:52.782965 systemd-logind[1593]: Removed session 26. Oct 30 13:28:52.841884 sshd[4579]: Accepted publickey for core from 10.0.0.1 port 37096 ssh2: RSA SHA256:mlTsuLRWjPLTUPQEpnHqgnJWsw2pd+paIlKt9wiuXEQ Oct 30 13:28:52.843579 sshd-session[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:28:52.847648 systemd-logind[1593]: New session 27 of user core. Oct 30 13:28:52.860120 systemd[1]: Started session-27.scope - Session 27 of User core. 
Oct 30 13:28:52.998591 kubelet[2773]: E1030 13:28:52.998518 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:28:52.999333 containerd[1614]: time="2025-10-30T13:28:52.999199393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swx66,Uid:3b5006f5-6823-4a8f-a0e5-ad2209adce75,Namespace:kube-system,Attempt:0,}" Oct 30 13:28:53.017164 containerd[1614]: time="2025-10-30T13:28:53.017093843Z" level=info msg="connecting to shim 7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9" address="unix:///run/containerd/s/c3ccc828d1369eecd1a6ba1e31dedc344d428ef7d89ea5f097807f767f0fac35" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:28:53.047143 systemd[1]: Started cri-containerd-7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9.scope - libcontainer container 7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9. Oct 30 13:28:53.075731 containerd[1614]: time="2025-10-30T13:28:53.075686457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swx66,Uid:3b5006f5-6823-4a8f-a0e5-ad2209adce75,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9\"" Oct 30 13:28:53.076802 kubelet[2773]: E1030 13:28:53.076762 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:28:53.078613 containerd[1614]: time="2025-10-30T13:28:53.078577790Z" level=info msg="CreateContainer within sandbox \"7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 30 13:28:53.085277 containerd[1614]: time="2025-10-30T13:28:53.085238251Z" level=info msg="Container b0ec021d27bfbc51aa4956c2f051321708404e134880d95a10194dff9110e056: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:28:53.092452 containerd[1614]: time="2025-10-30T13:28:53.092342875Z" level=info msg="CreateContainer within sandbox \"7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b0ec021d27bfbc51aa4956c2f051321708404e134880d95a10194dff9110e056\"" Oct 30 13:28:53.093153 containerd[1614]: time="2025-10-30T13:28:53.093104342Z" level=info msg="StartContainer for \"b0ec021d27bfbc51aa4956c2f051321708404e134880d95a10194dff9110e056\"" Oct 30 13:28:53.093896 containerd[1614]: time="2025-10-30T13:28:53.093865498Z" level=info msg="connecting to shim b0ec021d27bfbc51aa4956c2f051321708404e134880d95a10194dff9110e056" address="unix:///run/containerd/s/c3ccc828d1369eecd1a6ba1e31dedc344d428ef7d89ea5f097807f767f0fac35" protocol=ttrpc version=3 Oct 30 13:28:53.120260 systemd[1]: Started cri-containerd-b0ec021d27bfbc51aa4956c2f051321708404e134880d95a10194dff9110e056.scope - libcontainer container b0ec021d27bfbc51aa4956c2f051321708404e134880d95a10194dff9110e056. Oct 30 13:28:53.152107 containerd[1614]: time="2025-10-30T13:28:53.151963442Z" level=info msg="StartContainer for \"b0ec021d27bfbc51aa4956c2f051321708404e134880d95a10194dff9110e056\" returns successfully" Oct 30 13:28:53.161303 systemd[1]: cri-containerd-b0ec021d27bfbc51aa4956c2f051321708404e134880d95a10194dff9110e056.scope: Deactivated successfully. 
Oct 30 13:28:53.163919 containerd[1614]: time="2025-10-30T13:28:53.163763667Z" level=info msg="received exit event container_id:\"b0ec021d27bfbc51aa4956c2f051321708404e134880d95a10194dff9110e056\" id:\"b0ec021d27bfbc51aa4956c2f051321708404e134880d95a10194dff9110e056\" pid:4653 exited_at:{seconds:1761830933 nanos:163486270}" Oct 30 13:28:53.163919 containerd[1614]: time="2025-10-30T13:28:53.163870640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0ec021d27bfbc51aa4956c2f051321708404e134880d95a10194dff9110e056\" id:\"b0ec021d27bfbc51aa4956c2f051321708404e134880d95a10194dff9110e056\" pid:4653 exited_at:{seconds:1761830933 nanos:163486270}" Oct 30 13:28:53.806023 kubelet[2773]: E1030 13:28:53.805978 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:28:53.810016 containerd[1614]: time="2025-10-30T13:28:53.809338159Z" level=info msg="CreateContainer within sandbox \"7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 30 13:28:53.816514 containerd[1614]: time="2025-10-30T13:28:53.816479274Z" level=info msg="Container 594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:28:53.821712 containerd[1614]: time="2025-10-30T13:28:53.821678860Z" level=info msg="CreateContainer within sandbox \"7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08\"" Oct 30 13:28:53.822267 containerd[1614]: time="2025-10-30T13:28:53.822214778Z" level=info msg="StartContainer for \"594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08\"" Oct 30 13:28:53.823235 containerd[1614]: time="2025-10-30T13:28:53.823195831Z" level=info msg="connecting to shim 594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08" address="unix:///run/containerd/s/c3ccc828d1369eecd1a6ba1e31dedc344d428ef7d89ea5f097807f767f0fac35" protocol=ttrpc version=3 Oct 30 13:28:53.856164 systemd[1]: Started cri-containerd-594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08.scope - libcontainer container 594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08. Oct 30 13:28:53.887953 containerd[1614]: time="2025-10-30T13:28:53.887898431Z" level=info msg="StartContainer for \"594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08\" returns successfully" Oct 30 13:28:53.894295 systemd[1]: cri-containerd-594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08.scope: Deactivated successfully. 
Oct 30 13:28:53.894642 containerd[1614]: time="2025-10-30T13:28:53.894605922Z" level=info msg="received exit event container_id:\"594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08\" id:\"594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08\" pid:4698 exited_at:{seconds:1761830933 nanos:894395973}" Oct 30 13:28:53.894919 containerd[1614]: time="2025-10-30T13:28:53.894829216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08\" id:\"594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08\" pid:4698 exited_at:{seconds:1761830933 nanos:894395973}" Oct 30 13:28:53.914991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-594b1bd2bb23cff95ebbebf3830aec2a5169f91c25ccd84bdd776c75cbf7fb08-rootfs.mount: Deactivated successfully. Oct 30 13:28:54.173616 kubelet[2773]: I1030 13:28:54.173471 2773 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-30T13:28:54Z","lastTransitionTime":"2025-10-30T13:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 30 13:28:54.534666 kubelet[2773]: E1030 13:28:54.534620 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:28:54.809679 kubelet[2773]: E1030 13:28:54.809507 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:28:54.811229 containerd[1614]: time="2025-10-30T13:28:54.811176479Z" level=info msg="CreateContainer within sandbox \"7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 30 13:28:55.021422 containerd[1614]: time="2025-10-30T13:28:55.021355424Z" level=info msg="Container 4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:28:55.033213 containerd[1614]: time="2025-10-30T13:28:55.033161385Z" level=info msg="CreateContainer within sandbox \"7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0\"" Oct 30 13:28:55.033800 containerd[1614]: time="2025-10-30T13:28:55.033763989Z" level=info msg="StartContainer for \"4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0\"" Oct 30 13:28:55.035584 containerd[1614]: time="2025-10-30T13:28:55.035538558Z" level=info msg="connecting to shim 4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0" address="unix:///run/containerd/s/c3ccc828d1369eecd1a6ba1e31dedc344d428ef7d89ea5f097807f767f0fac35" protocol=ttrpc version=3 Oct 30 13:28:55.058183 systemd[1]: Started cri-containerd-4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0.scope - libcontainer container 4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0. 
Oct 30 13:28:55.101772 containerd[1614]: time="2025-10-30T13:28:55.101642154Z" level=info msg="StartContainer for \"4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0\" returns successfully" Oct 30 13:28:55.102956 systemd[1]: cri-containerd-4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0.scope: Deactivated successfully. Oct 30 13:28:55.105789 containerd[1614]: time="2025-10-30T13:28:55.105734233Z" level=info msg="received exit event container_id:\"4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0\" id:\"4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0\" pid:4743 exited_at:{seconds:1761830935 nanos:105508224}" Oct 30 13:28:55.110504 containerd[1614]: time="2025-10-30T13:28:55.110468510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0\" id:\"4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0\" pid:4743 exited_at:{seconds:1761830935 nanos:105508224}" Oct 30 13:28:55.130914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c8934d7a7ee3145a3d6b2b29190e63e70b49a75ad2b1ad8ca8c7c4126472fc0-rootfs.mount: Deactivated successfully. Oct 30 13:28:55.814878 kubelet[2773]: E1030 13:28:55.814820 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:28:55.816912 containerd[1614]: time="2025-10-30T13:28:55.816861308Z" level=info msg="CreateContainer within sandbox \"7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 30 13:28:55.828058 containerd[1614]: time="2025-10-30T13:28:55.827986468Z" level=info msg="Container b4a0b4f1df3cdba5bda03ea1d31138eae63d9f1b9a4d4cd85d839df2b2dfa604: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:28:55.835650 containerd[1614]: time="2025-10-30T13:28:55.835590762Z" level=info msg="CreateContainer within sandbox \"7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b4a0b4f1df3cdba5bda03ea1d31138eae63d9f1b9a4d4cd85d839df2b2dfa604\"" Oct 30 13:28:55.836163 containerd[1614]: time="2025-10-30T13:28:55.836128223Z" level=info msg="StartContainer for \"b4a0b4f1df3cdba5bda03ea1d31138eae63d9f1b9a4d4cd85d839df2b2dfa604\"" Oct 30 13:28:55.837189 containerd[1614]: time="2025-10-30T13:28:55.837138782Z" level=info msg="connecting to shim b4a0b4f1df3cdba5bda03ea1d31138eae63d9f1b9a4d4cd85d839df2b2dfa604" address="unix:///run/containerd/s/c3ccc828d1369eecd1a6ba1e31dedc344d428ef7d89ea5f097807f767f0fac35" protocol=ttrpc version=3 Oct 30 13:28:55.861171 systemd[1]: Started cri-containerd-b4a0b4f1df3cdba5bda03ea1d31138eae63d9f1b9a4d4cd85d839df2b2dfa604.scope - libcontainer container b4a0b4f1df3cdba5bda03ea1d31138eae63d9f1b9a4d4cd85d839df2b2dfa604. Oct 30 13:28:55.893798 systemd[1]: cri-containerd-b4a0b4f1df3cdba5bda03ea1d31138eae63d9f1b9a4d4cd85d839df2b2dfa604.scope: Deactivated successfully. 
Oct 30 13:28:55.894638 containerd[1614]: time="2025-10-30T13:28:55.894566257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4a0b4f1df3cdba5bda03ea1d31138eae63d9f1b9a4d4cd85d839df2b2dfa604\" id:\"b4a0b4f1df3cdba5bda03ea1d31138eae63d9f1b9a4d4cd85d839df2b2dfa604\" pid:4782 exited_at:{seconds:1761830935 nanos:894300262}" Oct 30 13:28:55.894638 containerd[1614]: time="2025-10-30T13:28:55.894594861Z" level=info msg="received exit event container_id:\"b4a0b4f1df3cdba5bda03ea1d31138eae63d9f1b9a4d4cd85d839df2b2dfa604\" id:\"b4a0b4f1df3cdba5bda03ea1d31138eae63d9f1b9a4d4cd85d839df2b2dfa604\" pid:4782 exited_at:{seconds:1761830935 nanos:894300262}" Oct 30 13:28:55.903887 containerd[1614]: time="2025-10-30T13:28:55.903831826Z" level=info msg="StartContainer for \"b4a0b4f1df3cdba5bda03ea1d31138eae63d9f1b9a4d4cd85d839df2b2dfa604\" returns successfully" Oct 30 13:28:56.819690 kubelet[2773]: E1030 13:28:56.819643 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:28:56.821471 containerd[1614]: time="2025-10-30T13:28:56.821393733Z" level=info msg="CreateContainer within sandbox \"7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 30 13:28:56.831863 containerd[1614]: time="2025-10-30T13:28:56.831802246Z" level=info msg="Container 13dc4a8760d083c93a6f16924b00fee411152ffcee707e64f73c54ad60097e37: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:28:56.840454 containerd[1614]: time="2025-10-30T13:28:56.840410453Z" level=info msg="CreateContainer within sandbox \"7bff6db6c516204834ce581f36bab96ff26217ac289c0cd9c4f1ca170c4efea9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"13dc4a8760d083c93a6f16924b00fee411152ffcee707e64f73c54ad60097e37\"" Oct 30 13:28:56.841007 containerd[1614]: time="2025-10-30T13:28:56.840948694Z" level=info msg="StartContainer for \"13dc4a8760d083c93a6f16924b00fee411152ffcee707e64f73c54ad60097e37\"" Oct 30 13:28:56.842245 containerd[1614]: time="2025-10-30T13:28:56.842193858Z" level=info msg="connecting to shim 13dc4a8760d083c93a6f16924b00fee411152ffcee707e64f73c54ad60097e37" address="unix:///run/containerd/s/c3ccc828d1369eecd1a6ba1e31dedc344d428ef7d89ea5f097807f767f0fac35" protocol=ttrpc version=3 Oct 30 13:28:56.868390 systemd[1]: Started cri-containerd-13dc4a8760d083c93a6f16924b00fee411152ffcee707e64f73c54ad60097e37.scope - libcontainer container 13dc4a8760d083c93a6f16924b00fee411152ffcee707e64f73c54ad60097e37. 
Oct 30 13:28:56.925425 containerd[1614]: time="2025-10-30T13:28:56.925379015Z" level=info msg="StartContainer for \"13dc4a8760d083c93a6f16924b00fee411152ffcee707e64f73c54ad60097e37\" returns successfully" Oct 30 13:28:56.990731 containerd[1614]: time="2025-10-30T13:28:56.990675125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13dc4a8760d083c93a6f16924b00fee411152ffcee707e64f73c54ad60097e37\" id:\"8dcb23669a5170320d44d8c2b95b7dd200cdeea3bbdfaa2c6ca52ef7ea8a12cb\" pid:4856 exited_at:{seconds:1761830936 nanos:990339798}" Oct 30 13:28:57.343046 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Oct 30 13:28:57.825317 kubelet[2773]: E1030 13:28:57.825276 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:28:57.838446 kubelet[2773]: I1030 13:28:57.838370 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-swx66" podStartSLOduration=5.838348497 podStartE2EDuration="5.838348497s" podCreationTimestamp="2025-10-30 13:28:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:28:57.838023561 +0000 UTC m=+95.444568230" watchObservedRunningTime="2025-10-30 13:28:57.838348497 +0000 UTC m=+95.444893147" Oct 30 13:28:58.999952 kubelet[2773]: E1030 13:28:58.999907 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:28:59.309182 containerd[1614]: time="2025-10-30T13:28:59.309043579Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13dc4a8760d083c93a6f16924b00fee411152ffcee707e64f73c54ad60097e37\" id:\"17b8fdd133f594a1d81c31ab262054615cf56ae4f4e7dd826e8dc08700f03e2c\" pid:5045 exit_status:1 exited_at:{seconds:1761830939 nanos:307800932}" Oct 30 13:28:59.331684 kubelet[2773]: E1030 13:28:59.331633 2773 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54766->127.0.0.1:41145: write tcp 127.0.0.1:54766->127.0.0.1:41145: write: broken pipe Oct 30 13:28:59.535357 kubelet[2773]: E1030 13:28:59.535304 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:29:00.535033 systemd-networkd[1525]: lxc_health: Link UP Oct 30 13:29:00.535376 systemd-networkd[1525]: lxc_health: Gained carrier Oct 30 13:29:00.999930 kubelet[2773]: E1030 13:29:00.999738 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:29:01.455166 containerd[1614]: time="2025-10-30T13:29:01.455105768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13dc4a8760d083c93a6f16924b00fee411152ffcee707e64f73c54ad60097e37\" id:\"caae0a864c734850f1f25c56fcfd81349988a7da06e598d0bfb27ff8e8deb146\" pid:5410 exited_at:{seconds:1761830941 nanos:454715158}" Oct 30 13:29:01.833471 kubelet[2773]: E1030 13:29:01.833320 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:29:01.941292 systemd-networkd[1525]: lxc_health: Gained IPv6LL Oct 30 
13:29:02.835960 kubelet[2773]: E1030 13:29:02.835882 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:29:03.550983 containerd[1614]: time="2025-10-30T13:29:03.550936671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13dc4a8760d083c93a6f16924b00fee411152ffcee707e64f73c54ad60097e37\" id:\"a7a8c7300dd91ee0251bdf7a1ed8a6fab7d5b4536f946d9c2644fe0858fd352a\" pid:5446 exited_at:{seconds:1761830943 nanos:550409222}" Oct 30 13:29:05.534610 kubelet[2773]: E1030 13:29:05.534567 2773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:29:05.669697 containerd[1614]: time="2025-10-30T13:29:05.669646167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13dc4a8760d083c93a6f16924b00fee411152ffcee707e64f73c54ad60097e37\" id:\"131c44e94e481b6822c2fdba76b44d98ee01b23e211c7401d9b970c079e7b7de\" pid:5481 exited_at:{seconds:1761830945 nanos:669217425}" Oct 30 13:29:05.676153 sshd[4582]: Connection closed by 10.0.0.1 port 37096 Oct 30 13:29:05.676717 sshd-session[4579]: pam_unix(sshd:session): session closed for user core Oct 30 13:29:05.681858 systemd[1]: sshd@26-10.0.0.124:22-10.0.0.1:37096.service: Deactivated successfully. Oct 30 13:29:05.684582 systemd[1]: session-27.scope: Deactivated successfully. Oct 30 13:29:05.685960 systemd-logind[1593]: Session 27 logged out. Waiting for processes to exit. Oct 30 13:29:05.687604 systemd-logind[1593]: Removed session 27.