Sep 13 00:27:42.463179 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:15:39 -00 2025
Sep 13 00:27:42.463210 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294
Sep 13 00:27:42.463225 kernel: BIOS-provided physical RAM map:
Sep 13 00:27:42.463233 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:27:42.463242 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 00:27:42.463250 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 00:27:42.463260 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 00:27:42.463269 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 00:27:42.463280 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 13 00:27:42.463289 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 13 00:27:42.463297 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 13 00:27:42.463307 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 13 00:27:42.463318 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 13 00:27:42.463329 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 13 00:27:42.463346 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 13 00:27:42.463359 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 00:27:42.463371 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 13 00:27:42.463382 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 13 00:27:42.463394 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 13 00:27:42.463406 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 13 00:27:42.463417 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 13 00:27:42.463429 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 00:27:42.463442 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 13 00:27:42.463454 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:27:42.463466 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 13 00:27:42.463482 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 00:27:42.463516 kernel: NX (Execute Disable) protection: active
Sep 13 00:27:42.463528 kernel: APIC: Static calls initialized
Sep 13 00:27:42.463538 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 13 00:27:42.463548 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 13 00:27:42.463557 kernel: extended physical RAM map:
Sep 13 00:27:42.463566 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:27:42.463576 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 00:27:42.463585 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 00:27:42.463595 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 00:27:42.463604 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 00:27:42.463618 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 13 00:27:42.463627 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 13 00:27:42.463637 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 13 00:27:42.463647 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 13 00:27:42.463661 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 13 00:27:42.463671 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 13 00:27:42.463683 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 13 00:27:42.463693 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 13 00:27:42.463703 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 13 00:27:42.463713 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 13 00:27:42.463724 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 13 00:27:42.463734 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 00:27:42.463743 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 13 00:27:42.463752 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 13 00:27:42.463762 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 13 00:27:42.463775 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 13 00:27:42.463785 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 13 00:27:42.463796 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 00:27:42.463805 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 13 00:27:42.463828 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:27:42.463837 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 13 00:27:42.463847 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 00:27:42.463856 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:27:42.463867 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 13 00:27:42.463876 kernel: random: crng init done
Sep 13 00:27:42.463886 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 13 00:27:42.463896 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 13 00:27:42.463910 kernel: secureboot: Secure boot disabled
Sep 13 00:27:42.463920 kernel: SMBIOS 2.8 present.
Sep 13 00:27:42.463946 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 13 00:27:42.463958 kernel: DMI: Memory slots populated: 1/1
Sep 13 00:27:42.463983 kernel: Hypervisor detected: KVM
Sep 13 00:27:42.463994 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:27:42.464004 kernel: kvm-clock: using sched offset of 4450040423 cycles
Sep 13 00:27:42.464014 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:27:42.464025 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 00:27:42.464035 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:27:42.464046 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:27:42.464060 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 13 00:27:42.464071 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 00:27:42.464088 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:27:42.464098 kernel: Using GB pages for direct mapping
Sep 13 00:27:42.464108 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:27:42.464119 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 13 00:27:42.464129 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:27:42.464139 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:27:42.464149 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:27:42.464162 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 13 00:27:42.464173 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:27:42.464183 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:27:42.464193 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:27:42.464204 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:27:42.464214 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 13 00:27:42.464225 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 13 00:27:42.464236 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 13 00:27:42.464250 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 13 00:27:42.464264 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 13 00:27:42.464278 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 13 00:27:42.464291 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 13 00:27:42.464305 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 13 00:27:42.464319 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 13 00:27:42.464332 kernel: No NUMA configuration found
Sep 13 00:27:42.464345 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 13 00:27:42.464358 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 13 00:27:42.464370 kernel: Zone ranges:
Sep 13 00:27:42.464385 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:27:42.464395 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 13 00:27:42.464405 kernel: Normal empty
Sep 13 00:27:42.464415 kernel: Device empty
Sep 13 00:27:42.464425 kernel: Movable zone start for each node
Sep 13 00:27:42.464435 kernel: Early memory node ranges
Sep 13 00:27:42.464445 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:27:42.464454 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 13 00:27:42.464464 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 13 00:27:42.464477 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 13 00:27:42.464507 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 13 00:27:42.464518 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 13 00:27:42.464528 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 13 00:27:42.464539 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 13 00:27:42.464550 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 13 00:27:42.464561 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:27:42.464572 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:27:42.464597 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 13 00:27:42.464608 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:27:42.464619 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 13 00:27:42.464631 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 13 00:27:42.464645 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 13 00:27:42.464657 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 13 00:27:42.464668 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 13 00:27:42.464680 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:27:42.464692 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:27:42.464706 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:27:42.464718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:27:42.464730 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:27:42.464741 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:27:42.464752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:27:42.464764 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:27:42.464775 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:27:42.464787 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:27:42.464798 kernel: TSC deadline timer available
Sep 13 00:27:42.465166 kernel: CPU topo: Max. logical packages: 1
Sep 13 00:27:42.465181 kernel: CPU topo: Max. logical dies: 1
Sep 13 00:27:42.465193 kernel: CPU topo: Max. dies per package: 1
Sep 13 00:27:42.465205 kernel: CPU topo: Max. threads per core: 1
Sep 13 00:27:42.465217 kernel: CPU topo: Num. cores per package: 4
Sep 13 00:27:42.465228 kernel: CPU topo: Num. threads per package: 4
Sep 13 00:27:42.465239 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 13 00:27:42.465251 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:27:42.465262 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:27:42.465278 kernel: kvm-guest: setup PV sched yield
Sep 13 00:27:42.465290 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 13 00:27:42.465302 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:27:42.465315 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:27:42.465329 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:27:42.465344 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 13 00:27:42.465358 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 13 00:27:42.465372 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:27:42.465386 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:27:42.465406 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:27:42.465423 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294
Sep 13 00:27:42.465438 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:27:42.465452 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:27:42.465466 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:27:42.465480 kernel: Fallback order for Node 0: 0
Sep 13 00:27:42.465519 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 13 00:27:42.465534 kernel: Policy zone: DMA32
Sep 13 00:27:42.465550 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:27:42.465562 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:27:42.465574 kernel: ftrace: allocating 40122 entries in 157 pages
Sep 13 00:27:42.465586 kernel: ftrace: allocated 157 pages with 5 groups
Sep 13 00:27:42.465598 kernel: Dynamic Preempt: voluntary
Sep 13 00:27:42.465610 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:27:42.465623 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:27:42.465635 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:27:42.465647 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:27:42.465659 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:27:42.465674 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:27:42.465686 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:27:42.465697 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:27:42.465709 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:27:42.465721 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:27:42.465733 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:27:42.465744 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:27:42.465754 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:27:42.465768 kernel: Console: colour dummy device 80x25
Sep 13 00:27:42.465779 kernel: printk: legacy console [ttyS0] enabled
Sep 13 00:27:42.465790 kernel: ACPI: Core revision 20240827
Sep 13 00:27:42.465800 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:27:42.465820 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:27:42.465831 kernel: x2apic enabled
Sep 13 00:27:42.465841 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:27:42.465852 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 00:27:42.465863 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 00:27:42.465878 kernel: kvm-guest: setup PV IPIs
Sep 13 00:27:42.465890 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:27:42.465902 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 00:27:42.465914 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 00:27:42.465925 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:27:42.465937 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:27:42.465949 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:27:42.465961 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:27:42.465973 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:27:42.465989 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:27:42.466001 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:27:42.466013 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:27:42.466024 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:27:42.466036 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:27:42.466048 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:27:42.466060 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 00:27:42.466073 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 00:27:42.466084 kernel: active return thunk: srso_return_thunk
Sep 13 00:27:42.466100 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 00:27:42.466112 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:27:42.466124 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:27:42.466135 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:27:42.466147 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:27:42.466159 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:27:42.466169 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:27:42.466180 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:27:42.466191 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 13 00:27:42.466204 kernel: landlock: Up and running.
Sep 13 00:27:42.466215 kernel: SELinux: Initializing.
Sep 13 00:27:42.466226 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:27:42.466238 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:27:42.466251 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:27:42.466264 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:27:42.466277 kernel: ... version: 0
Sep 13 00:27:42.466291 kernel: ... bit width: 48
Sep 13 00:27:42.466306 kernel: ... generic registers: 6
Sep 13 00:27:42.466324 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:27:42.466338 kernel: ... max period: 00007fffffffffff
Sep 13 00:27:42.466352 kernel: ... fixed-purpose events: 0
Sep 13 00:27:42.466367 kernel: ... event mask: 000000000000003f
Sep 13 00:27:42.466381 kernel: signal: max sigframe size: 1776
Sep 13 00:27:42.466393 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:27:42.466404 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:27:42.466415 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 13 00:27:42.466427 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:27:42.466442 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:27:42.466454 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 00:27:42.466465 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:27:42.466477 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 00:27:42.466509 kernel: Memory: 2424724K/2565800K available (14336K kernel code, 2432K rwdata, 9960K rodata, 53828K init, 1088K bss, 135148K reserved, 0K cma-reserved)
Sep 13 00:27:42.466520 kernel: devtmpfs: initialized
Sep 13 00:27:42.466531 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:27:42.466541 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 13 00:27:42.466552 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 13 00:27:42.466566 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 13 00:27:42.466577 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 13 00:27:42.466588 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 13 00:27:42.466598 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 13 00:27:42.466609 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:27:42.466619 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:27:42.466630 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:27:42.466641 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:27:42.466655 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:27:42.466667 kernel: audit: type=2000 audit(1757723260.586:1): state=initialized audit_enabled=0 res=1
Sep 13 00:27:42.466679 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:27:42.466691 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:27:42.466701 kernel: cpuidle: using governor menu
Sep 13 00:27:42.466713 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:27:42.466724 kernel: dca service started, version 1.12.1
Sep 13 00:27:42.466735 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 13 00:27:42.466745 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:27:42.466759 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:27:42.466770 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:27:42.466780 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:27:42.466791 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:27:42.466801 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:27:42.468838 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:27:42.468852 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:27:42.468863 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:27:42.468875 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:27:42.468891 kernel: ACPI: Interpreter enabled
Sep 13 00:27:42.468902 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:27:42.468913 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:27:42.468924 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:27:42.468935 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:27:42.468946 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:27:42.468957 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:27:42.469241 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:27:42.469414 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:27:42.469625 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:27:42.469641 kernel: PCI host bridge to bus 0000:00
Sep 13 00:27:42.469787 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:27:42.469928 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:27:42.470065 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:27:42.470221 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 13 00:27:42.470388 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 13 00:27:42.470526 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 13 00:27:42.470951 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:27:42.471135 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 13 00:27:42.471304 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 13 00:27:42.471482 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 13 00:27:42.471677 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 13 00:27:42.472858 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 13 00:27:42.473011 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:27:42.473185 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 13 00:27:42.473340 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 13 00:27:42.473529 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 13 00:27:42.473663 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 13 00:27:42.473802 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 13 00:27:42.473970 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 13 00:27:42.474104 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 13 00:27:42.474236 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 13 00:27:42.474407 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 13 00:27:42.474556 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 13 00:27:42.474687 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 13 00:27:42.474835 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 13 00:27:42.474971 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 13 00:27:42.475118 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 13 00:27:42.475251 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:27:42.475395 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 13 00:27:42.475553 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 13 00:27:42.475689 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 13 00:27:42.477267 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 13 00:27:42.477502 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 13 00:27:42.477523 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:27:42.477535 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:27:42.477547 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:27:42.477558 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:27:42.477570 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:27:42.477582 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:27:42.477599 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:27:42.477611 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:27:42.477623 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:27:42.477634 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:27:42.477646 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:27:42.477658 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:27:42.477670 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:27:42.477681 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:27:42.477693 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:27:42.477709 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:27:42.477720 kernel: iommu: Default domain type: Translated
Sep 13 00:27:42.477732 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:27:42.477744 kernel: efivars: Registered efivars operations
Sep 13 00:27:42.477756 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:27:42.477767 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:27:42.477779 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 13 00:27:42.477791 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 13 00:27:42.477803 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 13 00:27:42.477830 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 13 00:27:42.477842 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 13 00:27:42.477853 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 13 00:27:42.477865 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 13 00:27:42.477876 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 13 00:27:42.478050 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:27:42.478220 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:27:42.478375 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:27:42.478394 kernel: vgaarb: loaded
Sep 13 00:27:42.478405 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:27:42.478417 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:27:42.478428 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:27:42.478440 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:27:42.478452 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:27:42.478463 kernel: pnp: PnP ACPI init
Sep 13 00:27:42.478675 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 13 00:27:42.478701 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:27:42.478713 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:27:42.478724 kernel: NET: Registered PF_INET protocol family
Sep 13 00:27:42.478735 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:27:42.478746 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:27:42.478757 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:27:42.478768 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:27:42.478780 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:27:42.478793 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:27:42.478804 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:27:42.479585 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:27:42.479601 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:27:42.479612 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:27:42.479788 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 13 00:27:42.479971 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 13 00:27:42.480129 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:27:42.480274 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:27:42.480408 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:27:42.480552 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 13 00:27:42.480681 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 13 00:27:42.481845 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 13 00:27:42.481864 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:27:42.481875 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 00:27:42.481885 kernel: Initialise system trusted keyrings
Sep 13 00:27:42.481900 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:27:42.481910 kernel: Key type asymmetric registered
Sep 13 00:27:42.481919 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:27:42.481929 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 00:27:42.481938 kernel: io scheduler mq-deadline registered
Sep 13 00:27:42.481948 kernel: io scheduler kyber registered
Sep 13 00:27:42.481957 kernel: io scheduler bfq registered
Sep 13 00:27:42.481970 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:27:42.481980 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:27:42.481990 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:27:42.482000 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:27:42.482009 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:27:42.482019 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:27:42.482029 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:27:42.482039 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:27:42.482048 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:27:42.482061 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:27:42.482232 kernel: rtc_cmos 00:04: RTC can
wake from S4 Sep 13 00:27:42.482402 kernel: rtc_cmos 00:04: registered as rtc0 Sep 13 00:27:42.482576 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:27:41 UTC (1757723261) Sep 13 00:27:42.482730 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 13 00:27:42.482748 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 13 00:27:42.482760 kernel: efifb: probing for efifb Sep 13 00:27:42.482772 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 13 00:27:42.482788 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 13 00:27:42.482800 kernel: efifb: scrolling: redraw Sep 13 00:27:42.482823 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 13 00:27:42.482835 kernel: Console: switching to colour frame buffer device 160x50 Sep 13 00:27:42.482846 kernel: fb0: EFI VGA frame buffer device Sep 13 00:27:42.482857 kernel: pstore: Using crash dump compression: deflate Sep 13 00:27:42.482869 kernel: pstore: Registered efi_pstore as persistent store backend Sep 13 00:27:42.482881 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:27:42.482892 kernel: Segment Routing with IPv6 Sep 13 00:27:42.482906 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:27:42.482918 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:27:42.482930 kernel: Key type dns_resolver registered Sep 13 00:27:42.482941 kernel: IPI shorthand broadcast: enabled Sep 13 00:27:42.482953 kernel: sched_clock: Marking stable (3037002138, 157627399)->(3214441263, -19811726) Sep 13 00:27:42.482964 kernel: registered taskstats version 1 Sep 13 00:27:42.482976 kernel: Loading compiled-in X.509 certificates Sep 13 00:27:42.482988 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: dd6b45f5ed9ac8d42d60bdb17f83ef06c8bcd8f6' Sep 13 00:27:42.483000 kernel: Demotion targets for Node 0: null Sep 13 00:27:42.483015 kernel: Key type .fscrypt registered Sep 13 00:27:42.483027 kernel: Key type 
fscrypt-provisioning registered Sep 13 00:27:42.483039 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 00:27:42.483051 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:27:42.483062 kernel: ima: No architecture policies found Sep 13 00:27:42.483073 kernel: clk: Disabling unused clocks Sep 13 00:27:42.483084 kernel: Warning: unable to open an initial console. Sep 13 00:27:42.483096 kernel: Freeing unused kernel image (initmem) memory: 53828K Sep 13 00:27:42.483111 kernel: Write protecting the kernel read-only data: 24576k Sep 13 00:27:42.483123 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 13 00:27:42.483134 kernel: Run /init as init process Sep 13 00:27:42.483146 kernel: with arguments: Sep 13 00:27:42.483158 kernel: /init Sep 13 00:27:42.483169 kernel: with environment: Sep 13 00:27:42.483181 kernel: HOME=/ Sep 13 00:27:42.483192 kernel: TERM=linux Sep 13 00:27:42.483203 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:27:42.483216 systemd[1]: Successfully made /usr/ read-only. Sep 13 00:27:42.483236 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 13 00:27:42.483249 systemd[1]: Detected virtualization kvm. Sep 13 00:27:42.483261 systemd[1]: Detected architecture x86-64. Sep 13 00:27:42.483273 systemd[1]: Running in initrd. Sep 13 00:27:42.483286 systemd[1]: No hostname configured, using default hostname. Sep 13 00:27:42.483298 systemd[1]: Hostname set to . Sep 13 00:27:42.483310 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:27:42.483325 systemd[1]: Queued start job for default target initrd.target. 
Sep 13 00:27:42.483337 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:27:42.483349 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:27:42.483362 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:27:42.483374 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:27:42.483388 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:27:42.483402 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:27:42.483421 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:27:42.483436 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:27:42.483449 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:27:42.483461 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:27:42.483474 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:27:42.483505 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:27:42.483518 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:27:42.483531 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:27:42.483548 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:27:42.483562 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:27:42.483574 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:27:42.483586 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 13 00:27:42.483598 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:27:42.483610 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:27:42.483622 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:27:42.483634 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:27:42.483653 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:27:42.483666 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:27:42.483683 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:27:42.483697 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 13 00:27:42.483711 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:27:42.483724 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:27:42.483737 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:27:42.483750 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:27:42.483762 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:27:42.483780 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:27:42.483793 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:27:42.483807 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:27:42.485996 systemd-journald[222]: Collecting audit messages is disabled.
Sep 13 00:27:42.486077 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:27:42.486094 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:27:42.486108 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:27:42.486121 systemd-journald[222]: Journal started
Sep 13 00:27:42.486155 systemd-journald[222]: Runtime Journal (/run/log/journal/192c327bbd1d4dc38c0997639f0c9231) is 6M, max 48.5M, 42.4M free.
Sep 13 00:27:42.460387 systemd-modules-load[223]: Inserted module 'overlay'
Sep 13 00:27:42.495162 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:27:42.514911 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:27:42.525884 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:27:42.565627 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:27:42.569585 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 13 00:27:42.574736 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:27:42.584737 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:27:42.626103 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:27:42.633841 kernel: Bridge firewalling registered
Sep 13 00:27:42.631255 systemd-modules-load[223]: Inserted module 'br_netfilter'
Sep 13 00:27:42.631985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:27:42.643943 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:27:42.661399 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:27:42.671515 dracut-cmdline[255]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294
Sep 13 00:27:42.723040 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:27:42.740088 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:27:42.868947 systemd-resolved[294]: Positive Trust Anchors:
Sep 13 00:27:42.868962 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:27:42.869000 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:27:42.873698 systemd-resolved[294]: Defaulting to hostname 'linux'.
Sep 13 00:27:42.876950 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:27:42.882671 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:27:42.949537 kernel: SCSI subsystem initialized
Sep 13 00:27:42.974550 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:27:43.014146 kernel: iscsi: registered transport (tcp)
Sep 13 00:27:43.069114 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:27:43.069204 kernel: QLogic iSCSI HBA Driver
Sep 13 00:27:43.144194 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:27:43.224928 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:27:43.227033 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:27:43.424758 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:27:43.436799 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:27:43.560701 kernel: raid6: avx2x4 gen() 15392 MB/s
Sep 13 00:27:43.577573 kernel: raid6: avx2x2 gen() 14406 MB/s
Sep 13 00:27:43.596570 kernel: raid6: avx2x1 gen() 15588 MB/s
Sep 13 00:27:43.596670 kernel: raid6: using algorithm avx2x1 gen() 15588 MB/s
Sep 13 00:27:43.614268 kernel: raid6: .... xor() 9110 MB/s, rmw enabled
Sep 13 00:27:43.614334 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:27:43.672070 kernel: xor: automatically using best checksumming function avx
Sep 13 00:27:44.111548 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:27:44.136237 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:27:44.143922 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:27:44.232329 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 13 00:27:44.250102 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:27:44.269348 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:27:44.437107 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Sep 13 00:27:44.542063 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:27:44.545655 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:27:44.838215 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:27:44.844188 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:27:45.024193 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:27:45.024292 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:27:45.101975 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:27:45.116650 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:27:45.120284 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 13 00:27:45.142972 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:27:45.143324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:27:45.152515 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 13 00:27:45.157503 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:27:45.155644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:27:45.167911 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:27:45.167947 kernel: GPT:9289727 != 19775487
Sep 13 00:27:45.167959 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:27:45.168668 kernel: GPT:9289727 != 19775487
Sep 13 00:27:45.169707 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:27:45.169731 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:27:45.440775 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:27:45.440849 kernel: libata version 3.00 loaded.
Sep 13 00:27:45.441318 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:27:45.559558 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 13 00:27:45.590525 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:27:45.616866 kernel: ahci 0000:00:1f.2: version 3.0
Sep 13 00:27:45.617134 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 13 00:27:45.618043 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 00:27:45.632092 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 13 00:27:45.632333 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 13 00:27:45.632575 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 13 00:27:45.632765 kernel: scsi host0: ahci
Sep 13 00:27:45.637043 kernel: scsi host1: ahci
Sep 13 00:27:45.637261 kernel: scsi host2: ahci
Sep 13 00:27:45.642160 kernel: scsi host3: ahci
Sep 13 00:27:45.642372 kernel: scsi host4: ahci
Sep 13 00:27:45.643302 kernel: scsi host5: ahci
Sep 13 00:27:45.644030 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Sep 13 00:27:45.645658 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Sep 13 00:27:45.645682 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Sep 13 00:27:45.647945 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Sep 13 00:27:45.647970 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Sep 13 00:27:45.651510 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Sep 13 00:27:45.655910 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 00:27:45.676011 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 00:27:45.677572 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 00:27:45.707903 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:27:45.728852 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:27:45.767559 disk-uuid[637]: Primary Header is updated.
Sep 13 00:27:45.767559 disk-uuid[637]: Secondary Entries is updated.
Sep 13 00:27:45.767559 disk-uuid[637]: Secondary Header is updated.
Sep 13 00:27:45.785937 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:27:45.967327 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 00:27:45.967388 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 00:27:45.967507 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 00:27:45.968510 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 13 00:27:45.971257 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 13 00:27:45.980526 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 13 00:27:45.980609 kernel: ata3.00: LPM support broken, forcing max_power
Sep 13 00:27:45.985142 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 13 00:27:45.985189 kernel: ata3.00: applying bridge limits
Sep 13 00:27:45.988797 kernel: ata3.00: LPM support broken, forcing max_power
Sep 13 00:27:45.988829 kernel: ata3.00: configured for UDMA/100
Sep 13 00:27:45.995820 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 00:27:46.073542 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 13 00:27:46.073910 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:27:46.154853 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:27:46.600253 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:27:46.603575 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:27:46.606164 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:27:46.608565 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:27:46.612297 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:27:46.637727 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:27:46.802534 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:27:46.802595 disk-uuid[638]: The operation has completed successfully.
Sep 13 00:27:46.834297 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:27:46.834438 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:27:46.883400 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:27:46.907662 sh[667]: Success
Sep 13 00:27:46.927737 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:27:46.927822 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:27:46.929782 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 13 00:27:46.948778 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 13 00:27:47.001318 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:27:47.003833 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:27:47.019326 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:27:47.029526 kernel: BTRFS: device fsid ca815b72-c68a-4b5e-8622-cfb6842bab47 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (679)
Sep 13 00:27:47.031975 kernel: BTRFS info (device dm-0): first mount of filesystem ca815b72-c68a-4b5e-8622-cfb6842bab47
Sep 13 00:27:47.032001 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:27:47.039162 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:27:47.039188 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 13 00:27:47.040652 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:27:47.041169 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 00:27:47.044007 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:27:47.044955 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:27:47.049089 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:27:47.077314 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (713)
Sep 13 00:27:47.077351 kernel: BTRFS info (device vda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 00:27:47.077365 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:27:47.082037 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 00:27:47.082070 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 00:27:47.087568 kernel: BTRFS info (device vda6): last unmount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 00:27:47.088668 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:27:47.091175 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:27:47.198708 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:27:47.205328 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:27:47.254929 ignition[757]: Ignition 2.21.0
Sep 13 00:27:47.255280 ignition[757]: Stage: fetch-offline
Sep 13 00:27:47.255385 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:27:47.255395 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:27:47.255640 ignition[757]: parsed url from cmdline: ""
Sep 13 00:27:47.255644 ignition[757]: no config URL provided
Sep 13 00:27:47.255650 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:27:47.255660 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:27:47.260823 systemd-networkd[849]: lo: Link UP
Sep 13 00:27:47.255696 ignition[757]: op(1): [started] loading QEMU firmware config module
Sep 13 00:27:47.260828 systemd-networkd[849]: lo: Gained carrier
Sep 13 00:27:47.255701 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 00:27:47.262459 systemd-networkd[849]: Enumeration completed
Sep 13 00:27:47.262599 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:27:47.262865 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:27:47.262870 systemd-networkd[849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:27:47.264501 systemd[1]: Reached target network.target - Network.
Sep 13 00:27:47.265234 systemd-networkd[849]: eth0: Link UP
Sep 13 00:27:47.265377 systemd-networkd[849]: eth0: Gained carrier
Sep 13 00:27:47.265387 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:27:47.285342 ignition[757]: op(1): [finished] loading QEMU firmware config module
Sep 13 00:27:47.316555 systemd-networkd[849]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:27:47.329230 ignition[757]: parsing config with SHA512: e10b0fa5df1825646ce27753c7d9496f8528438a28661b1ad129e6a07230dbbe01aa1c62bd65733998e10b4fbde53de8a6217190e3514837a7a93d513596456b
Sep 13 00:27:47.337067 unknown[757]: fetched base config from "system"
Sep 13 00:27:47.337080 unknown[757]: fetched user config from "qemu"
Sep 13 00:27:47.337550 ignition[757]: fetch-offline: fetch-offline passed
Sep 13 00:27:47.337612 ignition[757]: Ignition finished successfully
Sep 13 00:27:47.342030 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:27:47.343406 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:27:47.344313 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:27:47.406896 ignition[862]: Ignition 2.21.0
Sep 13 00:27:47.406915 ignition[862]: Stage: kargs
Sep 13 00:27:47.407135 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:27:47.407148 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:27:47.412103 ignition[862]: kargs: kargs passed
Sep 13 00:27:47.412165 ignition[862]: Ignition finished successfully
Sep 13 00:27:47.418161 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:27:47.421664 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:27:47.467949 ignition[870]: Ignition 2.21.0
Sep 13 00:27:47.467974 ignition[870]: Stage: disks
Sep 13 00:27:47.468157 ignition[870]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:27:47.468170 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:27:47.469477 ignition[870]: disks: disks passed
Sep 13 00:27:47.469543 ignition[870]: Ignition finished successfully
Sep 13 00:27:47.476521 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:27:47.478748 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:27:47.478848 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:27:47.480881 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:27:47.483151 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:27:47.483468 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:27:47.485100 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:27:47.604169 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 13 00:27:47.613651 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:27:47.616396 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:27:47.795543 kernel: EXT4-fs (vda9): mounted filesystem 7f859ed0-e8c8-40c1-91d3-e1e964d8c4e8 r/w with ordered data mode. Quota mode: none.
Sep 13 00:27:47.796310 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:27:47.798116 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:27:47.801116 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:27:47.803033 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:27:47.804179 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:27:47.804225 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:27:47.804249 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:27:47.819307 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:27:47.820993 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:27:47.826585 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888)
Sep 13 00:27:47.828994 kernel: BTRFS info (device vda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 00:27:47.829017 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:27:47.832663 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 00:27:47.832719 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 00:27:47.835241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:27:47.868633 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:27:47.873264 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:27:47.877817 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:27:47.883358 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:27:47.997421 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:27:48.000470 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:27:48.002368 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:27:48.030200 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:27:48.031690 kernel: BTRFS info (device vda6): last unmount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 00:27:48.046885 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:27:48.063167 ignition[1002]: INFO : Ignition 2.21.0
Sep 13 00:27:48.063167 ignition[1002]: INFO : Stage: mount
Sep 13 00:27:48.065002 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:27:48.065002 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:27:48.067853 ignition[1002]: INFO : mount: mount passed
Sep 13 00:27:48.067853 ignition[1002]: INFO : Ignition finished successfully
Sep 13 00:27:48.072870 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:27:48.075156 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:27:48.110594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:27:48.147090 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014)
Sep 13 00:27:48.147124 kernel: BTRFS info (device vda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 00:27:48.147137 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:27:48.151522 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 00:27:48.151553 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 00:27:48.153518 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:27:48.198333 ignition[1031]: INFO : Ignition 2.21.0
Sep 13 00:27:48.199641 ignition[1031]: INFO : Stage: files
Sep 13 00:27:48.199641 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:27:48.201942 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:27:48.201942 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:27:48.204845 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:27:48.204845 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:27:48.204845 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:27:48.209634 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:27:48.209634 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:27:48.209634 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 00:27:48.209634 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 13 00:27:48.205580 unknown[1031]: wrote ssh authorized keys file for user: core
Sep 13 00:27:48.238198 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:27:48.375680 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 00:27:48.375680 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:27:48.379751 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 00:27:48.421695 systemd-networkd[849]: eth0: Gained IPv6LL
Sep 13 00:27:48.489949 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:27:48.643047 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:27:48.643047 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:27:48.647594 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:27:48.647594 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:27:48.647594 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:27:48.647594 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:27:48.647594 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:27:48.647594 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:27:48.647594 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:27:48.661712 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:27:48.661712 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:27:48.661712 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:27:48.661712 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:27:48.661712 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:27:48.661712 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 13 00:27:49.055829 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 00:27:50.058224 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:27:50.058224 ignition[1031]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 00:27:50.062787 ignition[1031]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:27:50.065480 ignition[1031]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:27:50.065480 ignition[1031]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 00:27:50.065480 ignition[1031]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 00:27:50.069988 ignition[1031]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:27:50.069988 ignition[1031]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:27:50.069988 ignition[1031]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 00:27:50.069988 ignition[1031]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:27:50.088275 ignition[1031]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:27:50.149304 ignition[1031]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:27:50.151752 ignition[1031]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:27:50.153552 ignition[1031]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:27:50.153552 ignition[1031]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:27:50.157077 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:27:50.159385 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:27:50.161277 ignition[1031]: INFO : files: files passed
Sep 13 00:27:50.162170 ignition[1031]: INFO : Ignition finished successfully
Sep 13 00:27:50.166851 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:27:50.169241 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:27:50.172828 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:27:50.194899 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:27:50.195049 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:27:50.199570 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 00:27:50.204331 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:27:50.206258 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:27:50.207962 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:27:50.208992 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:27:50.211196 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:27:50.214864 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:27:50.260505 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:27:50.260676 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:27:50.263180 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:27:50.264336 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:27:50.267454 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:27:50.268597 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:27:50.313693 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:27:50.316576 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:27:50.351970 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:27:50.354512 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:27:50.354701 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:27:50.357993 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:27:50.358153 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:27:50.362310 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:27:50.362478 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:27:50.365435 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:27:50.368646 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:27:50.369997 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:27:50.371321 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 00:27:50.375638 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:27:50.376853 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:27:50.379186 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:27:50.381518 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:27:50.383592 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:27:50.385453 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:27:50.385633 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:27:50.388397 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:27:50.390575 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:27:50.390927 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:27:50.391067 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:27:50.393948 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:27:50.394067 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:27:50.398548 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:27:50.398693 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:27:50.399660 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:27:50.400067 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:27:50.406571 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:27:50.407992 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:27:50.411298 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:27:50.412394 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:27:50.412510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:27:50.416557 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:27:50.416710 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:27:50.419727 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:27:50.419880 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:27:50.421208 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:27:50.421332 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:27:50.426848 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:27:50.427863 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:27:50.427994 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:27:50.432537 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:27:50.434501 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:27:50.434691 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:27:50.437008 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:27:50.437144 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:27:50.446133 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:27:50.446276 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:27:50.466744 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:27:50.477259 ignition[1086]: INFO : Ignition 2.21.0
Sep 13 00:27:50.477259 ignition[1086]: INFO : Stage: umount
Sep 13 00:27:50.480452 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:27:50.480452 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:27:50.480452 ignition[1086]: INFO : umount: umount passed
Sep 13 00:27:50.480452 ignition[1086]: INFO : Ignition finished successfully
Sep 13 00:27:50.486038 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:27:50.486195 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:27:50.489335 systemd[1]: Stopped target network.target - Network.
Sep 13 00:27:50.489414 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:27:50.489478 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:27:50.491202 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:27:50.491250 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:27:50.491743 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:27:50.491798 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:27:50.492100 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:27:50.492143 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:27:50.492592 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:27:50.493043 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:27:50.511235 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:27:50.511418 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:27:50.517955 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 13 00:27:50.518255 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:27:50.518402 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:27:50.522927 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 13 00:27:50.523923 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 13 00:27:50.524460 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:27:50.524539 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:27:50.528889 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:27:50.531259 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:27:50.531338 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:27:50.534007 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:27:50.534071 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:27:50.536547 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:27:50.536623 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:27:50.538001 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:27:50.538063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:27:50.542684 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:27:50.545304 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:27:50.545407 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 13 00:27:50.570090 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:27:50.570301 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:27:50.590243 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:27:50.590604 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:27:50.593710 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:27:50.593797 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:27:50.598970 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:27:50.600158 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:27:50.602581 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:27:50.603706 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:27:50.606282 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:27:50.606762 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:27:50.610144 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:27:50.610212 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:27:50.614430 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:27:50.616124 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 13 00:27:50.616190 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:27:50.621121 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:27:50.621186 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:27:50.623953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:27:50.624011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:27:50.630588 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 13 00:27:50.630696 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 13 00:27:50.630774 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 13 00:27:50.650440 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:27:50.650637 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:27:50.660138 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:27:50.660306 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:27:50.664218 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:27:50.665692 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:27:50.665830 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:27:50.669358 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:27:50.691458 systemd[1]: Switching root.
Sep 13 00:27:50.726362 systemd-journald[222]: Journal stopped
Sep 13 00:27:52.224109 systemd-journald[222]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:27:52.224184 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:27:52.224200 kernel: SELinux: policy capability open_perms=1
Sep 13 00:27:52.224214 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:27:52.224227 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:27:52.224246 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:27:52.224263 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:27:52.224277 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:27:52.224290 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:27:52.224303 kernel: SELinux: policy capability userspace_initial_context=0
Sep 13 00:27:52.224323 kernel: audit: type=1403 audit(1757723271.339:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:27:52.224339 systemd[1]: Successfully loaded SELinux policy in 52.769ms.
Sep 13 00:27:52.224368 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.505ms.
Sep 13 00:27:52.224389 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 00:27:52.224404 systemd[1]: Detected virtualization kvm.
Sep 13 00:27:52.224420 systemd[1]: Detected architecture x86-64.
Sep 13 00:27:52.224435 systemd[1]: Detected first boot.
Sep 13 00:27:52.224450 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:27:52.224464 zram_generator::config[1131]: No configuration found.
Sep 13 00:27:52.224479 kernel: Guest personality initialized and is inactive
Sep 13 00:27:52.224511 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 13 00:27:52.224525 kernel: Initialized host personality
Sep 13 00:27:52.224539 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 00:27:52.224566 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:27:52.224582 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 13 00:27:52.224597 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:27:52.224611 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:27:52.224626 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:27:52.224641 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:27:52.224655 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:27:52.224670 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:27:52.224684 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:27:52.224704 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:27:52.224720 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:27:52.224736 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:27:52.224753 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:27:52.224767 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:27:52.224782 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:27:52.224796 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:27:52.224811 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:27:52.224829 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:27:52.224844 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:27:52.224859 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:27:52.224874 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:27:52.224888 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:27:52.224903 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:27:52.224917 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:27:52.224932 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:27:52.224969 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:27:52.224990 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:27:52.225004 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:27:52.225019 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:27:52.225033 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:27:52.225048 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:27:52.225062 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:27:52.225077 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 13 00:27:52.225091 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:27:52.225108 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:27:52.225122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:27:52.225136 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:27:52.225150 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:27:52.225165 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:27:52.225179 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:27:52.225193 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:27:52.225208 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:27:52.225222 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:27:52.225239 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:27:52.225254 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:27:52.225270 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:27:52.225292 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:27:52.225307 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:27:52.225322 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:27:52.225336 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:27:52.225350 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:27:52.225364 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:27:52.225382 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:27:52.225396 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:27:52.225410 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:27:52.225425 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:27:52.225440 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:27:52.225456 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:27:52.225470 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:27:52.225484 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:27:52.225994 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:27:52.226009 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:27:52.226023 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:27:52.226038 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:27:52.226052 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:27:52.226069 kernel: fuse: init (API version 7.41)
Sep 13 00:27:52.226083 kernel: loop: module loaded
Sep 13 00:27:52.226113 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 13 00:27:52.226128 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:27:52.226142 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:27:52.226157 systemd[1]: Stopped verity-setup.service.
Sep 13 00:27:52.226172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:27:52.226188 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:27:52.226203 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:27:52.226217 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:27:52.226232 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:27:52.226246 kernel: ACPI: bus type drm_connector registered
Sep 13 00:27:52.226259 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:27:52.226297 systemd-journald[1206]: Collecting audit messages is disabled.
Sep 13 00:27:52.226325 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:27:52.226340 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:27:52.226355 systemd-journald[1206]: Journal started
Sep 13 00:27:52.226383 systemd-journald[1206]: Runtime Journal (/run/log/journal/192c327bbd1d4dc38c0997639f0c9231) is 6M, max 48.5M, 42.4M free.
Sep 13 00:27:51.960269 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:27:51.984100 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 00:27:51.984690 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:27:52.229621 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:27:52.232730 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:27:52.234739 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:27:52.235030 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:27:52.237848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:27:52.238103 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:27:52.239709 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:27:52.239981 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:27:52.241606 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:27:52.241894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:27:52.243748 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:27:52.244011 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:27:52.245571 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:27:52.245814 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:27:52.247384 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:27:52.249018 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:27:52.250795 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:27:52.252750 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 13 00:27:52.272388 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:27:52.275506 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:27:52.277978 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:27:52.279356 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:27:52.279390 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:27:52.281773 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 13 00:27:52.285794 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:27:52.288763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:27:52.290630 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:27:52.294621 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:27:52.296106 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:27:52.297273 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:27:52.298721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:27:52.300733 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:27:52.305190 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:27:52.312433 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:27:52.320635 systemd-journald[1206]: Time spent on flushing to /var/log/journal/192c327bbd1d4dc38c0997639f0c9231 is 127.399ms for 1074 entries.
Sep 13 00:27:52.320635 systemd-journald[1206]: System Journal (/var/log/journal/192c327bbd1d4dc38c0997639f0c9231) is 8M, max 195.6M, 187.6M free.
Sep 13 00:27:52.584332 systemd-journald[1206]: Received client request to flush runtime journal.
Sep 13 00:27:52.584439 kernel: loop0: detected capacity change from 0 to 146240
Sep 13 00:27:52.318741 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:27:52.320648 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:27:52.342825 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:27:52.345059 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:27:52.347472 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:27:52.589626 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:27:52.357695 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 13 00:27:52.589303 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:27:52.616225 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 13 00:27:52.620669 kernel: loop1: detected capacity change from 0 to 229808
Sep 13 00:27:52.623173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:27:52.638792 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:27:52.644113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:27:52.659518 kernel: loop2: detected capacity change from 0 to 113872
Sep 13 00:27:52.680374 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Sep 13 00:27:52.680814 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Sep 13 00:27:52.687526 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:27:52.696515 kernel: loop3: detected capacity change from 0 to 146240
Sep 13 00:27:52.717551 kernel: loop4: detected capacity change from 0 to 229808
Sep 13 00:27:52.728574 kernel: loop5: detected capacity change from 0 to 113872
Sep 13 00:27:52.738732 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 00:27:52.739368 (sd-merge)[1274]: Merged extensions into '/usr'.
Sep 13 00:27:52.747069 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:27:52.747091 systemd[1]: Reloading...
Sep 13 00:27:52.849539 zram_generator::config[1297]: No configuration found.
Sep 13 00:27:52.969930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:27:53.016816 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:27:53.075087 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:27:53.075209 systemd[1]: Reloading finished in 327 ms.
Sep 13 00:27:53.108098 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:27:53.109900 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:27:53.126688 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:27:53.129366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:27:53.142223 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:27:53.142255 systemd[1]: Reloading...
Sep 13 00:27:53.156832 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 13 00:27:53.157067 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 13 00:27:53.157413 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:27:53.157872 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:27:53.158928 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:27:53.159238 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
Sep 13 00:27:53.159318 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
Sep 13 00:27:53.164929 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:27:53.164945 systemd-tmpfiles[1339]: Skipping /boot
Sep 13 00:27:53.182988 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:27:53.183163 systemd-tmpfiles[1339]: Skipping /boot
Sep 13 00:27:53.200547 zram_generator::config[1369]: No configuration found.
Sep 13 00:27:53.336460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:27:53.435229 systemd[1]: Reloading finished in 292 ms.
Sep 13 00:27:53.460196 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:27:53.486660 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:27:53.497348 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 13 00:27:53.501172 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:27:53.502811 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:27:53.519888 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:27:53.524054 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:27:53.529796 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:27:53.535875 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:27:53.536313 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:27:53.542119 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:27:53.546386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:27:53.549705 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:27:53.551025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:27:53.551163 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:27:53.555258 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:27:53.558910 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:27:53.561446 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:27:53.565442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:27:53.566180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:27:53.568264 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:27:53.568680 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:27:53.571092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:27:53.571470 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:27:53.581672 systemd-udevd[1410]: Using default interface naming scheme 'v255'.
Sep 13 00:27:53.585259 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:27:53.586550 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:27:53.588100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:27:53.590795 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:27:53.593362 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:27:53.594675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:27:53.594808 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:27:53.600843 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:27:53.602176 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:27:53.603581 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:27:53.607074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:27:53.608175 augenrules[1442]: No rules
Sep 13 00:27:53.609044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:27:53.611559 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:27:53.611826 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 00:27:53.613786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:27:53.614016 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:27:53.616390 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:27:53.616785 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:27:53.626830 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:27:53.629312 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:27:53.637826 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:27:53.639796 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:27:53.654536 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:27:53.661352 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:27:53.665228 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 13 00:27:53.666775 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:27:53.671639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:27:53.675453 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:27:53.685664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:27:53.689327 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:27:53.690669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:27:53.690726 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:27:53.695732 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:27:53.700156 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:27:53.701412 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:27:53.701460 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:27:53.702379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:27:53.703850 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:27:53.705447 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:27:53.705738 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:27:53.707184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:27:53.707384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:27:53.718749 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:27:53.720006 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:27:53.723572 augenrules[1487]: /sbin/augenrules: No change
Sep 13 00:27:53.723701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:27:53.725411 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:27:53.737177 augenrules[1519]: No rules
Sep 13 00:27:53.739970 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 00:27:53.740388 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:27:53.740790 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 00:27:53.869254 systemd-networkd[1496]: lo: Link UP
Sep 13 00:27:53.869268 systemd-networkd[1496]: lo: Gained carrier
Sep 13 00:27:53.875144 systemd-networkd[1496]: Enumeration completed
Sep 13 00:27:53.876163 systemd-resolved[1408]: Positive Trust Anchors:
Sep 13 00:27:53.876183 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:27:53.876216 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:27:53.877179 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:27:53.880442 systemd-resolved[1408]: Defaulting to hostname 'linux'.
Sep 13 00:27:53.881056 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:27:53.882949 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 13 00:27:53.881421 systemd-networkd[1496]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:27:53.881550 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 13 00:27:53.883815 systemd-networkd[1496]: eth0: Link UP
Sep 13 00:27:53.884012 systemd-networkd[1496]: eth0: Gained carrier
Sep 13 00:27:53.884028 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:27:53.885624 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:27:53.892522 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:27:53.892762 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:27:53.921610 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:27:53.921654 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 13 00:27:53.921920 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 00:27:56.000577 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 00:27:53.894201 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:27:53.894304 systemd[1]: Reached target network.target - Network.
Sep 13 00:27:53.894830 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:27:53.895205 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:27:53.895821 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:27:53.896132 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:27:53.896445 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 13 00:27:53.897410 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:27:53.897787 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:27:53.897810 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:27:53.898123 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:27:53.898704 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:27:53.899117 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:27:53.899392 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:27:53.902369 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:27:53.905920 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:27:53.911586 systemd-networkd[1496]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:27:53.911645 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 13 00:27:53.911922 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 13 00:27:53.912296 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 13 00:27:53.922016 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:27:53.922619 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 13 00:27:53.923024 systemd-timesyncd[1499]: Network configuration changed, trying to establish connection.
Sep 13 00:27:53.923900 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:27:55.999170 systemd-resolved[1408]: Clock change detected. Flushing caches.
Sep 13 00:27:55.999257 systemd-timesyncd[1499]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 00:27:55.999301 systemd-timesyncd[1499]: Initial clock synchronization to Sat 2025-09-13 00:27:55.999120 UTC.
Sep 13 00:27:56.006639 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:27:56.006960 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:27:56.007313 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:27:56.007981 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:27:56.008009 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:27:56.009315 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:27:56.010781 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:27:56.013951 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:27:56.020711 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:27:56.023668 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:27:56.024887 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:27:56.026737 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 13 00:27:56.030707 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:27:56.036666 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:27:56.039195 extend-filesystems[1547]: Found /dev/vda6
Sep 13 00:27:56.041332 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:27:56.044584 jq[1546]: false
Sep 13 00:27:56.046223 extend-filesystems[1547]: Found /dev/vda9
Sep 13 00:27:56.047777 extend-filesystems[1547]: Checking size of /dev/vda9
Sep 13 00:27:56.053317 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:27:56.055148 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Refreshing passwd entry cache
Sep 13 00:27:56.055165 oslogin_cache_refresh[1549]: Refreshing passwd entry cache
Sep 13 00:27:56.058718 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:27:56.064273 extend-filesystems[1547]: Resized partition /dev/vda9
Sep 13 00:27:56.065612 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Failure getting users, quitting
Sep 13 00:27:56.065604 oslogin_cache_refresh[1549]: Failure getting users, quitting
Sep 13 00:27:56.065703 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 00:27:56.065703 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Refreshing group entry cache
Sep 13 00:27:56.065625 oslogin_cache_refresh[1549]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 00:27:56.065691 oslogin_cache_refresh[1549]: Refreshing group entry cache
Sep 13 00:27:56.070303 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Failure getting groups, quitting
Sep 13 00:27:56.070303 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 00:27:56.070294 oslogin_cache_refresh[1549]: Failure getting groups, quitting
Sep 13 00:27:56.070306 oslogin_cache_refresh[1549]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 00:27:56.071647 extend-filesystems[1569]: resize2fs 1.47.2 (1-Jan-2025)
Sep 13 00:27:56.074672 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:27:56.076827 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:27:56.077584 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:27:56.082740 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:27:56.089861 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:27:56.092516 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:27:56.094152 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 13 00:27:56.100162 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:27:56.101849 jq[1573]: true
Sep 13 00:27:56.102240 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:27:56.103013 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:27:56.103411 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 13 00:27:56.104798 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 13 00:27:56.108072 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:27:56.108403 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:27:56.110625 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:27:56.131552 jq[1584]: true
Sep 13 00:27:56.134064 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:27:56.134546 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:27:56.144015 tar[1580]: linux-amd64/LICENSE
Sep 13 00:27:56.219651 tar[1580]: linux-amd64/helm
Sep 13 00:27:56.219720 update_engine[1571]: I20250913 00:27:56.144706 1571 main.cc:92] Flatcar Update Engine starting
Sep 13 00:27:56.231520 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:27:56.236662 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:27:56.257370 dbus-daemon[1540]: [system] SELinux support is enabled
Sep 13 00:27:56.283676 update_engine[1571]: I20250913 00:27:56.261981 1571 update_check_scheduler.cc:74] Next update check in 8m46s
Sep 13 00:27:56.257617 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:27:56.283847 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:27:56.283847 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:27:56.283847 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:27:56.263878 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:27:56.302720 extend-filesystems[1547]: Resized filesystem in /dev/vda9
Sep 13 00:27:56.264342 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:27:56.268266 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:27:56.268301 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:27:56.270097 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:27:56.270144 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:27:56.273323 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:27:56.282791 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:27:56.321472 bash[1618]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:27:56.324503 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:27:56.325713 systemd-logind[1570]: New seat seat0. Sep 13 00:27:56.327618 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 00:27:56.328744 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:27:56.336379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:27:56.349438 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:27:56.525949 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:27:56.526340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:27:56.540884 systemd-logind[1570]: Watching system buttons on /dev/input/event2 (Power Button) Sep 13 00:27:56.556808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 13 00:27:56.604595 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:27:56.755514 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:27:56.757589 kernel: kvm_amd: TSC scaling supported Sep 13 00:27:56.757645 kernel: kvm_amd: Nested Virtualization enabled Sep 13 00:27:56.757693 kernel: kvm_amd: Nested Paging enabled Sep 13 00:27:56.757708 kernel: kvm_amd: LBR virtualization supported Sep 13 00:27:56.761523 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 13 00:27:56.761598 kernel: kvm_amd: Virtual GIF supported Sep 13 00:27:56.779050 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:27:56.783686 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:27:56.802575 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:27:56.804648 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:27:56.804935 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:27:56.811978 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:27:56.826535 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:27:56.839906 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:27:56.865923 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:27:56.869539 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:27:56.871062 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 13 00:27:56.992125 containerd[1597]: time="2025-09-13T00:27:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 13 00:27:56.993178 containerd[1597]: time="2025-09-13T00:27:56.993152864Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 13 00:27:57.003324 containerd[1597]: time="2025-09-13T00:27:57.003270141Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.79µs" Sep 13 00:27:57.003324 containerd[1597]: time="2025-09-13T00:27:57.003305528Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 13 00:27:57.003324 containerd[1597]: time="2025-09-13T00:27:57.003325415Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 13 00:27:57.003564 containerd[1597]: time="2025-09-13T00:27:57.003538855Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 13 00:27:57.003564 containerd[1597]: time="2025-09-13T00:27:57.003555577Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 13 00:27:57.003627 containerd[1597]: time="2025-09-13T00:27:57.003580614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 13 00:27:57.003663 containerd[1597]: time="2025-09-13T00:27:57.003643942Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 13 00:27:57.003663 containerd[1597]: time="2025-09-13T00:27:57.003657598Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 13 
00:27:57.004045 containerd[1597]: time="2025-09-13T00:27:57.004011091Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 13 00:27:57.004045 containerd[1597]: time="2025-09-13T00:27:57.004032351Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 13 00:27:57.004095 containerd[1597]: time="2025-09-13T00:27:57.004044233Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 13 00:27:57.004095 containerd[1597]: time="2025-09-13T00:27:57.004053481Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 13 00:27:57.004215 containerd[1597]: time="2025-09-13T00:27:57.004187933Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 13 00:27:57.004549 containerd[1597]: time="2025-09-13T00:27:57.004519364Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 13 00:27:57.004585 containerd[1597]: time="2025-09-13T00:27:57.004562585Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 13 00:27:57.004585 containerd[1597]: time="2025-09-13T00:27:57.004573736Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 13 00:27:57.004630 containerd[1597]: time="2025-09-13T00:27:57.004619943Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 13 00:27:57.004998 
containerd[1597]: time="2025-09-13T00:27:57.004955593Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 13 00:27:57.005066 containerd[1597]: time="2025-09-13T00:27:57.005043798Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:27:57.013280 containerd[1597]: time="2025-09-13T00:27:57.013198193Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 13 00:27:57.013620 containerd[1597]: time="2025-09-13T00:27:57.013425700Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 13 00:27:57.013620 containerd[1597]: time="2025-09-13T00:27:57.013444876Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 13 00:27:57.013762 containerd[1597]: time="2025-09-13T00:27:57.013745139Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 13 00:27:57.013820 containerd[1597]: time="2025-09-13T00:27:57.013806114Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 13 00:27:57.013881 containerd[1597]: time="2025-09-13T00:27:57.013867749Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 13 00:27:57.013941 containerd[1597]: time="2025-09-13T00:27:57.013928433Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 13 00:27:57.014005 containerd[1597]: time="2025-09-13T00:27:57.013994136Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 13 00:27:57.014064 containerd[1597]: time="2025-09-13T00:27:57.014051895Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 13 00:27:57.014111 containerd[1597]: 
time="2025-09-13T00:27:57.014100766Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 13 00:27:57.014155 containerd[1597]: time="2025-09-13T00:27:57.014144819Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 13 00:27:57.014206 containerd[1597]: time="2025-09-13T00:27:57.014195674Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 13 00:27:57.014405 containerd[1597]: time="2025-09-13T00:27:57.014389628Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 13 00:27:57.014501 containerd[1597]: time="2025-09-13T00:27:57.014463497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 13 00:27:57.014561 containerd[1597]: time="2025-09-13T00:27:57.014549027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 13 00:27:57.014621 containerd[1597]: time="2025-09-13T00:27:57.014609040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 13 00:27:57.014669 containerd[1597]: time="2025-09-13T00:27:57.014658593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 13 00:27:57.014715 containerd[1597]: time="2025-09-13T00:27:57.014704529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 13 00:27:57.014769 containerd[1597]: time="2025-09-13T00:27:57.014757628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 13 00:27:57.014832 containerd[1597]: time="2025-09-13T00:27:57.014820286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 13 00:27:57.014883 containerd[1597]: time="2025-09-13T00:27:57.014872674Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 13 00:27:57.014929 containerd[1597]: time="2025-09-13T00:27:57.014919131Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 13 00:27:57.014988 containerd[1597]: time="2025-09-13T00:27:57.014976970Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 13 00:27:57.015146 containerd[1597]: time="2025-09-13T00:27:57.015115630Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 13 00:27:57.015146 containerd[1597]: time="2025-09-13T00:27:57.015142510Z" level=info msg="Start snapshots syncer" Sep 13 00:27:57.015229 containerd[1597]: time="2025-09-13T00:27:57.015186894Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 13 00:27:57.015763 containerd[1597]: time="2025-09-13T00:27:57.015696540Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 13 00:27:57.015921 containerd[1597]: time="2025-09-13T00:27:57.015768775Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 13 00:27:57.015921 containerd[1597]: time="2025-09-13T00:27:57.015879142Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 13 00:27:57.016024 containerd[1597]: time="2025-09-13T00:27:57.015993166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 13 00:27:57.016024 containerd[1597]: time="2025-09-13T00:27:57.016019145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 13 00:27:57.016088 containerd[1597]: time="2025-09-13T00:27:57.016031919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 13 00:27:57.016088 containerd[1597]: time="2025-09-13T00:27:57.016045664Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 13 00:27:57.016088 containerd[1597]: time="2025-09-13T00:27:57.016059871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 13 00:27:57.016088 containerd[1597]: time="2025-09-13T00:27:57.016077544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 13 00:27:57.016195 containerd[1597]: time="2025-09-13T00:27:57.016090909Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 13 00:27:57.016195 containerd[1597]: time="2025-09-13T00:27:57.016141404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 13 00:27:57.016195 containerd[1597]: time="2025-09-13T00:27:57.016156923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 13 00:27:57.016195 containerd[1597]: time="2025-09-13T00:27:57.016171000Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 13 00:27:57.016291 containerd[1597]: time="2025-09-13T00:27:57.016203851Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 00:27:57.016291 containerd[1597]: time="2025-09-13T00:27:57.016221995Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 00:27:57.016291 containerd[1597]: time="2025-09-13T00:27:57.016233717Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 00:27:57.016291 containerd[1597]: time="2025-09-13T00:27:57.016247844Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 00:27:57.016291 containerd[1597]: time="2025-09-13T00:27:57.016257752Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 13 00:27:57.016291 containerd[1597]: time="2025-09-13T00:27:57.016269685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 13 00:27:57.016291 containerd[1597]: time="2025-09-13T00:27:57.016287177Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 13 00:27:57.016584 containerd[1597]: time="2025-09-13T00:27:57.016310822Z" level=info msg="runtime interface created" Sep 13 00:27:57.016584 containerd[1597]: time="2025-09-13T00:27:57.016317364Z" level=info msg="created NRI interface" Sep 13 00:27:57.016584 containerd[1597]: time="2025-09-13T00:27:57.016327283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 13 00:27:57.017576 containerd[1597]: time="2025-09-13T00:27:57.017507867Z" level=info msg="Connect containerd service" Sep 13 00:27:57.017671 containerd[1597]: time="2025-09-13T00:27:57.017647058Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:27:57.021333 
containerd[1597]: time="2025-09-13T00:27:57.021276907Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:27:57.141350 containerd[1597]: time="2025-09-13T00:27:57.140574665Z" level=info msg="Start subscribing containerd event" Sep 13 00:27:57.141350 containerd[1597]: time="2025-09-13T00:27:57.140619890Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:27:57.141350 containerd[1597]: time="2025-09-13T00:27:57.140660857Z" level=info msg="Start recovering state" Sep 13 00:27:57.141350 containerd[1597]: time="2025-09-13T00:27:57.140696123Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:27:57.141350 containerd[1597]: time="2025-09-13T00:27:57.140822500Z" level=info msg="Start event monitor" Sep 13 00:27:57.141350 containerd[1597]: time="2025-09-13T00:27:57.140850212Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:27:57.141350 containerd[1597]: time="2025-09-13T00:27:57.140862535Z" level=info msg="Start streaming server" Sep 13 00:27:57.141350 containerd[1597]: time="2025-09-13T00:27:57.140874137Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 13 00:27:57.141350 containerd[1597]: time="2025-09-13T00:27:57.140888193Z" level=info msg="runtime interface starting up..." Sep 13 00:27:57.141350 containerd[1597]: time="2025-09-13T00:27:57.140898964Z" level=info msg="starting plugins..." Sep 13 00:27:57.141350 containerd[1597]: time="2025-09-13T00:27:57.140919342Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 13 00:27:57.141350 containerd[1597]: time="2025-09-13T00:27:57.141131931Z" level=info msg="containerd successfully booted in 0.149588s" Sep 13 00:27:57.141392 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 13 00:27:57.271424 tar[1580]: linux-amd64/README.md Sep 13 00:27:57.298350 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:27:57.792751 systemd-networkd[1496]: eth0: Gained IPv6LL Sep 13 00:27:57.796247 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:27:57.798256 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:27:57.801175 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 00:27:57.803826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:27:57.806319 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:27:57.830785 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:27:57.831110 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 00:27:57.833776 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:27:57.836647 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:27:58.687768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:27:58.689685 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:27:58.691062 systemd[1]: Startup finished in 3.134s (kernel) + 9.665s (initrd) + 5.328s (userspace) = 18.129s. 
Sep 13 00:27:58.694208 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:27:59.336632 kubelet[1704]: E0913 00:27:59.336555 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:27:59.340858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:27:59.341055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:27:59.341471 systemd[1]: kubelet.service: Consumed 1.306s CPU time, 269.4M memory peak. Sep 13 00:28:00.283596 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:28:00.287147 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:45412.service - OpenSSH per-connection server daemon (10.0.0.1:45412). Sep 13 00:28:00.447855 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 45412 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:28:00.454551 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:28:00.470399 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:28:00.472687 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:28:00.492761 systemd-logind[1570]: New session 1 of user core. Sep 13 00:28:00.538622 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:28:00.550412 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 13 00:28:00.586394 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:28:00.591277 systemd-logind[1570]: New session c1 of user core. Sep 13 00:28:00.862466 systemd[1721]: Queued start job for default target default.target. Sep 13 00:28:00.889235 systemd[1721]: Created slice app.slice - User Application Slice. Sep 13 00:28:00.889276 systemd[1721]: Reached target paths.target - Paths. Sep 13 00:28:00.889331 systemd[1721]: Reached target timers.target - Timers. Sep 13 00:28:00.892962 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:28:00.909783 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:28:00.909962 systemd[1721]: Reached target sockets.target - Sockets. Sep 13 00:28:00.910025 systemd[1721]: Reached target basic.target - Basic System. Sep 13 00:28:00.910076 systemd[1721]: Reached target default.target - Main User Target. Sep 13 00:28:00.910117 systemd[1721]: Startup finished in 303ms. Sep 13 00:28:00.910808 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:28:00.914020 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:28:00.988953 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:45422.service - OpenSSH per-connection server daemon (10.0.0.1:45422). Sep 13 00:28:01.073712 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 45422 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:28:01.075320 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:28:01.083688 systemd-logind[1570]: New session 2 of user core. Sep 13 00:28:01.106841 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 13 00:28:01.176109 sshd[1734]: Connection closed by 10.0.0.1 port 45422 Sep 13 00:28:01.178449 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Sep 13 00:28:01.191190 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:45422.service: Deactivated successfully. Sep 13 00:28:01.195831 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:28:01.197218 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:28:01.203417 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:45434.service - OpenSSH per-connection server daemon (10.0.0.1:45434). Sep 13 00:28:01.205003 systemd-logind[1570]: Removed session 2. Sep 13 00:28:01.266419 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 45434 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:28:01.268345 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:28:01.283000 systemd-logind[1570]: New session 3 of user core. Sep 13 00:28:01.297221 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:28:01.362686 sshd[1742]: Connection closed by 10.0.0.1 port 45434 Sep 13 00:28:01.362416 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Sep 13 00:28:01.387134 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:45434.service: Deactivated successfully. Sep 13 00:28:01.391040 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:28:01.394945 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:28:01.403816 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:45448.service - OpenSSH per-connection server daemon (10.0.0.1:45448). Sep 13 00:28:01.404679 systemd-logind[1570]: Removed session 3. 
Sep 13 00:28:01.498689 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 45448 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:28:01.504307 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:28:01.534473 systemd-logind[1570]: New session 4 of user core. Sep 13 00:28:01.551915 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:28:01.621853 sshd[1750]: Connection closed by 10.0.0.1 port 45448 Sep 13 00:28:01.621131 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Sep 13 00:28:01.638971 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:45448.service: Deactivated successfully. Sep 13 00:28:01.641651 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:28:01.642917 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:28:01.645891 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:45460.service - OpenSSH per-connection server daemon (10.0.0.1:45460). Sep 13 00:28:01.647476 systemd-logind[1570]: Removed session 4. Sep 13 00:28:01.718589 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 45460 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:28:01.720616 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:28:01.727958 systemd-logind[1570]: New session 5 of user core. Sep 13 00:28:01.746792 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 13 00:28:01.815290 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:28:01.815842 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:28:01.840817 sudo[1759]: pam_unix(sudo:session): session closed for user root Sep 13 00:28:01.843917 sshd[1758]: Connection closed by 10.0.0.1 port 45460 Sep 13 00:28:01.845935 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Sep 13 00:28:01.864904 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:45460.service: Deactivated successfully. Sep 13 00:28:01.867374 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:28:01.869735 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:28:01.874927 systemd[1]: Started sshd@5-10.0.0.98:22-10.0.0.1:45468.service - OpenSSH per-connection server daemon (10.0.0.1:45468). Sep 13 00:28:01.876121 systemd-logind[1570]: Removed session 5. Sep 13 00:28:01.945033 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 45468 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:28:01.946114 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:28:01.952389 systemd-logind[1570]: New session 6 of user core. Sep 13 00:28:01.962675 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 13 00:28:02.019923 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:28:02.020261 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:28:02.654514 sudo[1769]: pam_unix(sudo:session): session closed for user root Sep 13 00:28:02.661898 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 13 00:28:02.662231 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:28:02.674393 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 00:28:02.738492 augenrules[1791]: No rules Sep 13 00:28:02.742359 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:28:02.742707 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 00:28:02.745124 sudo[1768]: pam_unix(sudo:session): session closed for user root Sep 13 00:28:02.750517 sshd[1767]: Connection closed by 10.0.0.1 port 45468 Sep 13 00:28:02.750698 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Sep 13 00:28:02.766085 systemd[1]: sshd@5-10.0.0.98:22-10.0.0.1:45468.service: Deactivated successfully. Sep 13 00:28:02.769727 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:28:02.772526 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:28:02.776580 systemd[1]: Started sshd@6-10.0.0.98:22-10.0.0.1:45484.service - OpenSSH per-connection server daemon (10.0.0.1:45484). Sep 13 00:28:02.777588 systemd-logind[1570]: Removed session 6. Sep 13 00:28:02.843984 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 45484 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:28:02.845717 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:28:02.852660 systemd-logind[1570]: New session 7 of user core. 
Sep 13 00:28:02.866768 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:28:02.925806 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:28:02.926159 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:28:04.278448 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:28:04.306124 (dockerd)[1824]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:28:04.918677 dockerd[1824]: time="2025-09-13T00:28:04.918576584Z" level=info msg="Starting up" Sep 13 00:28:04.920419 dockerd[1824]: time="2025-09-13T00:28:04.920379025Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 13 00:28:05.437982 dockerd[1824]: time="2025-09-13T00:28:05.437903182Z" level=info msg="Loading containers: start." Sep 13 00:28:05.449501 kernel: Initializing XFRM netlink socket Sep 13 00:28:05.746272 systemd-networkd[1496]: docker0: Link UP Sep 13 00:28:05.753028 dockerd[1824]: time="2025-09-13T00:28:05.752956822Z" level=info msg="Loading containers: done." Sep 13 00:28:05.856692 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2328105054-merged.mount: Deactivated successfully. 
Sep 13 00:28:05.859010 dockerd[1824]: time="2025-09-13T00:28:05.858931831Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:28:05.859129 dockerd[1824]: time="2025-09-13T00:28:05.859102852Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 13 00:28:05.859354 dockerd[1824]: time="2025-09-13T00:28:05.859316012Z" level=info msg="Initializing buildkit"
Sep 13 00:28:05.898099 dockerd[1824]: time="2025-09-13T00:28:05.898004871Z" level=info msg="Completed buildkit initialization"
Sep 13 00:28:05.907665 dockerd[1824]: time="2025-09-13T00:28:05.907416384Z" level=info msg="Daemon has completed initialization"
Sep 13 00:28:05.907994 dockerd[1824]: time="2025-09-13T00:28:05.907904069Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:28:05.908067 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 13 00:28:07.105092 containerd[1597]: time="2025-09-13T00:28:07.105007854Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 13 00:28:07.820370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206754587.mount: Deactivated successfully.
Sep 13 00:28:09.400253 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:28:09.402107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:28:09.616456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:28:09.628951 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:28:09.677669 kubelet[2102]: E0913 00:28:09.677497 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:28:09.686371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:28:09.686698 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:28:09.687293 systemd[1]: kubelet.service: Consumed 244ms CPU time, 110.3M memory peak.
Sep 13 00:28:10.086997 containerd[1597]: time="2025-09-13T00:28:10.086838756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:10.088055 containerd[1597]: time="2025-09-13T00:28:10.088014482Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Sep 13 00:28:10.090095 containerd[1597]: time="2025-09-13T00:28:10.090037717Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:10.092987 containerd[1597]: time="2025-09-13T00:28:10.092926315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:10.094046 containerd[1597]: time="2025-09-13T00:28:10.093999969Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.988917635s"
Sep 13 00:28:10.094110 containerd[1597]: time="2025-09-13T00:28:10.094048690Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 13 00:28:10.095167 containerd[1597]: time="2025-09-13T00:28:10.095111113Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 13 00:28:12.275130 containerd[1597]: time="2025-09-13T00:28:12.275055145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:12.276158 containerd[1597]: time="2025-09-13T00:28:12.276047847Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Sep 13 00:28:12.277743 containerd[1597]: time="2025-09-13T00:28:12.277666012Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:12.281536 containerd[1597]: time="2025-09-13T00:28:12.281219848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:12.282206 containerd[1597]: time="2025-09-13T00:28:12.282157777Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.186984006s"
Sep 13 00:28:12.282206 containerd[1597]: time="2025-09-13T00:28:12.282195708Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 13 00:28:12.283110 containerd[1597]: time="2025-09-13T00:28:12.283042787Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 13 00:28:15.052431 containerd[1597]: time="2025-09-13T00:28:15.052348112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:15.061434 containerd[1597]: time="2025-09-13T00:28:15.061319410Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Sep 13 00:28:15.065351 containerd[1597]: time="2025-09-13T00:28:15.065272765Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:15.068637 containerd[1597]: time="2025-09-13T00:28:15.068579318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:15.069896 containerd[1597]: time="2025-09-13T00:28:15.069816358Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.786715101s"
Sep 13 00:28:15.069896 containerd[1597]: time="2025-09-13T00:28:15.069885007Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 13 00:28:15.070653 containerd[1597]: time="2025-09-13T00:28:15.070613663Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 13 00:28:16.299829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3278317055.mount: Deactivated successfully.
Sep 13 00:28:17.290806 containerd[1597]: time="2025-09-13T00:28:17.290730418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:17.291464 containerd[1597]: time="2025-09-13T00:28:17.291391648Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Sep 13 00:28:17.293222 containerd[1597]: time="2025-09-13T00:28:17.293188659Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:17.295436 containerd[1597]: time="2025-09-13T00:28:17.295387483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:17.295857 containerd[1597]: time="2025-09-13T00:28:17.295814454Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.225171445s"
Sep 13 00:28:17.295902 containerd[1597]: time="2025-09-13T00:28:17.295861262Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 13 00:28:17.296636 containerd[1597]: time="2025-09-13T00:28:17.296613773Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 13 00:28:17.936051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3559665089.mount: Deactivated successfully.
Sep 13 00:28:19.237926 containerd[1597]: time="2025-09-13T00:28:19.237845740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:19.239134 containerd[1597]: time="2025-09-13T00:28:19.239099912Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 13 00:28:19.240919 containerd[1597]: time="2025-09-13T00:28:19.240862889Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:19.244893 containerd[1597]: time="2025-09-13T00:28:19.244831423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:19.246315 containerd[1597]: time="2025-09-13T00:28:19.246246267Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.94960426s"
Sep 13 00:28:19.246315 containerd[1597]: time="2025-09-13T00:28:19.246282374Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 13 00:28:19.247517 containerd[1597]: time="2025-09-13T00:28:19.247032481Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:28:19.900315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:28:19.902549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:28:20.202295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:28:20.226055 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:28:20.473956 kubelet[2186]: E0913 00:28:20.473769 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:28:20.479522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:28:20.479787 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:28:20.480320 systemd[1]: kubelet.service: Consumed 337ms CPU time, 111.3M memory peak.
Sep 13 00:28:20.548408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569206878.mount: Deactivated successfully.
Sep 13 00:28:20.569153 containerd[1597]: time="2025-09-13T00:28:20.569075206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:28:20.570182 containerd[1597]: time="2025-09-13T00:28:20.570125035Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 13 00:28:20.571602 containerd[1597]: time="2025-09-13T00:28:20.571561529Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:28:20.573807 containerd[1597]: time="2025-09-13T00:28:20.573747530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:28:20.574420 containerd[1597]: time="2025-09-13T00:28:20.574366040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.327276201s"
Sep 13 00:28:20.574420 containerd[1597]: time="2025-09-13T00:28:20.574414040Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:28:20.575070 containerd[1597]: time="2025-09-13T00:28:20.575039022Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 13 00:28:21.112019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount949705995.mount: Deactivated successfully.
Sep 13 00:28:23.362748 containerd[1597]: time="2025-09-13T00:28:23.362644453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:23.363942 containerd[1597]: time="2025-09-13T00:28:23.363905899Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Sep 13 00:28:23.365094 containerd[1597]: time="2025-09-13T00:28:23.365020540Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:23.368123 containerd[1597]: time="2025-09-13T00:28:23.368081952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:28:23.369239 containerd[1597]: time="2025-09-13T00:28:23.369201142Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.794127685s"
Sep 13 00:28:23.369301 containerd[1597]: time="2025-09-13T00:28:23.369235887Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 13 00:28:27.725945 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:28:27.726163 systemd[1]: kubelet.service: Consumed 337ms CPU time, 111.3M memory peak.
Sep 13 00:28:27.728516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:28:27.754322 systemd[1]: Reload requested from client PID 2284 ('systemctl') (unit session-7.scope)...
Sep 13 00:28:27.754335 systemd[1]: Reloading...
Sep 13 00:28:27.926523 zram_generator::config[2325]: No configuration found.
Sep 13 00:28:28.151165 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:28:28.297369 systemd[1]: Reloading finished in 542 ms.
Sep 13 00:28:28.915312 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 00:28:28.915473 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 00:28:28.915942 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:28:28.916009 systemd[1]: kubelet.service: Consumed 227ms CPU time, 98.2M memory peak.
Sep 13 00:28:28.919055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:28:29.139320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:28:29.150839 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:28:29.198591 kubelet[2374]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:28:29.198591 kubelet[2374]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:28:29.198591 kubelet[2374]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:28:29.198591 kubelet[2374]: I0913 00:28:29.198546 2374 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:28:29.582923 kubelet[2374]: I0913 00:28:29.582789 2374 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 13 00:28:29.582923 kubelet[2374]: I0913 00:28:29.582822 2374 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:28:29.583154 kubelet[2374]: I0913 00:28:29.583132 2374 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 13 00:28:29.622322 kubelet[2374]: E0913 00:28:29.622226 2374 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 13 00:28:29.625543 kubelet[2374]: I0913 00:28:29.625508 2374 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:28:29.638234 kubelet[2374]: I0913 00:28:29.638190 2374 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 13 00:28:29.644758 kubelet[2374]: I0913 00:28:29.644726 2374 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:28:29.645022 kubelet[2374]: I0913 00:28:29.644972 2374 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:28:29.645203 kubelet[2374]: I0913 00:28:29.644999 2374 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:28:29.645203 kubelet[2374]: I0913 00:28:29.645203 2374 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:28:29.645436 kubelet[2374]: I0913 00:28:29.645214 2374 container_manager_linux.go:303] "Creating device plugin manager"
Sep 13 00:28:29.645436 kubelet[2374]: I0913 00:28:29.645376 2374 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:28:29.649457 kubelet[2374]: I0913 00:28:29.649424 2374 kubelet.go:480] "Attempting to sync node with API server"
Sep 13 00:28:29.649457 kubelet[2374]: I0913 00:28:29.649453 2374 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:28:29.649554 kubelet[2374]: I0913 00:28:29.649502 2374 kubelet.go:386] "Adding apiserver pod source"
Sep 13 00:28:29.649554 kubelet[2374]: I0913 00:28:29.649522 2374 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:28:29.656147 kubelet[2374]: I0913 00:28:29.656110 2374 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 13 00:28:29.656696 kubelet[2374]: I0913 00:28:29.656661 2374 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 13 00:28:29.657868 kubelet[2374]: E0913 00:28:29.657839 2374 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 13 00:28:29.657925 kubelet[2374]: E0913 00:28:29.657894 2374 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 00:28:29.657987 kubelet[2374]: W0913 00:28:29.657962 2374 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:28:29.661198 kubelet[2374]: I0913 00:28:29.661168 2374 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:28:29.661242 kubelet[2374]: I0913 00:28:29.661222 2374 server.go:1289] "Started kubelet"
Sep 13 00:28:29.661841 kubelet[2374]: I0913 00:28:29.661789 2374 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:28:29.661943 kubelet[2374]: I0913 00:28:29.661908 2374 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:28:29.662241 kubelet[2374]: I0913 00:28:29.662206 2374 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:28:29.664500 kubelet[2374]: I0913 00:28:29.663362 2374 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:28:29.664500 kubelet[2374]: I0913 00:28:29.663754 2374 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:28:29.664500 kubelet[2374]: I0913 00:28:29.664047 2374 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:28:29.664500 kubelet[2374]: E0913 00:28:29.664307 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:28:29.665466 kubelet[2374]: I0913 00:28:29.665434 2374 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:28:29.667014 kubelet[2374]: I0913 00:28:29.666984 2374 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:28:29.667179 kubelet[2374]: E0913 00:28:29.667153 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="200ms"
Sep 13 00:28:29.667744 kubelet[2374]: I0913 00:28:29.667630 2374 factory.go:223] Registration of the systemd container factory successfully
Sep 13 00:28:29.667744 kubelet[2374]: I0913 00:28:29.667713 2374 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:28:29.667825 kubelet[2374]: E0913 00:28:29.667753 2374 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 00:28:29.668775 kubelet[2374]: E0913 00:28:29.668740 2374 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:28:29.668901 kubelet[2374]: I0913 00:28:29.668873 2374 factory.go:223] Registration of the containerd container factory successfully
Sep 13 00:28:29.673889 kubelet[2374]: I0913 00:28:29.673854 2374 server.go:317] "Adding debug handlers to kubelet server"
Sep 13 00:28:29.681052 kubelet[2374]: E0913 00:28:29.676655 2374 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864b0098dc77bb0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:28:29.661191088 +0000 UTC m=+0.506273179,LastTimestamp:2025-09-13 00:28:29.661191088 +0000 UTC m=+0.506273179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 13 00:28:29.690565 kubelet[2374]: I0913 00:28:29.690547 2374 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:28:29.690755 kubelet[2374]: I0913 00:28:29.690743 2374 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:28:29.690820 kubelet[2374]: I0913 00:28:29.690811 2374 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:28:29.697688 kubelet[2374]: I0913 00:28:29.697640 2374 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:28:29.699262 kubelet[2374]: I0913 00:28:29.699195 2374 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:28:29.699262 kubelet[2374]: I0913 00:28:29.699216 2374 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 13 00:28:29.699262 kubelet[2374]: I0913 00:28:29.699236 2374 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:28:29.699262 kubelet[2374]: I0913 00:28:29.699245 2374 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 00:28:29.699402 kubelet[2374]: E0913 00:28:29.699283 2374 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:28:29.700037 kubelet[2374]: E0913 00:28:29.700010 2374 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 13 00:28:29.764466 kubelet[2374]: E0913 00:28:29.764398 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:28:29.800300 kubelet[2374]: E0913 00:28:29.800231 2374 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 13 00:28:29.865779 kubelet[2374]: E0913 00:28:29.865618 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:28:29.868917 kubelet[2374]: E0913 00:28:29.868872 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="400ms"
Sep 13 00:28:29.932792 kubelet[2374]: I0913 00:28:29.932727 2374 policy_none.go:49] "None policy: Start"
Sep 13 00:28:29.932792 kubelet[2374]: I0913 00:28:29.932770 2374 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:28:29.932792 kubelet[2374]: I0913 00:28:29.932787 2374 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:28:29.943074 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 13 00:28:29.963240 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 13 00:28:29.966257 kubelet[2374]: E0913 00:28:29.966217 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:28:29.967101 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 13 00:28:29.977822 kubelet[2374]: E0913 00:28:29.977773 2374 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 13 00:28:29.978660 kubelet[2374]: I0913 00:28:29.978598 2374 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:28:29.979545 kubelet[2374]: I0913 00:28:29.978621 2374 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:28:29.979830 kubelet[2374]: I0913 00:28:29.979789 2374 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:28:29.980298 kubelet[2374]: E0913 00:28:29.980250 2374 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:28:29.980298 kubelet[2374]: E0913 00:28:29.980294 2374 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 13 00:28:30.021345 systemd[1]: Created slice kubepods-burstable-pod71e17dea7b1c51c98ef0362b53a2aef3.slice - libcontainer container kubepods-burstable-pod71e17dea7b1c51c98ef0362b53a2aef3.slice.
Sep 13 00:28:30.043675 kubelet[2374]: E0913 00:28:30.043607 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:28:30.046288 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice.
Sep 13 00:28:30.058035 kubelet[2374]: E0913 00:28:30.057988 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:28:30.061669 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice.
Sep 13 00:28:30.064549 kubelet[2374]: E0913 00:28:30.064506 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:28:30.069867 kubelet[2374]: I0913 00:28:30.069808 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:28:30.069867 kubelet[2374]: I0913 00:28:30.069838 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:28:30.069867 kubelet[2374]: I0913 00:28:30.069859 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 00:28:30.069867 kubelet[2374]: I0913 00:28:30.069875 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71e17dea7b1c51c98ef0362b53a2aef3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"71e17dea7b1c51c98ef0362b53a2aef3\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:28:30.070145 kubelet[2374]: I0913 00:28:30.069896 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71e17dea7b1c51c98ef0362b53a2aef3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"71e17dea7b1c51c98ef0362b53a2aef3\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:28:30.070145 kubelet[2374]: I0913 00:28:30.069920 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71e17dea7b1c51c98ef0362b53a2aef3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"71e17dea7b1c51c98ef0362b53a2aef3\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:28:30.070145 kubelet[2374]: I0913 00:28:30.070013 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:28:30.070145 kubelet[2374]: I0913 00:28:30.070054 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:28:30.070145 kubelet[2374]: I0913 00:28:30.070081 2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:28:30.082218 kubelet[2374]: I0913 00:28:30.082147 2374 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:28:30.082719 kubelet[2374]: E0913 00:28:30.082672 2374 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost"
Sep 13 00:28:30.269606 kubelet[2374]: E0913 00:28:30.269469 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="800ms"
Sep 13 00:28:30.285394 kubelet[2374]: I0913 00:28:30.285344 2374 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:28:30.285872 kubelet[2374]: E0913 00:28:30.285834 2374 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost"
Sep 13 00:28:30.344317 kubelet[2374]: E0913 00:28:30.344245 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:30.345535 containerd[1597]: time="2025-09-13T00:28:30.345155389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:71e17dea7b1c51c98ef0362b53a2aef3,Namespace:kube-system,Attempt:0,}"
Sep 13 00:28:30.359672 kubelet[2374]: E0913 00:28:30.359586 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:30.360165 containerd[1597]: time="2025-09-13T00:28:30.360122710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}"
Sep 13 00:28:30.366035 kubelet[2374]: E0913 00:28:30.365962 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:30.366791 containerd[1597]: time="2025-09-13T00:28:30.366754166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}"
Sep 13 00:28:30.392879 containerd[1597]: time="2025-09-13T00:28:30.392810916Z" level=info msg="connecting to shim 261b6ec80308850f505d63570f45b062c4ae1198ff69cbab375f4898d51add2b" address="unix:///run/containerd/s/66436fb2c897cc0a53a79559763e64abd5e5fac7c2b21f7fd68619674af46faf" namespace=k8s.io protocol=ttrpc version=3
Sep 13 00:28:30.409751 containerd[1597]: time="2025-09-13T00:28:30.409670068Z" level=info msg="connecting to shim 7529cca845d455458c959ccd5b53c2c45ccab774047ec87a527d435fb6562009" address="unix:///run/containerd/s/480d309a78ed2690b95c3196d5c8c0504331fe16dbc99aabb104b4cb79e451f2" namespace=k8s.io protocol=ttrpc version=3
Sep 13 00:28:30.459557 containerd[1597]: time="2025-09-13T00:28:30.458900137Z" level=info msg="connecting to shim b60f70cf92d9a7b0aff048500667bc9a6a92600a83a561031ca6944a50894bfc" address="unix:///run/containerd/s/208f53ecdd313d63bc7c24a3e6ff7568545c32abc70331f31747fc8aa4dfbc91" namespace=k8s.io protocol=ttrpc version=3
Sep 13 00:28:30.459806 systemd[1]: Started cri-containerd-261b6ec80308850f505d63570f45b062c4ae1198ff69cbab375f4898d51add2b.scope - libcontainer container 261b6ec80308850f505d63570f45b062c4ae1198ff69cbab375f4898d51add2b.
Sep 13 00:28:30.469631 systemd[1]: Started cri-containerd-7529cca845d455458c959ccd5b53c2c45ccab774047ec87a527d435fb6562009.scope - libcontainer container 7529cca845d455458c959ccd5b53c2c45ccab774047ec87a527d435fb6562009.
Sep 13 00:28:30.529677 systemd[1]: Started cri-containerd-b60f70cf92d9a7b0aff048500667bc9a6a92600a83a561031ca6944a50894bfc.scope - libcontainer container b60f70cf92d9a7b0aff048500667bc9a6a92600a83a561031ca6944a50894bfc.
Sep 13 00:28:30.550676 containerd[1597]: time="2025-09-13T00:28:30.550599882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:71e17dea7b1c51c98ef0362b53a2aef3,Namespace:kube-system,Attempt:0,} returns sandbox id \"261b6ec80308850f505d63570f45b062c4ae1198ff69cbab375f4898d51add2b\""
Sep 13 00:28:30.552436 kubelet[2374]: E0913 00:28:30.552293 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:30.559026 containerd[1597]: time="2025-09-13T00:28:30.558979742Z" level=info msg="CreateContainer within sandbox \"261b6ec80308850f505d63570f45b062c4ae1198ff69cbab375f4898d51add2b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:28:30.560931 containerd[1597]: time="2025-09-13T00:28:30.560902211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7529cca845d455458c959ccd5b53c2c45ccab774047ec87a527d435fb6562009\""
Sep 13 00:28:30.561663 kubelet[2374]: E0913 00:28:30.561622 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:30.567160 containerd[1597]: time="2025-09-13T00:28:30.567120133Z" level=info msg="CreateContainer within sandbox \"7529cca845d455458c959ccd5b53c2c45ccab774047ec87a527d435fb6562009\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:28:30.570500 containerd[1597]: time="2025-09-13T00:28:30.569707638Z" level=info msg="Container 566d39d5dca1a2ab74271a62677697e10cc714b1da6b0989ea9ca3e65f39f387: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:28:30.582580 containerd[1597]: time="2025-09-13T00:28:30.582548729Z" level=info msg="Container 9f438b5f5bba86b09a4f3351e171da179b2b3fe9f333026e091a6979010b4d5f: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:28:30.589379 containerd[1597]: time="2025-09-13T00:28:30.589301858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b60f70cf92d9a7b0aff048500667bc9a6a92600a83a561031ca6944a50894bfc\""
Sep 13 00:28:30.591225 containerd[1597]: time="2025-09-13T00:28:30.591189730Z" level=info msg="CreateContainer within sandbox \"261b6ec80308850f505d63570f45b062c4ae1198ff69cbab375f4898d51add2b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"566d39d5dca1a2ab74271a62677697e10cc714b1da6b0989ea9ca3e65f39f387\""
Sep 13 00:28:30.593784 containerd[1597]: time="2025-09-13T00:28:30.593741337Z" level=info msg="StartContainer for \"566d39d5dca1a2ab74271a62677697e10cc714b1da6b0989ea9ca3e65f39f387\""
Sep 13 00:28:30.595458 containerd[1597]: time="2025-09-13T00:28:30.595416451Z" level=info msg="connecting to shim 566d39d5dca1a2ab74271a62677697e10cc714b1da6b0989ea9ca3e65f39f387" address="unix:///run/containerd/s/66436fb2c897cc0a53a79559763e64abd5e5fac7c2b21f7fd68619674af46faf" protocol=ttrpc version=3
Sep 13 00:28:30.595934 kubelet[2374]: E0913 00:28:30.595881 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:30.601800 containerd[1597]: time="2025-09-13T00:28:30.601605918Z" level=info msg="CreateContainer within sandbox \"7529cca845d455458c959ccd5b53c2c45ccab774047ec87a527d435fb6562009\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9f438b5f5bba86b09a4f3351e171da179b2b3fe9f333026e091a6979010b4d5f\""
Sep 13 00:28:30.602164 containerd[1597]: time="2025-09-13T00:28:30.602138360Z" level=info msg="StartContainer for \"9f438b5f5bba86b09a4f3351e171da179b2b3fe9f333026e091a6979010b4d5f\""
Sep 13 00:28:30.602890 containerd[1597]: time="2025-09-13T00:28:30.602846599Z" level=info msg="CreateContainer within sandbox \"b60f70cf92d9a7b0aff048500667bc9a6a92600a83a561031ca6944a50894bfc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:28:30.603800 containerd[1597]: time="2025-09-13T00:28:30.603776574Z" level=info msg="connecting to shim 9f438b5f5bba86b09a4f3351e171da179b2b3fe9f333026e091a6979010b4d5f" address="unix:///run/containerd/s/480d309a78ed2690b95c3196d5c8c0504331fe16dbc99aabb104b4cb79e451f2" protocol=ttrpc version=3
Sep 13 00:28:30.613651 kubelet[2374]: E0913 00:28:30.613542 2374 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 00:28:30.663340 containerd[1597]: time="2025-09-13T00:28:30.663283633Z" level=info msg="Container 63c59e0b6a03454a8fc740bb075ab58b033065b76b766466aeb1101f4ec50e3e: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:28:30.675730 systemd[1]: Started cri-containerd-9f438b5f5bba86b09a4f3351e171da179b2b3fe9f333026e091a6979010b4d5f.scope - libcontainer container 9f438b5f5bba86b09a4f3351e171da179b2b3fe9f333026e091a6979010b4d5f.
Sep 13 00:28:30.677861 containerd[1597]: time="2025-09-13T00:28:30.677819326Z" level=info msg="CreateContainer within sandbox \"b60f70cf92d9a7b0aff048500667bc9a6a92600a83a561031ca6944a50894bfc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"63c59e0b6a03454a8fc740bb075ab58b033065b76b766466aeb1101f4ec50e3e\""
Sep 13 00:28:30.678800 containerd[1597]: time="2025-09-13T00:28:30.678752527Z" level=info msg="StartContainer for \"63c59e0b6a03454a8fc740bb075ab58b033065b76b766466aeb1101f4ec50e3e\""
Sep 13 00:28:30.680177 systemd[1]: Started cri-containerd-566d39d5dca1a2ab74271a62677697e10cc714b1da6b0989ea9ca3e65f39f387.scope - libcontainer container 566d39d5dca1a2ab74271a62677697e10cc714b1da6b0989ea9ca3e65f39f387.
Sep 13 00:28:30.680923 containerd[1597]: time="2025-09-13T00:28:30.680805887Z" level=info msg="connecting to shim 63c59e0b6a03454a8fc740bb075ab58b033065b76b766466aeb1101f4ec50e3e" address="unix:///run/containerd/s/208f53ecdd313d63bc7c24a3e6ff7568545c32abc70331f31747fc8aa4dfbc91" protocol=ttrpc version=3
Sep 13 00:28:30.688197 kubelet[2374]: I0913 00:28:30.688149 2374 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:28:30.688714 kubelet[2374]: E0913 00:28:30.688678 2374 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost"
Sep 13 00:28:30.718775 systemd[1]: Started cri-containerd-63c59e0b6a03454a8fc740bb075ab58b033065b76b766466aeb1101f4ec50e3e.scope - libcontainer container 63c59e0b6a03454a8fc740bb075ab58b033065b76b766466aeb1101f4ec50e3e.
Sep 13 00:28:30.731252 kubelet[2374]: E0913 00:28:30.731196 2374 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 00:28:30.795757 containerd[1597]: time="2025-09-13T00:28:30.795103866Z" level=info msg="StartContainer for \"9f438b5f5bba86b09a4f3351e171da179b2b3fe9f333026e091a6979010b4d5f\" returns successfully"
Sep 13 00:28:30.976351 containerd[1597]: time="2025-09-13T00:28:30.976295870Z" level=info msg="StartContainer for \"566d39d5dca1a2ab74271a62677697e10cc714b1da6b0989ea9ca3e65f39f387\" returns successfully"
Sep 13 00:28:30.989420 containerd[1597]: time="2025-09-13T00:28:30.989355340Z" level=info msg="StartContainer for \"63c59e0b6a03454a8fc740bb075ab58b033065b76b766466aeb1101f4ec50e3e\" returns successfully"
Sep 13 00:28:30.994737 kubelet[2374]: E0913 00:28:30.994540 2374 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 13 00:28:31.492000 kubelet[2374]: I0913 00:28:31.491946 2374 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:28:31.789444 kubelet[2374]: E0913 00:28:31.789331 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:28:31.790217 kubelet[2374]: E0913 00:28:31.789462 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:31.792282 kubelet[2374]: E0913 00:28:31.792260 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:28:31.792885 kubelet[2374]: E0913 00:28:31.792869 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:31.794724 kubelet[2374]: E0913 00:28:31.794706 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:28:31.794820 kubelet[2374]: E0913 00:28:31.794808 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:32.801188 kubelet[2374]: E0913 00:28:32.799423 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:28:32.801188 kubelet[2374]: E0913 00:28:32.799621 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:32.801188 kubelet[2374]: E0913 00:28:32.800825 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:28:32.801188 kubelet[2374]: E0913 00:28:32.800946 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:32.803873 kubelet[2374]: E0913 00:28:32.803585 2374 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:28:32.803873 kubelet[2374]: E0913 00:28:32.803803 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:33.066271 kubelet[2374]: E0913 00:28:33.066146 2374 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 13 00:28:33.143901 kubelet[2374]: I0913 00:28:33.143834 2374 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 13 00:28:33.143901 kubelet[2374]: E0913 00:28:33.143883 2374 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 13 00:28:33.155055 kubelet[2374]: E0913 00:28:33.155010 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:28:33.206620 kubelet[2374]: E0913 00:28:33.206321 2374 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1864b0098dc77bb0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:28:29.661191088 +0000 UTC m=+0.506273179,LastTimestamp:2025-09-13 00:28:29.661191088 +0000 UTC m=+0.506273179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 13 00:28:33.255882 kubelet[2374]: E0913 00:28:33.255828 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:28:33.358172 kubelet[2374]: E0913 00:28:33.357471 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:28:33.459405 kubelet[2374]: E0913 00:28:33.459294 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:28:33.559937 kubelet[2374]: E0913 00:28:33.559844 2374 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:28:33.658800 kubelet[2374]: I0913 00:28:33.658747 2374 apiserver.go:52] "Watching apiserver"
Sep 13 00:28:33.666079 kubelet[2374]: I0913 00:28:33.665999 2374 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 00:28:33.666079 kubelet[2374]: I0913 00:28:33.665933 2374 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:28:33.673306 kubelet[2374]: E0913 00:28:33.673271 2374 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:28:33.673374 kubelet[2374]: I0913 00:28:33.673317 2374 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:28:33.675412 kubelet[2374]: E0913 00:28:33.675377 2374 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:28:33.675412 kubelet[2374]: I0913 00:28:33.675399 2374 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:28:33.676867 kubelet[2374]: E0913 00:28:33.676838 2374 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:28:33.798377 kubelet[2374]: I0913 00:28:33.798317 2374 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:28:33.798815 kubelet[2374]: I0913 00:28:33.798791 2374 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:28:33.801461 kubelet[2374]: E0913 00:28:33.801405 2374 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:28:33.801461 kubelet[2374]: E0913 00:28:33.801420 2374 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:28:33.802067 kubelet[2374]: E0913 00:28:33.801686 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:33.802067 kubelet[2374]: E0913 00:28:33.801670 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:34.799924 kubelet[2374]: I0913 00:28:34.799889 2374 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:28:34.806459 kubelet[2374]: E0913 00:28:34.806403 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:28:35.137271 systemd[1]: Reload requested from client PID 2657 ('systemctl') (unit session-7.scope)...
Sep 13 00:28:35.137292 systemd[1]: Reloading...
Sep 13 00:28:35.231529 zram_generator::config[2703]: No configuration found.
Sep 13 00:28:35.449365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:28:35.612506 systemd[1]: Reloading finished in 474 ms.
Sep 13 00:28:35.651131 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:28:35.662560 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:28:35.662988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:28:35.663060 systemd[1]: kubelet.service: Consumed 1.113s CPU time, 130.4M memory peak.
Sep 13 00:28:35.665363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:28:35.896060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:28:35.905868 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:28:35.950705 kubelet[2745]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:28:35.952222 kubelet[2745]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:28:35.952222 kubelet[2745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:28:35.952222 kubelet[2745]: I0913 00:28:35.950986 2745 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:28:35.959595 kubelet[2745]: I0913 00:28:35.959553 2745 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 13 00:28:35.959595 kubelet[2745]: I0913 00:28:35.959590 2745 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:28:35.959877 kubelet[2745]: I0913 00:28:35.959837 2745 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 13 00:28:35.961708 kubelet[2745]: I0913 00:28:35.961517 2745 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 13 00:28:35.964009 kubelet[2745]: I0913 00:28:35.963896 2745 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:28:35.969552 kubelet[2745]: I0913 00:28:35.967752 2745 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 13 00:28:35.974421 kubelet[2745]: I0913 00:28:35.974369 2745 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:28:35.974647 kubelet[2745]: I0913 00:28:35.974607 2745 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:28:35.974808 kubelet[2745]: I0913 00:28:35.974631 2745 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:28:35.974808 kubelet[2745]: I0913 00:28:35.974805 2745 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:28:35.974966 kubelet[2745]: I0913 00:28:35.974815 2745 container_manager_linux.go:303] "Creating device plugin manager"
Sep 13 00:28:35.974966 kubelet[2745]: I0913 00:28:35.974865 2745 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:28:35.975054 kubelet[2745]: I0913 00:28:35.975033 2745 kubelet.go:480] "Attempting to sync node with API server"
Sep 13 00:28:35.975054 kubelet[2745]: I0913 00:28:35.975047 2745 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:28:35.975112 kubelet[2745]: I0913 00:28:35.975070 2745 kubelet.go:386] "Adding apiserver pod source"
Sep 13 00:28:35.975112 kubelet[2745]: I0913 00:28:35.975091 2745 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:28:35.976181 kubelet[2745]: I0913 00:28:35.976091 2745 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 13 00:28:35.976843 kubelet[2745]: I0913 00:28:35.976829 2745 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 13 00:28:35.980358 kubelet[2745]: I0913 00:28:35.980343 2745 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:28:35.980543 kubelet[2745]: I0913 00:28:35.980533 2745 server.go:1289] "Started kubelet"
Sep 13 00:28:35.980802 kubelet[2745]: I0913 00:28:35.980738 2745 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:28:35.981105 kubelet[2745]: I0913 00:28:35.981055 2745 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:28:35.981412 kubelet[2745]: I0913 00:28:35.981398 2745 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:28:35.984908 kubelet[2745]: I0913 00:28:35.984867 2745 server.go:317] "Adding debug handlers to kubelet server"
Sep 13 00:28:35.988731 kubelet[2745]: I0913 00:28:35.988366 2745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:28:35.990154 kubelet[2745]: I0913 00:28:35.990123 2745 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:28:35.990720 kubelet[2745]: I0913 00:28:35.990657 2745 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:28:35.990934 kubelet[2745]: I0913 00:28:35.990912 2745 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:28:35.992335 kubelet[2745]: I0913 00:28:35.991095 2745 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:28:35.992759 kubelet[2745]: I0913 00:28:35.992736 2745 factory.go:223] Registration of the systemd container factory successfully
Sep 13 00:28:35.992996 kubelet[2745]: E0913 00:28:35.992965 2745 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:28:35.993628 kubelet[2745]: I0913 00:28:35.993596 2745 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:28:35.996066 kubelet[2745]: I0913 00:28:35.995991 2745 factory.go:223] Registration of the containerd container factory successfully
Sep 13 00:28:36.015316 kubelet[2745]: I0913 00:28:36.015100 2745 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:28:36.017585 kubelet[2745]: I0913 00:28:36.017559 2745 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:28:36.017655 kubelet[2745]: I0913 00:28:36.017592 2745 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 13 00:28:36.017655 kubelet[2745]: I0913 00:28:36.017614 2745 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:28:36.017655 kubelet[2745]: I0913 00:28:36.017623 2745 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 00:28:36.017735 kubelet[2745]: E0913 00:28:36.017670 2745 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:28:36.043019 kubelet[2745]: I0913 00:28:36.042975 2745 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:28:36.043019 kubelet[2745]: I0913 00:28:36.042996 2745 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:28:36.043019 kubelet[2745]: I0913 00:28:36.043017 2745 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:28:36.043369 kubelet[2745]: I0913 00:28:36.043150 2745 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:28:36.043369 kubelet[2745]: I0913 00:28:36.043160 2745 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:28:36.043369 kubelet[2745]: I0913 00:28:36.043179 2745 policy_none.go:49] "None policy: Start"
Sep 13 00:28:36.043369 kubelet[2745]: I0913 00:28:36.043197 2745 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:28:36.043369 kubelet[2745]: I0913 00:28:36.043207 2745 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:28:36.043369 kubelet[2745]: I0913 00:28:36.043306 2745 state_mem.go:75] "Updated machine memory state"
Sep 13 00:28:36.051727 kubelet[2745]: E0913 00:28:36.051640 2745 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 13 00:28:36.051855 kubelet[2745]: I0913
00:28:36.051836 2745 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:28:36.051882 kubelet[2745]: I0913 00:28:36.051849 2745 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:28:36.052124 kubelet[2745]: I0913 00:28:36.052044 2745 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:28:36.052971 kubelet[2745]: E0913 00:28:36.052952 2745 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:28:36.119885 kubelet[2745]: I0913 00:28:36.119812 2745 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:28:36.120066 kubelet[2745]: I0913 00:28:36.119919 2745 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:28:36.120066 kubelet[2745]: I0913 00:28:36.119984 2745 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:28:36.157126 kubelet[2745]: I0913 00:28:36.157094 2745 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:28:36.192736 kubelet[2745]: I0913 00:28:36.192668 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:28:36.192736 kubelet[2745]: I0913 00:28:36.192712 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71e17dea7b1c51c98ef0362b53a2aef3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"71e17dea7b1c51c98ef0362b53a2aef3\") " 
pod="kube-system/kube-apiserver-localhost" Sep 13 00:28:36.192736 kubelet[2745]: I0913 00:28:36.192735 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71e17dea7b1c51c98ef0362b53a2aef3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"71e17dea7b1c51c98ef0362b53a2aef3\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:28:36.192963 kubelet[2745]: I0913 00:28:36.192756 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71e17dea7b1c51c98ef0362b53a2aef3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"71e17dea7b1c51c98ef0362b53a2aef3\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:28:36.192963 kubelet[2745]: I0913 00:28:36.192780 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:28:36.192963 kubelet[2745]: I0913 00:28:36.192797 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:28:36.192963 kubelet[2745]: I0913 00:28:36.192908 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:28:36.193078 kubelet[2745]: I0913 00:28:36.192965 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:28:36.193078 kubelet[2745]: I0913 00:28:36.192992 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:28:36.255910 kubelet[2745]: E0913 00:28:36.255856 2745 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:28:36.259501 kubelet[2745]: I0913 00:28:36.259073 2745 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 00:28:36.259501 kubelet[2745]: I0913 00:28:36.259190 2745 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:28:36.277428 sudo[2787]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:28:36.277835 sudo[2787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 13 00:28:36.555536 kubelet[2745]: E0913 00:28:36.555387 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:36.557017 kubelet[2745]: E0913 00:28:36.556005 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:36.557017 kubelet[2745]: E0913 00:28:36.556174 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:36.820233 sudo[2787]: pam_unix(sudo:session): session closed for user root Sep 13 00:28:36.976253 kubelet[2745]: I0913 00:28:36.976184 2745 apiserver.go:52] "Watching apiserver" Sep 13 00:28:36.991249 kubelet[2745]: I0913 00:28:36.991175 2745 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:28:37.035560 kubelet[2745]: E0913 00:28:37.034318 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:37.035560 kubelet[2745]: E0913 00:28:37.034387 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:37.040679 kubelet[2745]: E0913 00:28:37.040624 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:37.068006 kubelet[2745]: I0913 00:28:37.067917 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.06788715 podStartE2EDuration="1.06788715s" podCreationTimestamp="2025-09-13 00:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:28:37.067132394 +0000 UTC m=+1.155868151" watchObservedRunningTime="2025-09-13 00:28:37.06788715 +0000 UTC m=+1.156622907" Sep 13 00:28:37.077914 kubelet[2745]: 
I0913 00:28:37.077715 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.077664549 podStartE2EDuration="3.077664549s" podCreationTimestamp="2025-09-13 00:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:28:37.077651834 +0000 UTC m=+1.166387591" watchObservedRunningTime="2025-09-13 00:28:37.077664549 +0000 UTC m=+1.166400306" Sep 13 00:28:37.088943 kubelet[2745]: I0913 00:28:37.088864 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.088847192 podStartE2EDuration="1.088847192s" podCreationTimestamp="2025-09-13 00:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:28:37.088829307 +0000 UTC m=+1.177565064" watchObservedRunningTime="2025-09-13 00:28:37.088847192 +0000 UTC m=+1.177582949" Sep 13 00:28:38.037456 kubelet[2745]: E0913 00:28:38.037299 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:38.038271 kubelet[2745]: E0913 00:28:38.038117 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:38.221858 sudo[1804]: pam_unix(sudo:session): session closed for user root Sep 13 00:28:38.223427 sshd[1803]: Connection closed by 10.0.0.1 port 45484 Sep 13 00:28:38.224176 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Sep 13 00:28:38.230885 systemd[1]: sshd@6-10.0.0.98:22-10.0.0.1:45484.service: Deactivated successfully. 
Sep 13 00:28:38.234126 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:28:38.234383 systemd[1]: session-7.scope: Consumed 7.633s CPU time, 264M memory peak. Sep 13 00:28:38.235826 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:28:38.237728 systemd-logind[1570]: Removed session 7. Sep 13 00:28:40.764697 kubelet[2745]: E0913 00:28:40.764644 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:41.043069 kubelet[2745]: E0913 00:28:41.042943 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:41.564875 update_engine[1571]: I20250913 00:28:41.564792 1571 update_attempter.cc:509] Updating boot flags... Sep 13 00:28:42.150191 kubelet[2745]: I0913 00:28:42.150142 2745 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:28:42.150747 containerd[1597]: time="2025-09-13T00:28:42.150655075Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:28:42.151055 kubelet[2745]: I0913 00:28:42.150987 2745 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:28:43.206167 systemd[1]: Created slice kubepods-besteffort-podb2c8fa93_e469_4e18_91c5_24c8ef45c5aa.slice - libcontainer container kubepods-besteffort-podb2c8fa93_e469_4e18_91c5_24c8ef45c5aa.slice. Sep 13 00:28:43.221067 systemd[1]: Created slice kubepods-burstable-pod9d87643d_2ddf_49a4_afdb_f8d00e83762f.slice - libcontainer container kubepods-burstable-pod9d87643d_2ddf_49a4_afdb_f8d00e83762f.slice. 
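The kubelet messages throughout this log use the klog header format (severity letter, MMDD date, wall-clock time, pid, source file:line, then a structured message). A minimal, illustrative parser for that header — not part of the log, just a sketch of the format:

```python
import re

# Hedged sketch: parse kubelet klog headers like
#   'E0913 00:28:36.555387 2745 dns.go:153] "Nameserver limits exceeded"'
# Field names here are our own labels, not official klog terminology.
KLOG_RE = re.compile(
    r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) '        # severity (Info/Warn/Error/Fatal) + month+day
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) '    # HH:MM:SS.microseconds
    r'(?P<pid>\d+) '                          # process id (2745 in this log)
    r'(?P<src>[\w./-]+:\d+)\] '               # source file:line of the log call
    r'(?P<msg>.*)'                            # remaining structured message
)

def parse_klog(line: str):
    """Return the header fields as a dict, or None if the line is not klog."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

rec = parse_klog('E0913 00:28:36.555387 2745 dns.go:153] "Nameserver limits exceeded"')
# rec["sev"] == "E" and rec["src"] == "dns.go:153"
```

This only covers the header; the quoted message body is structured key=value logging and would need its own parser.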
Sep 13 00:28:43.246322 kubelet[2745]: I0913 00:28:43.246266 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cilium-cgroup\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.246872 kubelet[2745]: I0913 00:28:43.246405 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2c8fa93-e469-4e18-91c5-24c8ef45c5aa-xtables-lock\") pod \"kube-proxy-z7zwg\" (UID: \"b2c8fa93-e469-4e18-91c5-24c8ef45c5aa\") " pod="kube-system/kube-proxy-z7zwg" Sep 13 00:28:43.246872 kubelet[2745]: I0913 00:28:43.246435 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7tn\" (UniqueName: \"kubernetes.io/projected/b2c8fa93-e469-4e18-91c5-24c8ef45c5aa-kube-api-access-hj7tn\") pod \"kube-proxy-z7zwg\" (UID: \"b2c8fa93-e469-4e18-91c5-24c8ef45c5aa\") " pod="kube-system/kube-proxy-z7zwg" Sep 13 00:28:43.246872 kubelet[2745]: I0913 00:28:43.246453 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2c8fa93-e469-4e18-91c5-24c8ef45c5aa-lib-modules\") pod \"kube-proxy-z7zwg\" (UID: \"b2c8fa93-e469-4e18-91c5-24c8ef45c5aa\") " pod="kube-system/kube-proxy-z7zwg" Sep 13 00:28:43.246872 kubelet[2745]: I0913 00:28:43.246468 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cilium-run\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.246872 kubelet[2745]: I0913 00:28:43.246497 2745 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-bpf-maps\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.246872 kubelet[2745]: I0913 00:28:43.246511 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-hostproc\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.247075 kubelet[2745]: I0913 00:28:43.246580 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2c8fa93-e469-4e18-91c5-24c8ef45c5aa-kube-proxy\") pod \"kube-proxy-z7zwg\" (UID: \"b2c8fa93-e469-4e18-91c5-24c8ef45c5aa\") " pod="kube-system/kube-proxy-z7zwg" Sep 13 00:28:43.287344 systemd[1]: Created slice kubepods-besteffort-pode87d86ae_6792_443b_88f5_38bdb041e7b4.slice - libcontainer container kubepods-besteffort-pode87d86ae_6792_443b_88f5_38bdb041e7b4.slice. 
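The repeated dns.go:153 "Nameserver limits exceeded" errors above occur because the kubelet caps the nameserver list it applies at three entries (the classic glibc resolver limit) and logs when it drops the rest — the applied line in this log is "1.1.1.1 1.0.0.1 8.8.8.8". A small sketch reproducing that trimming; the limit constant is an assumption matching the kubelet's validation, not read from its source:

```python
# Hedged sketch: emulate how extra resolv.conf nameservers get dropped,
# producing the "some nameservers have been omitted" situation above.
MAX_NAMESERVERS = 3  # assumption: the kubelet/glibc three-nameserver cap

def applied_nameservers(resolv_conf: str):
    """Split resolv.conf nameservers into (kept, dropped) per the cap."""
    servers = [
        parts[1]
        for line in resolv_conf.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

kept, dropped = applied_nameservers(
    "nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
    "nameserver 8.8.8.8\nnameserver 9.9.9.9\n"
)
# kept matches the applied line in the log; anything past three is dropped
```

The fix on a real node is to trim /etc/resolv.conf (or the kubelet's --resolv-conf target) to at most three nameservers.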
Sep 13 00:28:43.347705 kubelet[2745]: I0913 00:28:43.347625 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-xtables-lock\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.348540 kubelet[2745]: I0913 00:28:43.348027 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-host-proc-sys-net\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.348540 kubelet[2745]: I0913 00:28:43.348091 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-host-proc-sys-kernel\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.348540 kubelet[2745]: I0913 00:28:43.348108 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d87643d-2ddf-49a4-afdb-f8d00e83762f-hubble-tls\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.348540 kubelet[2745]: I0913 00:28:43.348142 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cni-path\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.348540 kubelet[2745]: I0913 00:28:43.348218 2745 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-etc-cni-netd\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.348540 kubelet[2745]: I0913 00:28:43.348240 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-lib-modules\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.348852 kubelet[2745]: I0913 00:28:43.348277 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rwcm\" (UniqueName: \"kubernetes.io/projected/9d87643d-2ddf-49a4-afdb-f8d00e83762f-kube-api-access-5rwcm\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.348852 kubelet[2745]: I0913 00:28:43.348395 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d87643d-2ddf-49a4-afdb-f8d00e83762f-clustermesh-secrets\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.348852 kubelet[2745]: I0913 00:28:43.348420 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cilium-config-path\") pod \"cilium-f5rzl\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " pod="kube-system/cilium-f5rzl" Sep 13 00:28:43.449816 kubelet[2745]: I0913 00:28:43.449600 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e87d86ae-6792-443b-88f5-38bdb041e7b4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-g4jmb\" (UID: \"e87d86ae-6792-443b-88f5-38bdb041e7b4\") " pod="kube-system/cilium-operator-6c4d7847fc-g4jmb" Sep 13 00:28:43.449816 kubelet[2745]: I0913 00:28:43.449710 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbw4m\" (UniqueName: \"kubernetes.io/projected/e87d86ae-6792-443b-88f5-38bdb041e7b4-kube-api-access-jbw4m\") pod \"cilium-operator-6c4d7847fc-g4jmb\" (UID: \"e87d86ae-6792-443b-88f5-38bdb041e7b4\") " pod="kube-system/cilium-operator-6c4d7847fc-g4jmb" Sep 13 00:28:43.516189 kubelet[2745]: E0913 00:28:43.516048 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:43.516941 containerd[1597]: time="2025-09-13T00:28:43.516882720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7zwg,Uid:b2c8fa93-e469-4e18-91c5-24c8ef45c5aa,Namespace:kube-system,Attempt:0,}" Sep 13 00:28:43.525039 kubelet[2745]: E0913 00:28:43.524992 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:43.525673 containerd[1597]: time="2025-09-13T00:28:43.525622155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f5rzl,Uid:9d87643d-2ddf-49a4-afdb-f8d00e83762f,Namespace:kube-system,Attempt:0,}" Sep 13 00:28:43.573708 containerd[1597]: time="2025-09-13T00:28:43.573594644Z" level=info msg="connecting to shim b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca" address="unix:///run/containerd/s/8ca48ac661a232ab2f0844691a57937897ff23a4f15878035d90b67760acfe7a" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:28:43.575404 containerd[1597]: time="2025-09-13T00:28:43.575361892Z" level=info 
msg="connecting to shim 585fe91d99071e29ea4b064d71de0d97513c607ec66c95eeadbf5a95521f992a" address="unix:///run/containerd/s/8acf03d190830f278e6b2b98d8acd65f56eae442f4d4a9cbc565f10fb4615cca" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:28:43.590774 kubelet[2745]: E0913 00:28:43.590693 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:43.591657 containerd[1597]: time="2025-09-13T00:28:43.591611064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g4jmb,Uid:e87d86ae-6792-443b-88f5-38bdb041e7b4,Namespace:kube-system,Attempt:0,}" Sep 13 00:28:43.624774 containerd[1597]: time="2025-09-13T00:28:43.622583638Z" level=info msg="connecting to shim 46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725" address="unix:///run/containerd/s/0e98555543ba91cf46b1d17e5cfe16ffbc49b90ed5e358c4248642aae5d4c511" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:28:43.669859 systemd[1]: Started cri-containerd-585fe91d99071e29ea4b064d71de0d97513c607ec66c95eeadbf5a95521f992a.scope - libcontainer container 585fe91d99071e29ea4b064d71de0d97513c607ec66c95eeadbf5a95521f992a. Sep 13 00:28:43.685124 systemd[1]: Started cri-containerd-46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725.scope - libcontainer container 46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725. Sep 13 00:28:43.688098 systemd[1]: Started cri-containerd-b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca.scope - libcontainer container b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca. 
Sep 13 00:28:43.828575 containerd[1597]: time="2025-09-13T00:28:43.827561826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7zwg,Uid:b2c8fa93-e469-4e18-91c5-24c8ef45c5aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"585fe91d99071e29ea4b064d71de0d97513c607ec66c95eeadbf5a95521f992a\"" Sep 13 00:28:43.829447 kubelet[2745]: E0913 00:28:43.829412 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:43.846333 containerd[1597]: time="2025-09-13T00:28:43.846255289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f5rzl,Uid:9d87643d-2ddf-49a4-afdb-f8d00e83762f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\"" Sep 13 00:28:43.851737 kubelet[2745]: E0913 00:28:43.850224 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:43.852544 containerd[1597]: time="2025-09-13T00:28:43.852380983Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:28:43.861062 containerd[1597]: time="2025-09-13T00:28:43.861000482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g4jmb,Uid:e87d86ae-6792-443b-88f5-38bdb041e7b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725\"" Sep 13 00:28:43.861644 containerd[1597]: time="2025-09-13T00:28:43.861623422Z" level=info msg="CreateContainer within sandbox \"585fe91d99071e29ea4b064d71de0d97513c607ec66c95eeadbf5a95521f992a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:28:43.868080 kubelet[2745]: E0913 00:28:43.863913 2745 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:43.919367 containerd[1597]: time="2025-09-13T00:28:43.919297709Z" level=info msg="Container b37f865421bdbbf3cace1a2f2494e4e13721b508a3c1305eec79effe9685089e: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:28:43.941841 containerd[1597]: time="2025-09-13T00:28:43.941101394Z" level=info msg="CreateContainer within sandbox \"585fe91d99071e29ea4b064d71de0d97513c607ec66c95eeadbf5a95521f992a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b37f865421bdbbf3cace1a2f2494e4e13721b508a3c1305eec79effe9685089e\"" Sep 13 00:28:43.944582 containerd[1597]: time="2025-09-13T00:28:43.942459456Z" level=info msg="StartContainer for \"b37f865421bdbbf3cace1a2f2494e4e13721b508a3c1305eec79effe9685089e\"" Sep 13 00:28:43.949935 containerd[1597]: time="2025-09-13T00:28:43.949612417Z" level=info msg="connecting to shim b37f865421bdbbf3cace1a2f2494e4e13721b508a3c1305eec79effe9685089e" address="unix:///run/containerd/s/8acf03d190830f278e6b2b98d8acd65f56eae442f4d4a9cbc565f10fb4615cca" protocol=ttrpc version=3 Sep 13 00:28:44.004250 systemd[1]: Started cri-containerd-b37f865421bdbbf3cace1a2f2494e4e13721b508a3c1305eec79effe9685089e.scope - libcontainer container b37f865421bdbbf3cace1a2f2494e4e13721b508a3c1305eec79effe9685089e. 
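The podStartSLOduration values in the pod_startup_latency_tracker entries above appear to be simply observedRunningTime minus podCreationTimestamp when no image pull happened (both pull timestamps are the zero time). A sketch verifying that arithmetic against the kube-proxy-z7zwg entry later in this log; the interpretation is an inference from the logged fields, not a statement of the tracker's implementation:

```python
from datetime import datetime

# Hedged sketch: recompute a podStartSLOduration from the two logged
# timestamps (values taken from the kube-proxy-z7zwg entry in this log,
# truncated to microseconds for strptime).
fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created = datetime.strptime("2025-09-13 00:28:43.000000 +0000", fmt)
running = datetime.strptime("2025-09-13 00:28:45.139593 +0000", fmt)

slo = (running - created).total_seconds()
# slo is 2.139593, matching the logged podStartSLOduration=2.139593624
# up to the truncated sub-microsecond digits
```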
Sep 13 00:28:44.104801 containerd[1597]: time="2025-09-13T00:28:44.104419140Z" level=info msg="StartContainer for \"b37f865421bdbbf3cace1a2f2494e4e13721b508a3c1305eec79effe9685089e\" returns successfully" Sep 13 00:28:44.546788 kubelet[2745]: E0913 00:28:44.546746 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:45.075689 kubelet[2745]: E0913 00:28:45.074318 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:45.075689 kubelet[2745]: E0913 00:28:45.075058 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:45.139728 kubelet[2745]: I0913 00:28:45.139617 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z7zwg" podStartSLOduration=2.139593624 podStartE2EDuration="2.139593624s" podCreationTimestamp="2025-09-13 00:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:28:45.117900187 +0000 UTC m=+9.206635964" watchObservedRunningTime="2025-09-13 00:28:45.139593624 +0000 UTC m=+9.228329381" Sep 13 00:28:46.026403 kubelet[2745]: E0913 00:28:46.026354 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:46.076356 kubelet[2745]: E0913 00:28:46.076300 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:46.077060 kubelet[2745]: E0913 
00:28:46.077034 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:46.077230 kubelet[2745]: E0913 00:28:46.077179 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:47.078577 kubelet[2745]: E0913 00:28:47.078520 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:28:50.688157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1233087830.mount: Deactivated successfully. Sep 13 00:29:00.867093 containerd[1597]: time="2025-09-13T00:29:00.867023140Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:29:00.867833 containerd[1597]: time="2025-09-13T00:29:00.867796365Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 13 00:29:00.869034 containerd[1597]: time="2025-09-13T00:29:00.868991294Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:29:00.870664 containerd[1597]: time="2025-09-13T00:29:00.870631791Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.018207926s" Sep 13 00:29:00.870664 containerd[1597]: time="2025-09-13T00:29:00.870667248Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:29:00.871721 containerd[1597]: time="2025-09-13T00:29:00.871694691Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:29:00.882523 containerd[1597]: time="2025-09-13T00:29:00.882468046Z" level=info msg="CreateContainer within sandbox \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:29:00.898825 containerd[1597]: time="2025-09-13T00:29:00.898768843Z" level=info msg="Container 8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:29:01.391224 containerd[1597]: time="2025-09-13T00:29:01.391028418Z" level=info msg="CreateContainer within sandbox \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\"" Sep 13 00:29:01.393664 containerd[1597]: time="2025-09-13T00:29:01.393540505Z" level=info msg="StartContainer for \"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\"" Sep 13 00:29:01.394709 containerd[1597]: time="2025-09-13T00:29:01.394669951Z" level=info msg="connecting to shim 8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f" address="unix:///run/containerd/s/8ca48ac661a232ab2f0844691a57937897ff23a4f15878035d90b67760acfe7a" protocol=ttrpc version=3 Sep 13 00:29:01.432758 
systemd[1]: Started cri-containerd-8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f.scope - libcontainer container 8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f. Sep 13 00:29:01.478685 systemd[1]: cri-containerd-8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f.scope: Deactivated successfully. Sep 13 00:29:01.482137 containerd[1597]: time="2025-09-13T00:29:01.482086370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\" id:\"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\" pid:3191 exited_at:{seconds:1757723341 nanos:481034500}" Sep 13 00:29:01.794592 containerd[1597]: time="2025-09-13T00:29:01.794527025Z" level=info msg="received exit event container_id:\"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\" id:\"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\" pid:3191 exited_at:{seconds:1757723341 nanos:481034500}" Sep 13 00:29:01.795936 containerd[1597]: time="2025-09-13T00:29:01.795880431Z" level=info msg="StartContainer for \"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\" returns successfully" Sep 13 00:29:01.823899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f-rootfs.mount: Deactivated successfully. Sep 13 00:29:02.793683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3899386535.mount: Deactivated successfully. 
Sep 13 00:29:02.802124 kubelet[2745]: E0913 00:29:02.802082 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:02.809254 containerd[1597]: time="2025-09-13T00:29:02.809074793Z" level=info msg="CreateContainer within sandbox \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:29:02.838122 containerd[1597]: time="2025-09-13T00:29:02.837681978Z" level=info msg="Container 5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:29:02.849225 containerd[1597]: time="2025-09-13T00:29:02.849158768Z" level=info msg="CreateContainer within sandbox \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\"" Sep 13 00:29:02.852147 containerd[1597]: time="2025-09-13T00:29:02.852098999Z" level=info msg="StartContainer for \"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\"" Sep 13 00:29:02.854173 containerd[1597]: time="2025-09-13T00:29:02.854080126Z" level=info msg="connecting to shim 5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151" address="unix:///run/containerd/s/8ca48ac661a232ab2f0844691a57937897ff23a4f15878035d90b67760acfe7a" protocol=ttrpc version=3 Sep 13 00:29:02.880687 systemd[1]: Started cri-containerd-5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151.scope - libcontainer container 5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151. 
Sep 13 00:29:02.968228 containerd[1597]: time="2025-09-13T00:29:02.968179340Z" level=info msg="StartContainer for \"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\" returns successfully" Sep 13 00:29:02.977112 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:29:02.977453 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:29:02.977731 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:29:02.979707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:29:02.981920 systemd[1]: cri-containerd-5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151.scope: Deactivated successfully. Sep 13 00:29:02.984244 containerd[1597]: time="2025-09-13T00:29:02.984084280Z" level=info msg="received exit event container_id:\"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\" id:\"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\" pid:3244 exited_at:{seconds:1757723342 nanos:983760961}" Sep 13 00:29:02.984244 containerd[1597]: time="2025-09-13T00:29:02.984192984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\" id:\"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\" pid:3244 exited_at:{seconds:1757723342 nanos:983760961}" Sep 13 00:29:03.008641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 13 00:29:03.339441 containerd[1597]: time="2025-09-13T00:29:03.339364925Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:29:03.340315 containerd[1597]: time="2025-09-13T00:29:03.340255330Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 13 00:29:03.342426 containerd[1597]: time="2025-09-13T00:29:03.342374986Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:29:03.343947 containerd[1597]: time="2025-09-13T00:29:03.343897460Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.472171209s" Sep 13 00:29:03.343947 containerd[1597]: time="2025-09-13T00:29:03.343933388Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 00:29:03.353158 containerd[1597]: time="2025-09-13T00:29:03.353089239Z" level=info msg="CreateContainer within sandbox \"46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:29:03.364311 containerd[1597]: time="2025-09-13T00:29:03.364245001Z" level=info msg="Container 
c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:29:03.382108 containerd[1597]: time="2025-09-13T00:29:03.379210048Z" level=info msg="CreateContainer within sandbox \"46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\"" Sep 13 00:29:03.383034 containerd[1597]: time="2025-09-13T00:29:03.382994176Z" level=info msg="StartContainer for \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\"" Sep 13 00:29:03.384186 containerd[1597]: time="2025-09-13T00:29:03.384155280Z" level=info msg="connecting to shim c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662" address="unix:///run/containerd/s/0e98555543ba91cf46b1d17e5cfe16ffbc49b90ed5e358c4248642aae5d4c511" protocol=ttrpc version=3 Sep 13 00:29:03.423145 systemd[1]: Started cri-containerd-c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662.scope - libcontainer container c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662. 
Sep 13 00:29:03.515412 containerd[1597]: time="2025-09-13T00:29:03.513519183Z" level=info msg="StartContainer for \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\" returns successfully" Sep 13 00:29:03.814018 kubelet[2745]: E0913 00:29:03.813970 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:03.830849 kubelet[2745]: E0913 00:29:03.830793 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:03.831249 containerd[1597]: time="2025-09-13T00:29:03.831180547Z" level=info msg="CreateContainer within sandbox \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:29:04.012375 kubelet[2745]: I0913 00:29:04.009771 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-g4jmb" podStartSLOduration=1.5348764400000001 podStartE2EDuration="21.009727584s" podCreationTimestamp="2025-09-13 00:28:43 +0000 UTC" firstStartedPulling="2025-09-13 00:28:43.86992033 +0000 UTC m=+7.958656087" lastFinishedPulling="2025-09-13 00:29:03.344771474 +0000 UTC m=+27.433507231" observedRunningTime="2025-09-13 00:29:03.986336472 +0000 UTC m=+28.075072259" watchObservedRunningTime="2025-09-13 00:29:04.009727584 +0000 UTC m=+28.098463342" Sep 13 00:29:04.022525 containerd[1597]: time="2025-09-13T00:29:04.022469466Z" level=info msg="Container 787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:29:04.099240 containerd[1597]: time="2025-09-13T00:29:04.098906107Z" level=info msg="CreateContainer within sandbox \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" for 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\"" Sep 13 00:29:04.106380 containerd[1597]: time="2025-09-13T00:29:04.099963946Z" level=info msg="StartContainer for \"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\"" Sep 13 00:29:04.106380 containerd[1597]: time="2025-09-13T00:29:04.103661420Z" level=info msg="connecting to shim 787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0" address="unix:///run/containerd/s/8ca48ac661a232ab2f0844691a57937897ff23a4f15878035d90b67760acfe7a" protocol=ttrpc version=3 Sep 13 00:29:04.163731 systemd[1]: Started cri-containerd-787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0.scope - libcontainer container 787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0. Sep 13 00:29:04.321360 containerd[1597]: time="2025-09-13T00:29:04.317460676Z" level=info msg="StartContainer for \"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\" returns successfully" Sep 13 00:29:04.338612 systemd[1]: cri-containerd-787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0.scope: Deactivated successfully. 
Sep 13 00:29:04.348651 containerd[1597]: time="2025-09-13T00:29:04.348600225Z" level=info msg="received exit event container_id:\"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\" id:\"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\" pid:3333 exited_at:{seconds:1757723344 nanos:348198669}" Sep 13 00:29:04.348875 containerd[1597]: time="2025-09-13T00:29:04.348852819Z" level=info msg="TaskExit event in podsandbox handler container_id:\"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\" id:\"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\" pid:3333 exited_at:{seconds:1757723344 nanos:348198669}" Sep 13 00:29:04.429103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0-rootfs.mount: Deactivated successfully. Sep 13 00:29:04.839927 kubelet[2745]: E0913 00:29:04.838128 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:04.839927 kubelet[2745]: E0913 00:29:04.838864 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:06.011585 kubelet[2745]: E0913 00:29:06.009653 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:06.273934 containerd[1597]: time="2025-09-13T00:29:06.270011498Z" level=info msg="CreateContainer within sandbox \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:29:06.510456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2548624348.mount: Deactivated successfully. 
Sep 13 00:29:06.539543 containerd[1597]: time="2025-09-13T00:29:06.537714695Z" level=info msg="Container be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:29:06.576753 containerd[1597]: time="2025-09-13T00:29:06.576441159Z" level=info msg="CreateContainer within sandbox \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\"" Sep 13 00:29:06.577688 containerd[1597]: time="2025-09-13T00:29:06.577503887Z" level=info msg="StartContainer for \"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\"" Sep 13 00:29:06.578854 containerd[1597]: time="2025-09-13T00:29:06.578822977Z" level=info msg="connecting to shim be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86" address="unix:///run/containerd/s/8ca48ac661a232ab2f0844691a57937897ff23a4f15878035d90b67760acfe7a" protocol=ttrpc version=3 Sep 13 00:29:06.646031 systemd[1]: Started cri-containerd-be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86.scope - libcontainer container be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86. Sep 13 00:29:06.774371 systemd[1]: cri-containerd-be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86.scope: Deactivated successfully. 
Sep 13 00:29:06.781297 containerd[1597]: time="2025-09-13T00:29:06.781081261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\" id:\"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\" pid:3377 exited_at:{seconds:1757723346 nanos:780337523}" Sep 13 00:29:06.797738 containerd[1597]: time="2025-09-13T00:29:06.796373069Z" level=info msg="received exit event container_id:\"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\" id:\"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\" pid:3377 exited_at:{seconds:1757723346 nanos:780337523}" Sep 13 00:29:06.807202 containerd[1597]: time="2025-09-13T00:29:06.800896654Z" level=info msg="StartContainer for \"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\" returns successfully" Sep 13 00:29:06.916140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86-rootfs.mount: Deactivated successfully. 
Sep 13 00:29:07.035451 kubelet[2745]: E0913 00:29:07.031242 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:07.054626 containerd[1597]: time="2025-09-13T00:29:07.054074650Z" level=info msg="CreateContainer within sandbox \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:29:07.125071 containerd[1597]: time="2025-09-13T00:29:07.123673534Z" level=info msg="Container fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:29:07.173365 containerd[1597]: time="2025-09-13T00:29:07.158932990Z" level=info msg="CreateContainer within sandbox \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\"" Sep 13 00:29:07.173365 containerd[1597]: time="2025-09-13T00:29:07.164459849Z" level=info msg="StartContainer for \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\"" Sep 13 00:29:07.173365 containerd[1597]: time="2025-09-13T00:29:07.172361712Z" level=info msg="connecting to shim fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916" address="unix:///run/containerd/s/8ca48ac661a232ab2f0844691a57937897ff23a4f15878035d90b67760acfe7a" protocol=ttrpc version=3 Sep 13 00:29:07.323532 systemd[1]: Started cri-containerd-fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916.scope - libcontainer container fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916. 
Sep 13 00:29:07.467758 containerd[1597]: time="2025-09-13T00:29:07.467716335Z" level=info msg="StartContainer for \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" returns successfully" Sep 13 00:29:07.684196 containerd[1597]: time="2025-09-13T00:29:07.681034896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" id:\"902c8b2625729d77f22731b8f30f51f3b4e0c05638f10dc8b06fd68b864b536f\" pid:3443 exited_at:{seconds:1757723347 nanos:680549814}" Sep 13 00:29:07.789316 kubelet[2745]: I0913 00:29:07.789110 2745 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:29:07.977546 systemd[1]: Created slice kubepods-burstable-pod99137ef2_f663_456e_9d3a_97d0dc4563d0.slice - libcontainer container kubepods-burstable-pod99137ef2_f663_456e_9d3a_97d0dc4563d0.slice. Sep 13 00:29:07.997887 systemd[1]: Created slice kubepods-burstable-poddc4a07f0_9960_4e23_8267_53878932facd.slice - libcontainer container kubepods-burstable-poddc4a07f0_9960_4e23_8267_53878932facd.slice. 
Sep 13 00:29:08.045328 kubelet[2745]: I0913 00:29:08.044732 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tdz5\" (UniqueName: \"kubernetes.io/projected/99137ef2-f663-456e-9d3a-97d0dc4563d0-kube-api-access-6tdz5\") pod \"coredns-674b8bbfcf-tr9lp\" (UID: \"99137ef2-f663-456e-9d3a-97d0dc4563d0\") " pod="kube-system/coredns-674b8bbfcf-tr9lp" Sep 13 00:29:08.045328 kubelet[2745]: I0913 00:29:08.044769 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc4a07f0-9960-4e23-8267-53878932facd-config-volume\") pod \"coredns-674b8bbfcf-z4jps\" (UID: \"dc4a07f0-9960-4e23-8267-53878932facd\") " pod="kube-system/coredns-674b8bbfcf-z4jps" Sep 13 00:29:08.045328 kubelet[2745]: I0913 00:29:08.044788 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99137ef2-f663-456e-9d3a-97d0dc4563d0-config-volume\") pod \"coredns-674b8bbfcf-tr9lp\" (UID: \"99137ef2-f663-456e-9d3a-97d0dc4563d0\") " pod="kube-system/coredns-674b8bbfcf-tr9lp" Sep 13 00:29:08.045328 kubelet[2745]: I0913 00:29:08.044809 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdksw\" (UniqueName: \"kubernetes.io/projected/dc4a07f0-9960-4e23-8267-53878932facd-kube-api-access-xdksw\") pod \"coredns-674b8bbfcf-z4jps\" (UID: \"dc4a07f0-9960-4e23-8267-53878932facd\") " pod="kube-system/coredns-674b8bbfcf-z4jps" Sep 13 00:29:08.073304 kubelet[2745]: E0913 00:29:08.072150 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:08.294757 kubelet[2745]: E0913 00:29:08.292887 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:08.296375 containerd[1597]: time="2025-09-13T00:29:08.296281932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tr9lp,Uid:99137ef2-f663-456e-9d3a-97d0dc4563d0,Namespace:kube-system,Attempt:0,}" Sep 13 00:29:08.322763 kubelet[2745]: E0913 00:29:08.317691 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:08.322899 containerd[1597]: time="2025-09-13T00:29:08.320246091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z4jps,Uid:dc4a07f0-9960-4e23-8267-53878932facd,Namespace:kube-system,Attempt:0,}" Sep 13 00:29:09.110592 kubelet[2745]: E0913 00:29:09.099528 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:10.101308 kubelet[2745]: E0913 00:29:10.100560 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:11.226154 systemd-networkd[1496]: cilium_host: Link UP Sep 13 00:29:11.226402 systemd-networkd[1496]: cilium_net: Link UP Sep 13 00:29:11.226682 systemd-networkd[1496]: cilium_host: Gained carrier Sep 13 00:29:11.226897 systemd-networkd[1496]: cilium_net: Gained carrier Sep 13 00:29:11.339779 systemd-networkd[1496]: cilium_net: Gained IPv6LL Sep 13 00:29:11.619381 systemd-networkd[1496]: cilium_vxlan: Link UP Sep 13 00:29:11.619392 systemd-networkd[1496]: cilium_vxlan: Gained carrier Sep 13 00:29:11.727580 systemd-networkd[1496]: cilium_host: Gained IPv6LL Sep 13 00:29:11.793962 kubelet[2745]: E0913 00:29:11.792237 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:12.213286 kernel: NET: Registered PF_ALG protocol family Sep 13 00:29:13.580335 systemd-networkd[1496]: cilium_vxlan: Gained IPv6LL Sep 13 00:29:14.633624 systemd-networkd[1496]: lxc_health: Link UP Sep 13 00:29:14.674716 systemd-networkd[1496]: lxc_health: Gained carrier Sep 13 00:29:15.025333 systemd-networkd[1496]: lxc96a4e8af1f47: Link UP Sep 13 00:29:15.036560 kernel: eth0: renamed from tmpbbde9 Sep 13 00:29:15.056934 systemd-networkd[1496]: lxc96a4e8af1f47: Gained carrier Sep 13 00:29:15.062190 systemd-networkd[1496]: lxce35199f139d0: Link UP Sep 13 00:29:15.101561 kernel: eth0: renamed from tmp09132 Sep 13 00:29:15.103791 systemd-networkd[1496]: lxce35199f139d0: Gained carrier Sep 13 00:29:15.528961 kubelet[2745]: E0913 00:29:15.528530 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:15.625403 kubelet[2745]: I0913 00:29:15.625338 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f5rzl" podStartSLOduration=15.605871313 podStartE2EDuration="32.625319639s" podCreationTimestamp="2025-09-13 00:28:43 +0000 UTC" firstStartedPulling="2025-09-13 00:28:43.851969073 +0000 UTC m=+7.940704830" lastFinishedPulling="2025-09-13 00:29:00.871417379 +0000 UTC m=+24.960153156" observedRunningTime="2025-09-13 00:29:08.167645038 +0000 UTC m=+32.256380795" watchObservedRunningTime="2025-09-13 00:29:15.625319639 +0000 UTC m=+39.714055396" Sep 13 00:29:16.127562 kubelet[2745]: E0913 00:29:16.126962 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:16.363317 systemd-networkd[1496]: lxc_health: Gained IPv6LL Sep 13 00:29:16.835920 
systemd-networkd[1496]: lxc96a4e8af1f47: Gained IPv6LL Sep 13 00:29:16.960803 systemd-networkd[1496]: lxce35199f139d0: Gained IPv6LL Sep 13 00:29:23.301584 containerd[1597]: time="2025-09-13T00:29:23.299828114Z" level=info msg="connecting to shim bbde945a9c17ac27519033efa7bec33d2016e9ff39ba832a9af741245a97ad54" address="unix:///run/containerd/s/d6c4616015eaaae71c7e64a0e0f85f867572d38871b18cb8b2a68e063fe7ba24" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:29:23.306536 containerd[1597]: time="2025-09-13T00:29:23.305579520Z" level=info msg="connecting to shim 091327ce6b6b933b366e52220976d1c7d6f907511a7babbb5bd0e19b81bc2a74" address="unix:///run/containerd/s/73ac8c1897f5783b2dbf5ca6113b684061760a57bac66a4c8d12b2e3257d4b6c" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:29:23.420881 systemd[1]: Started cri-containerd-091327ce6b6b933b366e52220976d1c7d6f907511a7babbb5bd0e19b81bc2a74.scope - libcontainer container 091327ce6b6b933b366e52220976d1c7d6f907511a7babbb5bd0e19b81bc2a74. Sep 13 00:29:23.435679 systemd[1]: Started cri-containerd-bbde945a9c17ac27519033efa7bec33d2016e9ff39ba832a9af741245a97ad54.scope - libcontainer container bbde945a9c17ac27519033efa7bec33d2016e9ff39ba832a9af741245a97ad54. 
Sep 13 00:29:23.487523 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:29:23.499635 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:29:23.554929 containerd[1597]: time="2025-09-13T00:29:23.554768690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z4jps,Uid:dc4a07f0-9960-4e23-8267-53878932facd,Namespace:kube-system,Attempt:0,} returns sandbox id \"091327ce6b6b933b366e52220976d1c7d6f907511a7babbb5bd0e19b81bc2a74\"" Sep 13 00:29:23.606407 kubelet[2745]: E0913 00:29:23.606321 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:23.616960 containerd[1597]: time="2025-09-13T00:29:23.613812593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tr9lp,Uid:99137ef2-f663-456e-9d3a-97d0dc4563d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbde945a9c17ac27519033efa7bec33d2016e9ff39ba832a9af741245a97ad54\"" Sep 13 00:29:23.617127 kubelet[2745]: E0913 00:29:23.614469 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:29:23.635560 containerd[1597]: time="2025-09-13T00:29:23.635180332Z" level=info msg="CreateContainer within sandbox \"091327ce6b6b933b366e52220976d1c7d6f907511a7babbb5bd0e19b81bc2a74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:29:23.662937 containerd[1597]: time="2025-09-13T00:29:23.661220143Z" level=info msg="CreateContainer within sandbox \"bbde945a9c17ac27519033efa7bec33d2016e9ff39ba832a9af741245a97ad54\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:29:23.707298 containerd[1597]: 
time="2025-09-13T00:29:23.707155759Z" level=info msg="Container 6e98121571d619078134508032c2490624e9ef55c3f6de28c87499cf6e2c4eda: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:29:24.277457 containerd[1597]: time="2025-09-13T00:29:24.277183129Z" level=info msg="CreateContainer within sandbox \"091327ce6b6b933b366e52220976d1c7d6f907511a7babbb5bd0e19b81bc2a74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e98121571d619078134508032c2490624e9ef55c3f6de28c87499cf6e2c4eda\"" Sep 13 00:29:24.289824 containerd[1597]: time="2025-09-13T00:29:24.280314156Z" level=info msg="StartContainer for \"6e98121571d619078134508032c2490624e9ef55c3f6de28c87499cf6e2c4eda\"" Sep 13 00:29:24.334470 containerd[1597]: time="2025-09-13T00:29:24.333705163Z" level=info msg="connecting to shim 6e98121571d619078134508032c2490624e9ef55c3f6de28c87499cf6e2c4eda" address="unix:///run/containerd/s/73ac8c1897f5783b2dbf5ca6113b684061760a57bac66a4c8d12b2e3257d4b6c" protocol=ttrpc version=3 Sep 13 00:29:24.409688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073510770.mount: Deactivated successfully. Sep 13 00:29:24.427405 containerd[1597]: time="2025-09-13T00:29:24.426664192Z" level=info msg="Container b7322df6f8d665a35e0c277087a6ffb11665dce6a08053cd31278fa548bdd27c: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:29:24.455743 systemd[1]: Started cri-containerd-6e98121571d619078134508032c2490624e9ef55c3f6de28c87499cf6e2c4eda.scope - libcontainer container 6e98121571d619078134508032c2490624e9ef55c3f6de28c87499cf6e2c4eda. 
Sep 13 00:29:24.614119 containerd[1597]: time="2025-09-13T00:29:24.611326159Z" level=info msg="StartContainer for \"6e98121571d619078134508032c2490624e9ef55c3f6de28c87499cf6e2c4eda\" returns successfully" Sep 13 00:29:24.623015 containerd[1597]: time="2025-09-13T00:29:24.622616231Z" level=info msg="CreateContainer within sandbox \"bbde945a9c17ac27519033efa7bec33d2016e9ff39ba832a9af741245a97ad54\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b7322df6f8d665a35e0c277087a6ffb11665dce6a08053cd31278fa548bdd27c\"" Sep 13 00:29:24.625075 containerd[1597]: time="2025-09-13T00:29:24.623735873Z" level=info msg="StartContainer for \"b7322df6f8d665a35e0c277087a6ffb11665dce6a08053cd31278fa548bdd27c\"" Sep 13 00:29:24.625075 containerd[1597]: time="2025-09-13T00:29:24.624643196Z" level=info msg="connecting to shim b7322df6f8d665a35e0c277087a6ffb11665dce6a08053cd31278fa548bdd27c" address="unix:///run/containerd/s/d6c4616015eaaae71c7e64a0e0f85f867572d38871b18cb8b2a68e063fe7ba24" protocol=ttrpc version=3 Sep 13 00:29:24.726561 systemd[1]: Started cri-containerd-b7322df6f8d665a35e0c277087a6ffb11665dce6a08053cd31278fa548bdd27c.scope - libcontainer container b7322df6f8d665a35e0c277087a6ffb11665dce6a08053cd31278fa548bdd27c. Sep 13 00:29:24.882310 containerd[1597]: time="2025-09-13T00:29:24.878210652Z" level=info msg="StartContainer for \"b7322df6f8d665a35e0c277087a6ffb11665dce6a08053cd31278fa548bdd27c\" returns successfully" Sep 13 00:29:25.266044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount590745056.mount: Deactivated successfully. 
Sep 13 00:29:25.285593 kubelet[2745]: E0913 00:29:25.284128 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:29:25.285593 kubelet[2745]: E0913 00:29:25.285381 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:29:25.351507 kubelet[2745]: I0913 00:29:25.351324 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tr9lp" podStartSLOduration=42.351304287 podStartE2EDuration="42.351304287s" podCreationTimestamp="2025-09-13 00:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:29:25.317051597 +0000 UTC m=+49.405787354" watchObservedRunningTime="2025-09-13 00:29:25.351304287 +0000 UTC m=+49.440040044"
Sep 13 00:29:25.351507 kubelet[2745]: I0913 00:29:25.351420 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z4jps" podStartSLOduration=42.351416649 podStartE2EDuration="42.351416649s" podCreationTimestamp="2025-09-13 00:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:29:25.348327611 +0000 UTC m=+49.437063368" watchObservedRunningTime="2025-09-13 00:29:25.351416649 +0000 UTC m=+49.440152406"
Sep 13 00:29:26.270161 kubelet[2745]: E0913 00:29:26.269708 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:29:26.270706 kubelet[2745]: E0913 00:29:26.270683 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:29:26.854850 systemd[1]: Started sshd@7-10.0.0.98:22-10.0.0.1:43896.service - OpenSSH per-connection server daemon (10.0.0.1:43896).
Sep 13 00:29:27.043921 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 43896 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:29:27.044600 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:29:27.072829 systemd-logind[1570]: New session 8 of user core.
Sep 13 00:29:27.089850 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 13 00:29:27.272101 kubelet[2745]: E0913 00:29:27.271451 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:29:27.291811 kubelet[2745]: E0913 00:29:27.286425 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:29:27.859067 sshd[4096]: Connection closed by 10.0.0.1 port 43896
Sep 13 00:29:27.860565 sshd-session[4094]: pam_unix(sshd:session): session closed for user core
Sep 13 00:29:27.875349 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:29:27.876413 systemd[1]: sshd@7-10.0.0.98:22-10.0.0.1:43896.service: Deactivated successfully.
Sep 13 00:29:27.886198 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:29:27.900811 systemd-logind[1570]: Removed session 8.
Sep 13 00:29:28.278349 kubelet[2745]: E0913 00:29:28.277889 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:29:28.279367 kubelet[2745]: E0913 00:29:28.279264 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:29:32.904396 systemd[1]: Started sshd@8-10.0.0.98:22-10.0.0.1:39646.service - OpenSSH per-connection server daemon (10.0.0.1:39646).
Sep 13 00:29:33.033800 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 39646 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:29:33.037062 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:29:33.057621 systemd-logind[1570]: New session 9 of user core.
Sep 13 00:29:33.070833 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 13 00:29:33.376605 sshd[4115]: Connection closed by 10.0.0.1 port 39646
Sep 13 00:29:33.378755 sshd-session[4113]: pam_unix(sshd:session): session closed for user core
Sep 13 00:29:33.389262 systemd[1]: sshd@8-10.0.0.98:22-10.0.0.1:39646.service: Deactivated successfully.
Sep 13 00:29:33.394801 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:29:33.402368 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:29:33.406857 systemd-logind[1570]: Removed session 9.
Sep 13 00:29:38.398426 systemd[1]: Started sshd@9-10.0.0.98:22-10.0.0.1:39654.service - OpenSSH per-connection server daemon (10.0.0.1:39654).
Sep 13 00:29:38.448419 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 39654 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:29:38.449819 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:29:38.454849 systemd-logind[1570]: New session 10 of user core.
Sep 13 00:29:38.466645 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 00:29:38.585765 sshd[4133]: Connection closed by 10.0.0.1 port 39654
Sep 13 00:29:38.586119 sshd-session[4131]: pam_unix(sshd:session): session closed for user core
Sep 13 00:29:38.591219 systemd[1]: sshd@9-10.0.0.98:22-10.0.0.1:39654.service: Deactivated successfully.
Sep 13 00:29:38.593678 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:29:38.594658 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:29:38.596439 systemd-logind[1570]: Removed session 10.
Sep 13 00:29:43.613329 systemd[1]: Started sshd@10-10.0.0.98:22-10.0.0.1:38356.service - OpenSSH per-connection server daemon (10.0.0.1:38356).
Sep 13 00:29:43.744047 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 38356 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:29:43.747131 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:29:43.762161 systemd-logind[1570]: New session 11 of user core.
Sep 13 00:29:43.777621 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:29:44.105504 sshd[4150]: Connection closed by 10.0.0.1 port 38356
Sep 13 00:29:44.104904 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
Sep 13 00:29:44.116554 systemd[1]: sshd@10-10.0.0.98:22-10.0.0.1:38356.service: Deactivated successfully.
Sep 13 00:29:44.126339 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:29:44.133363 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:29:44.144631 systemd-logind[1570]: Removed session 11.
Sep 13 00:29:49.129555 systemd[1]: Started sshd@11-10.0.0.98:22-10.0.0.1:38366.service - OpenSSH per-connection server daemon (10.0.0.1:38366).
Sep 13 00:29:49.176387 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 38366 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:29:49.178210 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:29:49.184462 systemd-logind[1570]: New session 12 of user core.
Sep 13 00:29:49.195624 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:29:49.346292 sshd[4168]: Connection closed by 10.0.0.1 port 38366
Sep 13 00:29:49.346687 sshd-session[4166]: pam_unix(sshd:session): session closed for user core
Sep 13 00:29:49.352015 systemd[1]: sshd@11-10.0.0.98:22-10.0.0.1:38366.service: Deactivated successfully.
Sep 13 00:29:49.354739 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:29:49.356028 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:29:49.357798 systemd-logind[1570]: Removed session 12.
Sep 13 00:29:54.360371 systemd[1]: Started sshd@12-10.0.0.98:22-10.0.0.1:39010.service - OpenSSH per-connection server daemon (10.0.0.1:39010).
Sep 13 00:29:54.412465 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 39010 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:29:54.414326 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:29:54.418771 systemd-logind[1570]: New session 13 of user core.
Sep 13 00:29:54.432611 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 00:29:54.678957 sshd[4184]: Connection closed by 10.0.0.1 port 39010
Sep 13 00:29:54.679286 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
Sep 13 00:29:54.684470 systemd[1]: sshd@12-10.0.0.98:22-10.0.0.1:39010.service: Deactivated successfully.
Sep 13 00:29:54.686907 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:29:54.687758 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:29:54.689164 systemd-logind[1570]: Removed session 13.
Sep 13 00:29:59.020791 kubelet[2745]: E0913 00:29:59.020743 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:29:59.711776 systemd[1]: Started sshd@13-10.0.0.98:22-10.0.0.1:39016.service - OpenSSH per-connection server daemon (10.0.0.1:39016).
Sep 13 00:29:59.791414 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 39016 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:29:59.793429 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:29:59.803023 systemd-logind[1570]: New session 14 of user core.
Sep 13 00:29:59.823583 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 00:30:00.003461 sshd[4200]: Connection closed by 10.0.0.1 port 39016
Sep 13 00:30:00.006267 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:00.017919 systemd[1]: sshd@13-10.0.0.98:22-10.0.0.1:39016.service: Deactivated successfully.
Sep 13 00:30:00.021458 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:30:00.024705 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:30:00.030989 systemd[1]: Started sshd@14-10.0.0.98:22-10.0.0.1:59788.service - OpenSSH per-connection server daemon (10.0.0.1:59788).
Sep 13 00:30:00.034263 systemd-logind[1570]: Removed session 14.
Sep 13 00:30:00.142169 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 59788 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:00.149153 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:00.179003 systemd-logind[1570]: New session 15 of user core.
Sep 13 00:30:00.188858 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:30:00.646025 sshd[4216]: Connection closed by 10.0.0.1 port 59788
Sep 13 00:30:00.646651 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:00.663293 systemd[1]: sshd@14-10.0.0.98:22-10.0.0.1:59788.service: Deactivated successfully.
Sep 13 00:30:00.666697 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:30:00.668762 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:30:00.674383 systemd-logind[1570]: Removed session 15.
Sep 13 00:30:00.676378 systemd[1]: Started sshd@15-10.0.0.98:22-10.0.0.1:59792.service - OpenSSH per-connection server daemon (10.0.0.1:59792).
Sep 13 00:30:00.781499 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 59792 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:00.785197 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:00.818935 systemd-logind[1570]: New session 16 of user core.
Sep 13 00:30:00.840397 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:30:01.173637 sshd[4231]: Connection closed by 10.0.0.1 port 59792
Sep 13 00:30:01.168107 sshd-session[4229]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:01.180666 systemd[1]: sshd@15-10.0.0.98:22-10.0.0.1:59792.service: Deactivated successfully.
Sep 13 00:30:01.186437 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:30:01.195943 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:30:01.200413 systemd-logind[1570]: Removed session 16.
Sep 13 00:30:06.188499 systemd[1]: Started sshd@16-10.0.0.98:22-10.0.0.1:59794.service - OpenSSH per-connection server daemon (10.0.0.1:59794).
Sep 13 00:30:06.236929 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 59794 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:06.238603 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:06.243381 systemd-logind[1570]: New session 17 of user core.
Sep 13 00:30:06.254724 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:30:06.376159 sshd[4246]: Connection closed by 10.0.0.1 port 59794
Sep 13 00:30:06.376541 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:06.381823 systemd[1]: sshd@16-10.0.0.98:22-10.0.0.1:59794.service: Deactivated successfully.
Sep 13 00:30:06.384220 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:30:06.385157 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:30:06.386371 systemd-logind[1570]: Removed session 17.
Sep 13 00:30:07.018390 kubelet[2745]: E0913 00:30:07.018330 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:30:09.019510 kubelet[2745]: E0913 00:30:09.019384 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:30:11.393920 systemd[1]: Started sshd@17-10.0.0.98:22-10.0.0.1:46112.service - OpenSSH per-connection server daemon (10.0.0.1:46112).
Sep 13 00:30:11.451524 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 46112 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:11.453278 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:11.458278 systemd-logind[1570]: New session 18 of user core.
Sep 13 00:30:11.466773 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:30:11.580793 sshd[4262]: Connection closed by 10.0.0.1 port 46112
Sep 13 00:30:11.581151 sshd-session[4260]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:11.585157 systemd[1]: sshd@17-10.0.0.98:22-10.0.0.1:46112.service: Deactivated successfully.
Sep 13 00:30:11.587225 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:30:11.588025 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:30:11.589274 systemd-logind[1570]: Removed session 18.
Sep 13 00:30:15.018508 kubelet[2745]: E0913 00:30:15.018421 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:30:16.022025 kubelet[2745]: E0913 00:30:16.021707 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:30:16.596942 systemd[1]: Started sshd@18-10.0.0.98:22-10.0.0.1:46120.service - OpenSSH per-connection server daemon (10.0.0.1:46120).
Sep 13 00:30:16.658548 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 46120 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:16.660405 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:16.665125 systemd-logind[1570]: New session 19 of user core.
Sep 13 00:30:16.674725 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:30:16.789820 sshd[4280]: Connection closed by 10.0.0.1 port 46120
Sep 13 00:30:16.790567 sshd-session[4278]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:16.803473 systemd[1]: sshd@18-10.0.0.98:22-10.0.0.1:46120.service: Deactivated successfully.
Sep 13 00:30:16.805612 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:30:16.808218 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:30:16.810684 systemd[1]: Started sshd@19-10.0.0.98:22-10.0.0.1:46130.service - OpenSSH per-connection server daemon (10.0.0.1:46130).
Sep 13 00:30:16.811864 systemd-logind[1570]: Removed session 19.
Sep 13 00:30:16.867543 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 46130 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:16.869264 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:16.874142 systemd-logind[1570]: New session 20 of user core.
Sep 13 00:30:16.889654 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:30:17.200357 sshd[4295]: Connection closed by 10.0.0.1 port 46130
Sep 13 00:30:17.200677 sshd-session[4293]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:17.216290 systemd[1]: sshd@19-10.0.0.98:22-10.0.0.1:46130.service: Deactivated successfully.
Sep 13 00:30:17.218603 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:30:17.219552 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:30:17.223209 systemd[1]: Started sshd@20-10.0.0.98:22-10.0.0.1:46132.service - OpenSSH per-connection server daemon (10.0.0.1:46132).
Sep 13 00:30:17.224032 systemd-logind[1570]: Removed session 20.
Sep 13 00:30:17.280430 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 46132 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:17.282547 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:17.288412 systemd-logind[1570]: New session 21 of user core.
Sep 13 00:30:17.299683 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:30:20.024362 kubelet[2745]: E0913 00:30:20.020966 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:30:20.813873 sshd[4308]: Connection closed by 10.0.0.1 port 46132
Sep 13 00:30:20.814224 sshd-session[4306]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:20.825796 systemd[1]: sshd@20-10.0.0.98:22-10.0.0.1:46132.service: Deactivated successfully.
Sep 13 00:30:20.828247 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:30:20.829297 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:30:20.836978 systemd[1]: Started sshd@21-10.0.0.98:22-10.0.0.1:39352.service - OpenSSH per-connection server daemon (10.0.0.1:39352).
Sep 13 00:30:20.837952 systemd-logind[1570]: Removed session 21.
Sep 13 00:30:20.887978 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 39352 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:20.889613 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:20.895896 systemd-logind[1570]: New session 22 of user core.
Sep 13 00:30:20.909643 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 00:30:22.209780 sshd[4329]: Connection closed by 10.0.0.1 port 39352
Sep 13 00:30:22.217916 sshd-session[4327]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:22.229979 systemd[1]: sshd@21-10.0.0.98:22-10.0.0.1:39352.service: Deactivated successfully.
Sep 13 00:30:22.233940 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:30:22.241072 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:30:22.247676 systemd[1]: Started sshd@22-10.0.0.98:22-10.0.0.1:39360.service - OpenSSH per-connection server daemon (10.0.0.1:39360).
Sep 13 00:30:22.249595 systemd-logind[1570]: Removed session 22.
Sep 13 00:30:22.297814 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 39360 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:22.299679 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:22.307106 systemd-logind[1570]: New session 23 of user core.
Sep 13 00:30:22.317644 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 00:30:22.653191 sshd[4342]: Connection closed by 10.0.0.1 port 39360
Sep 13 00:30:22.653525 sshd-session[4340]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:22.656698 systemd[1]: sshd@22-10.0.0.98:22-10.0.0.1:39360.service: Deactivated successfully.
Sep 13 00:30:22.658631 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:30:22.661788 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:30:22.663233 systemd-logind[1570]: Removed session 23.
Sep 13 00:30:27.667026 systemd[1]: Started sshd@23-10.0.0.98:22-10.0.0.1:39370.service - OpenSSH per-connection server daemon (10.0.0.1:39370).
Sep 13 00:30:27.727130 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 39370 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:27.729060 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:27.734052 systemd-logind[1570]: New session 24 of user core.
Sep 13 00:30:27.744657 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 00:30:27.870721 sshd[4358]: Connection closed by 10.0.0.1 port 39370
Sep 13 00:30:27.871087 sshd-session[4356]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:27.875919 systemd[1]: sshd@23-10.0.0.98:22-10.0.0.1:39370.service: Deactivated successfully.
Sep 13 00:30:27.878059 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:30:27.878911 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:30:27.880124 systemd-logind[1570]: Removed session 24.
Sep 13 00:30:32.889061 systemd[1]: Started sshd@24-10.0.0.98:22-10.0.0.1:41280.service - OpenSSH per-connection server daemon (10.0.0.1:41280).
Sep 13 00:30:32.953938 sshd[4371]: Accepted publickey for core from 10.0.0.1 port 41280 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:32.955820 sshd-session[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:32.961057 systemd-logind[1570]: New session 25 of user core.
Sep 13 00:30:32.969697 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 00:30:33.085881 sshd[4373]: Connection closed by 10.0.0.1 port 41280
Sep 13 00:30:33.086290 sshd-session[4371]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:33.091830 systemd[1]: sshd@24-10.0.0.98:22-10.0.0.1:41280.service: Deactivated successfully.
Sep 13 00:30:33.095072 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:30:33.096166 systemd-logind[1570]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:30:33.097955 systemd-logind[1570]: Removed session 25.
Sep 13 00:30:36.021783 kubelet[2745]: E0913 00:30:36.021734 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:30:38.119956 systemd[1]: Started sshd@25-10.0.0.98:22-10.0.0.1:41294.service - OpenSSH per-connection server daemon (10.0.0.1:41294).
Sep 13 00:30:38.195756 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 41294 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:38.198706 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:38.205914 systemd-logind[1570]: New session 26 of user core.
Sep 13 00:30:38.217908 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 00:30:38.362658 sshd[4392]: Connection closed by 10.0.0.1 port 41294
Sep 13 00:30:38.363226 sshd-session[4390]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:38.368424 systemd[1]: sshd@25-10.0.0.98:22-10.0.0.1:41294.service: Deactivated successfully.
Sep 13 00:30:38.371288 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:30:38.373633 systemd-logind[1570]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:30:38.376440 systemd-logind[1570]: Removed session 26.
Sep 13 00:30:43.375874 systemd[1]: Started sshd@26-10.0.0.98:22-10.0.0.1:60212.service - OpenSSH per-connection server daemon (10.0.0.1:60212).
Sep 13 00:30:43.422640 sshd[4405]: Accepted publickey for core from 10.0.0.1 port 60212 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:43.424207 sshd-session[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:43.429076 systemd-logind[1570]: New session 27 of user core.
Sep 13 00:30:43.439650 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 13 00:30:43.551793 sshd[4408]: Connection closed by 10.0.0.1 port 60212
Sep 13 00:30:43.552168 sshd-session[4405]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:43.561392 systemd[1]: sshd@26-10.0.0.98:22-10.0.0.1:60212.service: Deactivated successfully.
Sep 13 00:30:43.563325 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:30:43.564222 systemd-logind[1570]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:30:43.567245 systemd[1]: Started sshd@27-10.0.0.98:22-10.0.0.1:60222.service - OpenSSH per-connection server daemon (10.0.0.1:60222).
Sep 13 00:30:43.567927 systemd-logind[1570]: Removed session 27.
Sep 13 00:30:43.613904 sshd[4422]: Accepted publickey for core from 10.0.0.1 port 60222 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c
Sep 13 00:30:43.615637 sshd-session[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:43.621068 systemd-logind[1570]: New session 28 of user core.
Sep 13 00:30:43.635660 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 13 00:30:45.044229 containerd[1597]: time="2025-09-13T00:30:45.044168540Z" level=info msg="StopContainer for \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\" with timeout 30 (s)"
Sep 13 00:30:45.060752 containerd[1597]: time="2025-09-13T00:30:45.060667536Z" level=info msg="Stop container \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\" with signal terminated"
Sep 13 00:30:45.074383 systemd[1]: cri-containerd-c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662.scope: Deactivated successfully.
Sep 13 00:30:45.079064 containerd[1597]: time="2025-09-13T00:30:45.078918487Z" level=info msg="received exit event container_id:\"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\" id:\"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\" pid:3300 exited_at:{seconds:1757723445 nanos:77544284}"
Sep 13 00:30:45.079064 containerd[1597]: time="2025-09-13T00:30:45.078954274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\" id:\"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\" pid:3300 exited_at:{seconds:1757723445 nanos:77544284}"
Sep 13 00:30:45.097408 containerd[1597]: time="2025-09-13T00:30:45.095795667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" id:\"8baaf22b0b50340faa07cfc0d7033dc0671ac7d09bc8d84169c95ab44f4f3de4\" pid:4448 exited_at:{seconds:1757723445 nanos:94772475}"
Sep 13 00:30:45.098428 containerd[1597]: time="2025-09-13T00:30:45.098385413Z" level=info msg="StopContainer for \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" with timeout 2 (s)"
Sep 13 00:30:45.099183 containerd[1597]: time="2025-09-13T00:30:45.098956672Z" level=info msg="Stop container \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" with signal terminated"
Sep 13 00:30:45.099183 containerd[1597]: time="2025-09-13T00:30:45.099062461Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:30:45.107872 systemd-networkd[1496]: lxc_health: Link DOWN
Sep 13 00:30:45.107886 systemd-networkd[1496]: lxc_health: Lost carrier
Sep 13 00:30:45.121072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662-rootfs.mount: Deactivated successfully.
Sep 13 00:30:45.133144 systemd[1]: cri-containerd-fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916.scope: Deactivated successfully.
Sep 13 00:30:45.133523 systemd[1]: cri-containerd-fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916.scope: Consumed 10.568s CPU time, 125.5M memory peak, 628K read from disk, 13.3M written to disk.
Sep 13 00:30:45.134531 containerd[1597]: time="2025-09-13T00:30:45.134443199Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" id:\"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" pid:3413 exited_at:{seconds:1757723445 nanos:134067579}"
Sep 13 00:30:45.134741 containerd[1597]: time="2025-09-13T00:30:45.134596156Z" level=info msg="received exit event container_id:\"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" id:\"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" pid:3413 exited_at:{seconds:1757723445 nanos:134067579}"
Sep 13 00:30:45.139275 containerd[1597]: time="2025-09-13T00:30:45.139244869Z" level=info msg="StopContainer for \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\" returns successfully"
Sep 13 00:30:45.140001 containerd[1597]: time="2025-09-13T00:30:45.139967453Z" level=info msg="StopPodSandbox for \"46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725\""
Sep 13 00:30:45.140070 containerd[1597]: time="2025-09-13T00:30:45.140051462Z" level=info msg="Container to stop \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:30:45.149153 systemd[1]: cri-containerd-46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725.scope: Deactivated successfully.
Sep 13 00:30:45.153099 containerd[1597]: time="2025-09-13T00:30:45.152978115Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725\" id:\"46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725\" pid:2953 exit_status:137 exited_at:{seconds:1757723445 nanos:151662092}"
Sep 13 00:30:45.163825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916-rootfs.mount: Deactivated successfully.
Sep 13 00:30:45.191125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725-rootfs.mount: Deactivated successfully.
Sep 13 00:30:45.204422 containerd[1597]: time="2025-09-13T00:30:45.203994969Z" level=info msg="StopContainer for \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" returns successfully"
Sep 13 00:30:45.205053 containerd[1597]: time="2025-09-13T00:30:45.204994316Z" level=info msg="shim disconnected" id=46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725 namespace=k8s.io
Sep 13 00:30:45.205053 containerd[1597]: time="2025-09-13T00:30:45.205027548Z" level=warning msg="cleaning up after shim disconnected" id=46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725 namespace=k8s.io
Sep 13 00:30:45.225825 containerd[1597]: time="2025-09-13T00:30:45.205036134Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:30:45.225825 containerd[1597]: time="2025-09-13T00:30:45.205085167Z" level=info msg="StopPodSandbox for \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\""
Sep 13 00:30:45.226724 containerd[1597]: time="2025-09-13T00:30:45.225906569Z" level=info msg="Container to stop \"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:30:45.226724 containerd[1597]: time="2025-09-13T00:30:45.225920456Z" level=info msg="Container to stop \"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:30:45.226724 containerd[1597]: time="2025-09-13T00:30:45.225932017Z" level=info msg="Container to stop \"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:30:45.226724 containerd[1597]: time="2025-09-13T00:30:45.225943830Z" level=info msg="Container to stop \"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:30:45.226724 containerd[1597]: time="2025-09-13T00:30:45.225952997Z" level=info msg="Container to stop \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:30:45.233244 systemd[1]: cri-containerd-b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca.scope: Deactivated successfully.
Sep 13 00:30:45.253386 containerd[1597]: time="2025-09-13T00:30:45.253332715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" id:\"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" pid:2959 exit_status:137 exited_at:{seconds:1757723445 nanos:233847825}"
Sep 13 00:30:45.255897 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725-shm.mount: Deactivated successfully.
Sep 13 00:30:45.260185 containerd[1597]: time="2025-09-13T00:30:45.259719337Z" level=info msg="received exit event sandbox_id:\"46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725\" exit_status:137 exited_at:{seconds:1757723445 nanos:151662092}"
Sep 13 00:30:45.266745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca-rootfs.mount: Deactivated successfully.
Sep 13 00:30:45.273327 containerd[1597]: time="2025-09-13T00:30:45.273285348Z" level=info msg="TearDown network for sandbox \"46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725\" successfully"
Sep 13 00:30:45.273497 containerd[1597]: time="2025-09-13T00:30:45.273330723Z" level=info msg="StopPodSandbox for \"46c16915120af9a151e1303d0e9459f83a58902ddf35391a6543f0267b3f1725\" returns successfully"
Sep 13 00:30:45.278000 containerd[1597]: time="2025-09-13T00:30:45.277950100Z" level=info msg="received exit event sandbox_id:\"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" exit_status:137 exited_at:{seconds:1757723445 nanos:233847825}"
Sep 13 00:30:45.280619 containerd[1597]: time="2025-09-13T00:30:45.277957083Z" level=info msg="shim disconnected" id=b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca namespace=k8s.io
Sep 13 00:30:45.280619 containerd[1597]: time="2025-09-13T00:30:45.278645773Z" level=warning msg="cleaning up after shim disconnected" id=b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca namespace=k8s.io
Sep 13 00:30:45.280619 containerd[1597]: time="2025-09-13T00:30:45.278655221Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:30:45.280619 containerd[1597]: time="2025-09-13T00:30:45.278932294Z" level=info msg="TearDown network for sandbox \"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" successfully"
Sep 13 00:30:45.280619 containerd[1597]: time="2025-09-13T00:30:45.278948525Z" level=info msg="StopPodSandbox for
\"b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca\" returns successfully" Sep 13 00:30:45.379547 kubelet[2745]: I0913 00:30:45.379386 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-host-proc-sys-net\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.379547 kubelet[2745]: I0913 00:30:45.379443 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-hostproc\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.379547 kubelet[2745]: I0913 00:30:45.379463 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-lib-modules\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.379547 kubelet[2745]: I0913 00:30:45.379515 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e87d86ae-6792-443b-88f5-38bdb041e7b4-cilium-config-path\") pod \"e87d86ae-6792-443b-88f5-38bdb041e7b4\" (UID: \"e87d86ae-6792-443b-88f5-38bdb041e7b4\") " Sep 13 00:30:45.379547 kubelet[2745]: I0913 00:30:45.379536 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-bpf-maps\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.380118 kubelet[2745]: I0913 00:30:45.379537 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:30:45.380118 kubelet[2745]: I0913 00:30:45.379586 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:30:45.380118 kubelet[2745]: I0913 00:30:45.379553 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cilium-run\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.380118 kubelet[2745]: I0913 00:30:45.379607 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:30:45.380118 kubelet[2745]: I0913 00:30:45.379597 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-hostproc" (OuterVolumeSpecName: "hostproc") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:30:45.380270 kubelet[2745]: I0913 00:30:45.379631 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d87643d-2ddf-49a4-afdb-f8d00e83762f-clustermesh-secrets\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.380270 kubelet[2745]: I0913 00:30:45.379743 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbw4m\" (UniqueName: \"kubernetes.io/projected/e87d86ae-6792-443b-88f5-38bdb041e7b4-kube-api-access-jbw4m\") pod \"e87d86ae-6792-443b-88f5-38bdb041e7b4\" (UID: \"e87d86ae-6792-443b-88f5-38bdb041e7b4\") " Sep 13 00:30:45.380270 kubelet[2745]: I0913 00:30:45.379774 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cilium-cgroup\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.380270 kubelet[2745]: I0913 00:30:45.379796 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cni-path\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.380270 kubelet[2745]: I0913 00:30:45.379829 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-host-proc-sys-kernel\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.380270 kubelet[2745]: I0913 00:30:45.379847 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/9d87643d-2ddf-49a4-afdb-f8d00e83762f-hubble-tls\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.380436 kubelet[2745]: I0913 00:30:45.379865 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rwcm\" (UniqueName: \"kubernetes.io/projected/9d87643d-2ddf-49a4-afdb-f8d00e83762f-kube-api-access-5rwcm\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.380436 kubelet[2745]: I0913 00:30:45.379902 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cilium-config-path\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.380436 kubelet[2745]: I0913 00:30:45.379921 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-xtables-lock\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.380436 kubelet[2745]: I0913 00:30:45.379938 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-etc-cni-netd\") pod \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\" (UID: \"9d87643d-2ddf-49a4-afdb-f8d00e83762f\") " Sep 13 00:30:45.380436 kubelet[2745]: I0913 00:30:45.380011 2745 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.380436 kubelet[2745]: I0913 00:30:45.380022 2745 reconciler_common.go:299] "Volume detached for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.380436 kubelet[2745]: I0913 00:30:45.380032 2745 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.380668 kubelet[2745]: I0913 00:30:45.380057 2745 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.410913 kubelet[2745]: I0913 00:30:45.379916 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cni-path" (OuterVolumeSpecName: "cni-path") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:30:45.410913 kubelet[2745]: I0913 00:30:45.380080 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:30:45.410913 kubelet[2745]: I0913 00:30:45.380092 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:30:45.411111 kubelet[2745]: I0913 00:30:45.406441 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:30:45.411111 kubelet[2745]: I0913 00:30:45.406449 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:30:45.411111 kubelet[2745]: I0913 00:30:45.410012 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e87d86ae-6792-443b-88f5-38bdb041e7b4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e87d86ae-6792-443b-88f5-38bdb041e7b4" (UID: "e87d86ae-6792-443b-88f5-38bdb041e7b4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:30:45.411111 kubelet[2745]: I0913 00:30:45.410667 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d87643d-2ddf-49a4-afdb-f8d00e83762f-kube-api-access-5rwcm" (OuterVolumeSpecName: "kube-api-access-5rwcm") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "kube-api-access-5rwcm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:30:45.411111 kubelet[2745]: I0913 00:30:45.410837 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d87643d-2ddf-49a4-afdb-f8d00e83762f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:30:45.411261 kubelet[2745]: I0913 00:30:45.410873 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:30:45.411261 kubelet[2745]: I0913 00:30:45.411102 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d87643d-2ddf-49a4-afdb-f8d00e83762f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:30:45.413289 kubelet[2745]: I0913 00:30:45.413250 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e87d86ae-6792-443b-88f5-38bdb041e7b4-kube-api-access-jbw4m" (OuterVolumeSpecName: "kube-api-access-jbw4m") pod "e87d86ae-6792-443b-88f5-38bdb041e7b4" (UID: "e87d86ae-6792-443b-88f5-38bdb041e7b4"). InnerVolumeSpecName "kube-api-access-jbw4m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:30:45.414642 kubelet[2745]: I0913 00:30:45.414394 2745 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9d87643d-2ddf-49a4-afdb-f8d00e83762f" (UID: "9d87643d-2ddf-49a4-afdb-f8d00e83762f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:30:45.481026 kubelet[2745]: I0913 00:30:45.480962 2745 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.481026 kubelet[2745]: I0913 00:30:45.481011 2745 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.481026 kubelet[2745]: I0913 00:30:45.481022 2745 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.481026 kubelet[2745]: I0913 00:30:45.481034 2745 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d87643d-2ddf-49a4-afdb-f8d00e83762f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.481253 kubelet[2745]: I0913 00:30:45.481044 2745 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5rwcm\" (UniqueName: \"kubernetes.io/projected/9d87643d-2ddf-49a4-afdb-f8d00e83762f-kube-api-access-5rwcm\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.481253 kubelet[2745]: I0913 00:30:45.481056 2745 reconciler_common.go:299] "Volume detached for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d87643d-2ddf-49a4-afdb-f8d00e83762f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.481253 kubelet[2745]: I0913 00:30:45.481065 2745 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.481253 kubelet[2745]: I0913 00:30:45.481075 2745 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.481253 kubelet[2745]: I0913 00:30:45.481087 2745 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e87d86ae-6792-443b-88f5-38bdb041e7b4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.481253 kubelet[2745]: I0913 00:30:45.481097 2745 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d87643d-2ddf-49a4-afdb-f8d00e83762f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.481253 kubelet[2745]: I0913 00:30:45.481106 2745 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d87643d-2ddf-49a4-afdb-f8d00e83762f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.481253 kubelet[2745]: I0913 00:30:45.481116 2745 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jbw4m\" (UniqueName: \"kubernetes.io/projected/e87d86ae-6792-443b-88f5-38bdb041e7b4-kube-api-access-jbw4m\") on node \"localhost\" DevicePath \"\"" Sep 13 00:30:45.520312 kubelet[2745]: I0913 00:30:45.520213 2745 scope.go:117] "RemoveContainer" containerID="c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662" Sep 13 00:30:45.523016 
containerd[1597]: time="2025-09-13T00:30:45.522959920Z" level=info msg="RemoveContainer for \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\"" Sep 13 00:30:45.527054 systemd[1]: Removed slice kubepods-besteffort-pode87d86ae_6792_443b_88f5_38bdb041e7b4.slice - libcontainer container kubepods-besteffort-pode87d86ae_6792_443b_88f5_38bdb041e7b4.slice. Sep 13 00:30:45.533767 systemd[1]: Removed slice kubepods-burstable-pod9d87643d_2ddf_49a4_afdb_f8d00e83762f.slice - libcontainer container kubepods-burstable-pod9d87643d_2ddf_49a4_afdb_f8d00e83762f.slice. Sep 13 00:30:45.533878 systemd[1]: kubepods-burstable-pod9d87643d_2ddf_49a4_afdb_f8d00e83762f.slice: Consumed 10.735s CPU time, 125.8M memory peak, 636K read from disk, 13.3M written to disk. Sep 13 00:30:45.601135 containerd[1597]: time="2025-09-13T00:30:45.601006722Z" level=info msg="RemoveContainer for \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\" returns successfully" Sep 13 00:30:45.601409 kubelet[2745]: I0913 00:30:45.601373 2745 scope.go:117] "RemoveContainer" containerID="c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662" Sep 13 00:30:45.601963 containerd[1597]: time="2025-09-13T00:30:45.601880181Z" level=error msg="ContainerStatus for \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\": not found" Sep 13 00:30:45.606988 kubelet[2745]: E0913 00:30:45.606942 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\": not found" containerID="c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662" Sep 13 00:30:45.607102 kubelet[2745]: I0913 00:30:45.606992 2745 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"containerd","ID":"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662"} err="failed to get container status \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\": rpc error: code = NotFound desc = an error occurred when try to find container \"c411e444cea755f121086b4259314ccbe1b86a8c996e4f66169f370c21028662\": not found" Sep 13 00:30:45.607102 kubelet[2745]: I0913 00:30:45.607039 2745 scope.go:117] "RemoveContainer" containerID="fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916" Sep 13 00:30:45.609080 containerd[1597]: time="2025-09-13T00:30:45.609036294Z" level=info msg="RemoveContainer for \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\"" Sep 13 00:30:45.618876 containerd[1597]: time="2025-09-13T00:30:45.618825849Z" level=info msg="RemoveContainer for \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" returns successfully" Sep 13 00:30:45.619109 kubelet[2745]: I0913 00:30:45.619069 2745 scope.go:117] "RemoveContainer" containerID="be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86" Sep 13 00:30:45.621130 containerd[1597]: time="2025-09-13T00:30:45.621079271Z" level=info msg="RemoveContainer for \"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\"" Sep 13 00:30:45.628271 containerd[1597]: time="2025-09-13T00:30:45.628213564Z" level=info msg="RemoveContainer for \"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\" returns successfully" Sep 13 00:30:45.628549 kubelet[2745]: I0913 00:30:45.628519 2745 scope.go:117] "RemoveContainer" containerID="787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0" Sep 13 00:30:45.631048 containerd[1597]: time="2025-09-13T00:30:45.630965037Z" level=info msg="RemoveContainer for \"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\"" Sep 13 00:30:45.636929 containerd[1597]: time="2025-09-13T00:30:45.636834343Z" level=info msg="RemoveContainer for 
\"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\" returns successfully" Sep 13 00:30:45.637194 kubelet[2745]: I0913 00:30:45.637150 2745 scope.go:117] "RemoveContainer" containerID="5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151" Sep 13 00:30:45.638751 containerd[1597]: time="2025-09-13T00:30:45.638723558Z" level=info msg="RemoveContainer for \"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\"" Sep 13 00:30:45.643141 containerd[1597]: time="2025-09-13T00:30:45.642992844Z" level=info msg="RemoveContainer for \"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\" returns successfully" Sep 13 00:30:45.643338 kubelet[2745]: I0913 00:30:45.643247 2745 scope.go:117] "RemoveContainer" containerID="8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f" Sep 13 00:30:45.644975 containerd[1597]: time="2025-09-13T00:30:45.644942393Z" level=info msg="RemoveContainer for \"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\"" Sep 13 00:30:45.651012 containerd[1597]: time="2025-09-13T00:30:45.650945703Z" level=info msg="RemoveContainer for \"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\" returns successfully" Sep 13 00:30:45.651309 kubelet[2745]: I0913 00:30:45.651257 2745 scope.go:117] "RemoveContainer" containerID="fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916" Sep 13 00:30:45.651709 containerd[1597]: time="2025-09-13T00:30:45.651654420Z" level=error msg="ContainerStatus for \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\": not found" Sep 13 00:30:45.651855 kubelet[2745]: E0913 00:30:45.651824 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\": not found" containerID="fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916" Sep 13 00:30:45.651900 kubelet[2745]: I0913 00:30:45.651856 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916"} err="failed to get container status \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\": rpc error: code = NotFound desc = an error occurred when try to find container \"fcb8ab1b1073aa9a1e2b1076fc25d001bda0a98aae000bc244f60ba678d0c916\": not found" Sep 13 00:30:45.651900 kubelet[2745]: I0913 00:30:45.651880 2745 scope.go:117] "RemoveContainer" containerID="be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86" Sep 13 00:30:45.652171 containerd[1597]: time="2025-09-13T00:30:45.652126801Z" level=error msg="ContainerStatus for \"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\": not found" Sep 13 00:30:45.652281 kubelet[2745]: E0913 00:30:45.652254 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\": not found" containerID="be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86" Sep 13 00:30:45.652320 kubelet[2745]: I0913 00:30:45.652278 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86"} err="failed to get container status \"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"be49f78219d359559f172c5909b580b2531677c94626a9b3760ed84707a0ab86\": not found" Sep 13 00:30:45.652320 kubelet[2745]: I0913 00:30:45.652295 2745 scope.go:117] "RemoveContainer" containerID="787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0" Sep 13 00:30:45.652491 containerd[1597]: time="2025-09-13T00:30:45.652451013Z" level=error msg="ContainerStatus for \"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\": not found" Sep 13 00:30:45.652690 kubelet[2745]: E0913 00:30:45.652651 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\": not found" containerID="787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0" Sep 13 00:30:45.652739 kubelet[2745]: I0913 00:30:45.652698 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0"} err="failed to get container status \"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"787012f005684acdcbe82635285055d38c448e9eef021c0e84e0f6d3c21581a0\": not found" Sep 13 00:30:45.652739 kubelet[2745]: I0913 00:30:45.652732 2745 scope.go:117] "RemoveContainer" containerID="5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151" Sep 13 00:30:45.653015 containerd[1597]: time="2025-09-13T00:30:45.652930778Z" level=error msg="ContainerStatus for \"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\": not found" Sep 13 00:30:45.653164 kubelet[2745]: E0913 00:30:45.653079 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\": not found" containerID="5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151" Sep 13 00:30:45.653164 kubelet[2745]: I0913 00:30:45.653110 2745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151"} err="failed to get container status \"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\": rpc error: code = NotFound desc = an error occurred when try to find container \"5322dc3eacfc98e764b486bfc57da87827090f92c2b0ca948434373768719151\": not found" Sep 13 00:30:45.653164 kubelet[2745]: I0913 00:30:45.653136 2745 scope.go:117] "RemoveContainer" containerID="8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f" Sep 13 00:30:45.653385 containerd[1597]: time="2025-09-13T00:30:45.653320424Z" level=error msg="ContainerStatus for \"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\": not found" Sep 13 00:30:45.653523 kubelet[2745]: E0913 00:30:45.653496 2745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\": not found" containerID="8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f" Sep 13 00:30:45.653577 kubelet[2745]: I0913 00:30:45.653523 2745 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f"} err="failed to get container status \"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c7086e36e761c6b57ce6d4b76b9012a0f2048157a63f447cc03c6595cfb9b6f\": not found" Sep 13 00:30:46.020699 kubelet[2745]: I0913 00:30:46.020647 2745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d87643d-2ddf-49a4-afdb-f8d00e83762f" path="/var/lib/kubelet/pods/9d87643d-2ddf-49a4-afdb-f8d00e83762f/volumes" Sep 13 00:30:46.021425 kubelet[2745]: I0913 00:30:46.021394 2745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e87d86ae-6792-443b-88f5-38bdb041e7b4" path="/var/lib/kubelet/pods/e87d86ae-6792-443b-88f5-38bdb041e7b4/volumes" Sep 13 00:30:46.125175 systemd[1]: var-lib-kubelet-pods-e87d86ae\x2d6792\x2d443b\x2d88f5\x2d38bdb041e7b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djbw4m.mount: Deactivated successfully. Sep 13 00:30:46.125301 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2af0ca64e43b801175b1d8bb545fbf458c044e6f9b5042366b2d01a31f323ca-shm.mount: Deactivated successfully. Sep 13 00:30:46.125391 systemd[1]: var-lib-kubelet-pods-9d87643d\x2d2ddf\x2d49a4\x2dafdb\x2df8d00e83762f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5rwcm.mount: Deactivated successfully. Sep 13 00:30:46.125497 systemd[1]: var-lib-kubelet-pods-9d87643d\x2d2ddf\x2d49a4\x2dafdb\x2df8d00e83762f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:30:46.125591 systemd[1]: var-lib-kubelet-pods-9d87643d\x2d2ddf\x2d49a4\x2dafdb\x2df8d00e83762f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 13 00:30:46.167239 kubelet[2745]: E0913 00:30:46.167177 2745 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:30:47.010553 sshd[4424]: Connection closed by 10.0.0.1 port 60222 Sep 13 00:30:47.011121 sshd-session[4422]: pam_unix(sshd:session): session closed for user core Sep 13 00:30:47.021424 systemd[1]: sshd@27-10.0.0.98:22-10.0.0.1:60222.service: Deactivated successfully. Sep 13 00:30:47.023779 systemd[1]: session-28.scope: Deactivated successfully. Sep 13 00:30:47.024624 systemd-logind[1570]: Session 28 logged out. Waiting for processes to exit. Sep 13 00:30:47.027547 systemd[1]: Started sshd@28-10.0.0.98:22-10.0.0.1:60224.service - OpenSSH per-connection server daemon (10.0.0.1:60224). Sep 13 00:30:47.028316 systemd-logind[1570]: Removed session 28. Sep 13 00:30:47.083334 sshd[4580]: Accepted publickey for core from 10.0.0.1 port 60224 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:30:47.085187 sshd-session[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:30:47.090798 systemd-logind[1570]: New session 29 of user core. Sep 13 00:30:47.098635 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 13 00:30:47.850287 sshd[4582]: Connection closed by 10.0.0.1 port 60224 Sep 13 00:30:47.850641 sshd-session[4580]: pam_unix(sshd:session): session closed for user core Sep 13 00:30:47.862548 systemd[1]: sshd@28-10.0.0.98:22-10.0.0.1:60224.service: Deactivated successfully. Sep 13 00:30:47.864921 systemd[1]: session-29.scope: Deactivated successfully. Sep 13 00:30:47.865783 systemd-logind[1570]: Session 29 logged out. Waiting for processes to exit. Sep 13 00:30:47.869502 systemd-logind[1570]: Removed session 29. 
Sep 13 00:30:47.878832 systemd[1]: Started sshd@29-10.0.0.98:22-10.0.0.1:60238.service - OpenSSH per-connection server daemon (10.0.0.1:60238). Sep 13 00:30:47.898434 systemd[1]: Created slice kubepods-burstable-podb3ed6d90_0ee1_4610_91b8_647d298d583a.slice - libcontainer container kubepods-burstable-podb3ed6d90_0ee1_4610_91b8_647d298d583a.slice. Sep 13 00:30:47.934810 sshd[4593]: Accepted publickey for core from 10.0.0.1 port 60238 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:30:47.936736 sshd-session[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:30:47.941799 systemd-logind[1570]: New session 30 of user core. Sep 13 00:30:47.956675 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 13 00:30:47.996498 kubelet[2745]: I0913 00:30:47.996452 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b3ed6d90-0ee1-4610-91b8-647d298d583a-hostproc\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.996498 kubelet[2745]: I0913 00:30:47.996507 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3ed6d90-0ee1-4610-91b8-647d298d583a-etc-cni-netd\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.996498 kubelet[2745]: I0913 00:30:47.996542 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b3ed6d90-0ee1-4610-91b8-647d298d583a-bpf-maps\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.997024 kubelet[2745]: I0913 00:30:47.996561 2745 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3ed6d90-0ee1-4610-91b8-647d298d583a-cilium-config-path\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.997024 kubelet[2745]: I0913 00:30:47.996583 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgjmx\" (UniqueName: \"kubernetes.io/projected/b3ed6d90-0ee1-4610-91b8-647d298d583a-kube-api-access-sgjmx\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.997024 kubelet[2745]: I0913 00:30:47.996635 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b3ed6d90-0ee1-4610-91b8-647d298d583a-cilium-run\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.997024 kubelet[2745]: I0913 00:30:47.996718 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b3ed6d90-0ee1-4610-91b8-647d298d583a-host-proc-sys-kernel\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.997024 kubelet[2745]: I0913 00:30:47.996787 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b3ed6d90-0ee1-4610-91b8-647d298d583a-cni-path\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.997158 kubelet[2745]: I0913 00:30:47.996810 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b3ed6d90-0ee1-4610-91b8-647d298d583a-clustermesh-secrets\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.997158 kubelet[2745]: I0913 00:30:47.996835 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3ed6d90-0ee1-4610-91b8-647d298d583a-lib-modules\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.997158 kubelet[2745]: I0913 00:30:47.996854 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b3ed6d90-0ee1-4610-91b8-647d298d583a-host-proc-sys-net\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.997158 kubelet[2745]: I0913 00:30:47.996871 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b3ed6d90-0ee1-4610-91b8-647d298d583a-hubble-tls\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.997158 kubelet[2745]: I0913 00:30:47.996913 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b3ed6d90-0ee1-4610-91b8-647d298d583a-cilium-cgroup\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.997158 kubelet[2745]: I0913 00:30:47.996939 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3ed6d90-0ee1-4610-91b8-647d298d583a-xtables-lock\") pod \"cilium-hc728\" (UID: 
\"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:47.997315 kubelet[2745]: I0913 00:30:47.996964 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b3ed6d90-0ee1-4610-91b8-647d298d583a-cilium-ipsec-secrets\") pod \"cilium-hc728\" (UID: \"b3ed6d90-0ee1-4610-91b8-647d298d583a\") " pod="kube-system/cilium-hc728" Sep 13 00:30:48.008356 sshd[4595]: Connection closed by 10.0.0.1 port 60238 Sep 13 00:30:48.008809 sshd-session[4593]: pam_unix(sshd:session): session closed for user core Sep 13 00:30:48.022724 systemd[1]: sshd@29-10.0.0.98:22-10.0.0.1:60238.service: Deactivated successfully. Sep 13 00:30:48.024806 systemd[1]: session-30.scope: Deactivated successfully. Sep 13 00:30:48.025846 systemd-logind[1570]: Session 30 logged out. Waiting for processes to exit. Sep 13 00:30:48.030233 systemd[1]: Started sshd@30-10.0.0.98:22-10.0.0.1:60254.service - OpenSSH per-connection server daemon (10.0.0.1:60254). Sep 13 00:30:48.031029 systemd-logind[1570]: Removed session 30. Sep 13 00:30:48.081980 sshd[4602]: Accepted publickey for core from 10.0.0.1 port 60254 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:30:48.083706 sshd-session[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:30:48.089134 systemd-logind[1570]: New session 31 of user core. Sep 13 00:30:48.098642 systemd[1]: Started session-31.scope - Session 31 of User core. 
Sep 13 00:30:48.207728 kubelet[2745]: E0913 00:30:48.207383 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:30:48.208401 containerd[1597]: time="2025-09-13T00:30:48.208104980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hc728,Uid:b3ed6d90-0ee1-4610-91b8-647d298d583a,Namespace:kube-system,Attempt:0,}" Sep 13 00:30:48.231184 containerd[1597]: time="2025-09-13T00:30:48.231127365Z" level=info msg="connecting to shim fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9" address="unix:///run/containerd/s/100f2038c5805872046cb43c44f514729b640cb7346389edfb970cf4bc107a9f" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:30:48.255656 systemd[1]: Started cri-containerd-fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9.scope - libcontainer container fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9. 
Sep 13 00:30:48.283864 containerd[1597]: time="2025-09-13T00:30:48.283808525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hc728,Uid:b3ed6d90-0ee1-4610-91b8-647d298d583a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9\"" Sep 13 00:30:48.284590 kubelet[2745]: E0913 00:30:48.284557 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:30:48.296702 containerd[1597]: time="2025-09-13T00:30:48.296654251Z" level=info msg="CreateContainer within sandbox \"fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:30:48.303609 containerd[1597]: time="2025-09-13T00:30:48.303568335Z" level=info msg="Container e88b928e8a8b69c9394afc04b7c45fa324a4fdd7ff9b5ef3036028559308118a: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:30:48.312411 containerd[1597]: time="2025-09-13T00:30:48.312371783Z" level=info msg="CreateContainer within sandbox \"fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e88b928e8a8b69c9394afc04b7c45fa324a4fdd7ff9b5ef3036028559308118a\"" Sep 13 00:30:48.313519 containerd[1597]: time="2025-09-13T00:30:48.312980853Z" level=info msg="StartContainer for \"e88b928e8a8b69c9394afc04b7c45fa324a4fdd7ff9b5ef3036028559308118a\"" Sep 13 00:30:48.315020 containerd[1597]: time="2025-09-13T00:30:48.314947023Z" level=info msg="connecting to shim e88b928e8a8b69c9394afc04b7c45fa324a4fdd7ff9b5ef3036028559308118a" address="unix:///run/containerd/s/100f2038c5805872046cb43c44f514729b640cb7346389edfb970cf4bc107a9f" protocol=ttrpc version=3 Sep 13 00:30:48.341877 systemd[1]: Started cri-containerd-e88b928e8a8b69c9394afc04b7c45fa324a4fdd7ff9b5ef3036028559308118a.scope - libcontainer 
container e88b928e8a8b69c9394afc04b7c45fa324a4fdd7ff9b5ef3036028559308118a. Sep 13 00:30:48.382689 containerd[1597]: time="2025-09-13T00:30:48.382609447Z" level=info msg="StartContainer for \"e88b928e8a8b69c9394afc04b7c45fa324a4fdd7ff9b5ef3036028559308118a\" returns successfully" Sep 13 00:30:48.393839 systemd[1]: cri-containerd-e88b928e8a8b69c9394afc04b7c45fa324a4fdd7ff9b5ef3036028559308118a.scope: Deactivated successfully. Sep 13 00:30:48.396223 containerd[1597]: time="2025-09-13T00:30:48.396192183Z" level=info msg="received exit event container_id:\"e88b928e8a8b69c9394afc04b7c45fa324a4fdd7ff9b5ef3036028559308118a\" id:\"e88b928e8a8b69c9394afc04b7c45fa324a4fdd7ff9b5ef3036028559308118a\" pid:4675 exited_at:{seconds:1757723448 nanos:395852431}" Sep 13 00:30:48.396455 containerd[1597]: time="2025-09-13T00:30:48.396236456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e88b928e8a8b69c9394afc04b7c45fa324a4fdd7ff9b5ef3036028559308118a\" id:\"e88b928e8a8b69c9394afc04b7c45fa324a4fdd7ff9b5ef3036028559308118a\" pid:4675 exited_at:{seconds:1757723448 nanos:395852431}" Sep 13 00:30:48.536826 kubelet[2745]: E0913 00:30:48.536670 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:30:48.547979 containerd[1597]: time="2025-09-13T00:30:48.547915370Z" level=info msg="CreateContainer within sandbox \"fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:30:48.564793 containerd[1597]: time="2025-09-13T00:30:48.564743007Z" level=info msg="Container 436a9b9873cf1d2d2ff5222416565b978a3b06b89d4106d0872adf6bdf8678ed: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:30:48.573895 containerd[1597]: time="2025-09-13T00:30:48.573852554Z" level=info msg="CreateContainer within sandbox 
\"fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"436a9b9873cf1d2d2ff5222416565b978a3b06b89d4106d0872adf6bdf8678ed\"" Sep 13 00:30:48.574434 containerd[1597]: time="2025-09-13T00:30:48.574397863Z" level=info msg="StartContainer for \"436a9b9873cf1d2d2ff5222416565b978a3b06b89d4106d0872adf6bdf8678ed\"" Sep 13 00:30:48.575426 containerd[1597]: time="2025-09-13T00:30:48.575388171Z" level=info msg="connecting to shim 436a9b9873cf1d2d2ff5222416565b978a3b06b89d4106d0872adf6bdf8678ed" address="unix:///run/containerd/s/100f2038c5805872046cb43c44f514729b640cb7346389edfb970cf4bc107a9f" protocol=ttrpc version=3 Sep 13 00:30:48.602661 systemd[1]: Started cri-containerd-436a9b9873cf1d2d2ff5222416565b978a3b06b89d4106d0872adf6bdf8678ed.scope - libcontainer container 436a9b9873cf1d2d2ff5222416565b978a3b06b89d4106d0872adf6bdf8678ed. Sep 13 00:30:48.635584 containerd[1597]: time="2025-09-13T00:30:48.635535748Z" level=info msg="StartContainer for \"436a9b9873cf1d2d2ff5222416565b978a3b06b89d4106d0872adf6bdf8678ed\" returns successfully" Sep 13 00:30:48.640536 systemd[1]: cri-containerd-436a9b9873cf1d2d2ff5222416565b978a3b06b89d4106d0872adf6bdf8678ed.scope: Deactivated successfully. 
Sep 13 00:30:48.641421 containerd[1597]: time="2025-09-13T00:30:48.641120685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"436a9b9873cf1d2d2ff5222416565b978a3b06b89d4106d0872adf6bdf8678ed\" id:\"436a9b9873cf1d2d2ff5222416565b978a3b06b89d4106d0872adf6bdf8678ed\" pid:4720 exited_at:{seconds:1757723448 nanos:640796714}" Sep 13 00:30:48.641421 containerd[1597]: time="2025-09-13T00:30:48.641216806Z" level=info msg="received exit event container_id:\"436a9b9873cf1d2d2ff5222416565b978a3b06b89d4106d0872adf6bdf8678ed\" id:\"436a9b9873cf1d2d2ff5222416565b978a3b06b89d4106d0872adf6bdf8678ed\" pid:4720 exited_at:{seconds:1757723448 nanos:640796714}" Sep 13 00:30:49.540375 kubelet[2745]: E0913 00:30:49.540343 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:30:49.545991 containerd[1597]: time="2025-09-13T00:30:49.545942860Z" level=info msg="CreateContainer within sandbox \"fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:30:49.556308 containerd[1597]: time="2025-09-13T00:30:49.556248532Z" level=info msg="Container 5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:30:49.570949 containerd[1597]: time="2025-09-13T00:30:49.570892157Z" level=info msg="CreateContainer within sandbox \"fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89\"" Sep 13 00:30:49.573523 containerd[1597]: time="2025-09-13T00:30:49.571408200Z" level=info msg="StartContainer for \"5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89\"" Sep 13 00:30:49.573523 containerd[1597]: time="2025-09-13T00:30:49.572829572Z" level=info 
msg="connecting to shim 5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89" address="unix:///run/containerd/s/100f2038c5805872046cb43c44f514729b640cb7346389edfb970cf4bc107a9f" protocol=ttrpc version=3 Sep 13 00:30:49.596633 systemd[1]: Started cri-containerd-5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89.scope - libcontainer container 5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89. Sep 13 00:30:49.639917 containerd[1597]: time="2025-09-13T00:30:49.639872137Z" level=info msg="StartContainer for \"5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89\" returns successfully" Sep 13 00:30:49.645759 systemd[1]: cri-containerd-5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89.scope: Deactivated successfully. Sep 13 00:30:49.646673 containerd[1597]: time="2025-09-13T00:30:49.646637781Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89\" id:\"5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89\" pid:4764 exited_at:{seconds:1757723449 nanos:646346802}" Sep 13 00:30:49.646744 containerd[1597]: time="2025-09-13T00:30:49.646706320Z" level=info msg="received exit event container_id:\"5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89\" id:\"5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89\" pid:4764 exited_at:{seconds:1757723449 nanos:646346802}" Sep 13 00:30:49.668768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ca703c35230553d9dc89aba2d4a3d927356b9db9af78fc40d464f17f62d3f89-rootfs.mount: Deactivated successfully. 
Sep 13 00:30:49.728193 kubelet[2745]: I0913 00:30:49.728087 2745 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:30:49Z","lastTransitionTime":"2025-09-13T00:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:30:50.548277 kubelet[2745]: E0913 00:30:50.548205 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:30:50.670626 containerd[1597]: time="2025-09-13T00:30:50.670409619Z" level=info msg="CreateContainer within sandbox \"fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:30:50.808372 containerd[1597]: time="2025-09-13T00:30:50.808220529Z" level=info msg="Container 2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:30:51.024731 containerd[1597]: time="2025-09-13T00:30:51.024654457Z" level=info msg="CreateContainer within sandbox \"fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93\"" Sep 13 00:30:51.026772 containerd[1597]: time="2025-09-13T00:30:51.026744800Z" level=info msg="StartContainer for \"2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93\"" Sep 13 00:30:51.029905 containerd[1597]: time="2025-09-13T00:30:51.029870966Z" level=info msg="connecting to shim 2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93" address="unix:///run/containerd/s/100f2038c5805872046cb43c44f514729b640cb7346389edfb970cf4bc107a9f" protocol=ttrpc version=3 Sep 13 
00:30:51.052639 systemd[1]: Started cri-containerd-2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93.scope - libcontainer container 2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93. Sep 13 00:30:51.083767 systemd[1]: cri-containerd-2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93.scope: Deactivated successfully. Sep 13 00:30:51.084231 containerd[1597]: time="2025-09-13T00:30:51.084186087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93\" id:\"2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93\" pid:4802 exited_at:{seconds:1757723451 nanos:83944581}" Sep 13 00:30:51.168817 kubelet[2745]: E0913 00:30:51.168744 2745 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:30:51.204755 containerd[1597]: time="2025-09-13T00:30:51.204689512Z" level=info msg="received exit event container_id:\"2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93\" id:\"2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93\" pid:4802 exited_at:{seconds:1757723451 nanos:83944581}" Sep 13 00:30:51.213905 containerd[1597]: time="2025-09-13T00:30:51.213876902Z" level=info msg="StartContainer for \"2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93\" returns successfully" Sep 13 00:30:51.228209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b45af906dd0a4d56a50e0e4a55217008dc18da05e919a95e26ea11e85220e93-rootfs.mount: Deactivated successfully. 
Sep 13 00:30:51.554805 kubelet[2745]: E0913 00:30:51.554755 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:30:51.564062 containerd[1597]: time="2025-09-13T00:30:51.564007735Z" level=info msg="CreateContainer within sandbox \"fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:30:51.579227 containerd[1597]: time="2025-09-13T00:30:51.579165736Z" level=info msg="Container 39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:30:51.588599 containerd[1597]: time="2025-09-13T00:30:51.588548825Z" level=info msg="CreateContainer within sandbox \"fdb260f59aad19c4e0c5788e07788f6bf6628cbccf1c060e36a1f2906aff12a9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097\"" Sep 13 00:30:51.589390 containerd[1597]: time="2025-09-13T00:30:51.589352651Z" level=info msg="StartContainer for \"39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097\"" Sep 13 00:30:51.595377 containerd[1597]: time="2025-09-13T00:30:51.595301041Z" level=info msg="connecting to shim 39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097" address="unix:///run/containerd/s/100f2038c5805872046cb43c44f514729b640cb7346389edfb970cf4bc107a9f" protocol=ttrpc version=3 Sep 13 00:30:51.617637 systemd[1]: Started cri-containerd-39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097.scope - libcontainer container 39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097. 
Sep 13 00:30:51.671144 containerd[1597]: time="2025-09-13T00:30:51.671085728Z" level=info msg="StartContainer for \"39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097\" returns successfully" Sep 13 00:30:51.784468 containerd[1597]: time="2025-09-13T00:30:51.784408118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097\" id:\"30ac8561e33455db89f537da61d943006c7924641310885461054614c5d64675\" pid:4870 exited_at:{seconds:1757723451 nanos:783940496}" Sep 13 00:30:52.264518 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 13 00:30:52.561414 kubelet[2745]: E0913 00:30:52.561285 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:30:52.578877 kubelet[2745]: I0913 00:30:52.578808 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hc728" podStartSLOduration=5.5787901 podStartE2EDuration="5.5787901s" podCreationTimestamp="2025-09-13 00:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:30:52.577743376 +0000 UTC m=+136.666479133" watchObservedRunningTime="2025-09-13 00:30:52.5787901 +0000 UTC m=+136.667525857" Sep 13 00:30:54.209015 kubelet[2745]: E0913 00:30:54.208959 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:30:54.732989 containerd[1597]: time="2025-09-13T00:30:54.732901370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097\" id:\"edeb2397d0f3c080b9830aa5d836baaa1bf6918a5db80ee2cae05d79aea81ce3\" pid:5151 exit_status:1 
exited_at:{seconds:1757723454 nanos:729137130}" Sep 13 00:30:56.256416 systemd-networkd[1496]: lxc_health: Link UP Sep 13 00:30:56.258135 systemd-networkd[1496]: lxc_health: Gained carrier Sep 13 00:30:56.898192 containerd[1597]: time="2025-09-13T00:30:56.898142334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097\" id:\"84f667fc47bb1d8eac813449e8ca439282959548d058509752015ece4b3ca330\" pid:5398 exited_at:{seconds:1757723456 nanos:897455970}" Sep 13 00:30:58.020537 kubelet[2745]: E0913 00:30:58.020434 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:30:58.209738 systemd-networkd[1496]: lxc_health: Gained IPv6LL Sep 13 00:30:58.211126 kubelet[2745]: E0913 00:30:58.211034 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:30:58.578305 kubelet[2745]: E0913 00:30:58.578273 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:30:59.017156 containerd[1597]: time="2025-09-13T00:30:59.017074991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097\" id:\"ed2a879710ccc3959d2dd3215a9d861358c75b55ec6f17f1b4175b01ade0afd8\" pid:5434 exited_at:{seconds:1757723459 nanos:16398856}" Sep 13 00:30:59.580653 kubelet[2745]: E0913 00:30:59.580604 2745 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:31:01.124201 containerd[1597]: time="2025-09-13T00:31:01.124023002Z" level=info msg="TaskExit 
event in podsandbox handler container_id:\"39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097\" id:\"ea71f31289035f9b457e674664361ba5ce59e71721f79f2acc0f2e3d11606d1d\" pid:5467 exited_at:{seconds:1757723461 nanos:123447457}" Sep 13 00:31:03.228156 containerd[1597]: time="2025-09-13T00:31:03.228106427Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39d8a3bf1cac348cf19613ecfebb90ba71635407ab9feaccfbdf21d6da50e097\" id:\"bd5e401c855bdb283c51ba7b641644e5df55a75107712e8993edb345fda0ea71\" pid:5492 exited_at:{seconds:1757723463 nanos:227719267}" Sep 13 00:31:03.259035 sshd[4608]: Connection closed by 10.0.0.1 port 60254 Sep 13 00:31:03.259721 sshd-session[4602]: pam_unix(sshd:session): session closed for user core Sep 13 00:31:03.266151 systemd[1]: sshd@30-10.0.0.98:22-10.0.0.1:60254.service: Deactivated successfully. Sep 13 00:31:03.269313 systemd[1]: session-31.scope: Deactivated successfully. Sep 13 00:31:03.270298 systemd-logind[1570]: Session 31 logged out. Waiting for processes to exit. Sep 13 00:31:03.272016 systemd-logind[1570]: Removed session 31.