Sep 4 16:20:05.810236 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 14:31:01 -00 2025 Sep 4 16:20:05.810260 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39929ed91cc8dec12f10b74359379a21a9960032f4b779521fabb4147461485b Sep 4 16:20:05.810269 kernel: BIOS-provided physical RAM map: Sep 4 16:20:05.810276 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 16:20:05.810282 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 4 16:20:05.810289 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 4 16:20:05.810297 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 4 16:20:05.810304 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 4 16:20:05.810316 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 4 16:20:05.810323 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 4 16:20:05.810330 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 4 16:20:05.810337 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 4 16:20:05.810344 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 4 16:20:05.810351 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 4 16:20:05.810361 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 4 16:20:05.810369 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 4 16:20:05.810376 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 4 16:20:05.810383 kernel: BIOS-e820: 
[mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 4 16:20:05.810391 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 4 16:20:05.810398 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 4 16:20:05.810405 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 4 16:20:05.810413 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 4 16:20:05.810422 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 4 16:20:05.810429 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 4 16:20:05.810436 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 4 16:20:05.810443 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 4 16:20:05.810451 kernel: NX (Execute Disable) protection: active Sep 4 16:20:05.810458 kernel: APIC: Static calls initialized Sep 4 16:20:05.810465 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Sep 4 16:20:05.810472 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Sep 4 16:20:05.810479 kernel: extended physical RAM map: Sep 4 16:20:05.810487 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 16:20:05.810494 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 4 16:20:05.810504 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 4 16:20:05.810511 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 4 16:20:05.810519 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 4 16:20:05.810526 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 4 16:20:05.810533 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 4 16:20:05.810541 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Sep 4 
16:20:05.810548 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Sep 4 16:20:05.810561 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Sep 4 16:20:05.810568 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Sep 4 16:20:05.810576 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Sep 4 16:20:05.810583 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 4 16:20:05.810591 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 4 16:20:05.810598 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 4 16:20:05.810606 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 4 16:20:05.810616 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 4 16:20:05.810623 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 4 16:20:05.810631 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 4 16:20:05.810638 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 4 16:20:05.810646 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 4 16:20:05.810653 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 4 16:20:05.810661 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 4 16:20:05.810668 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 4 16:20:05.810676 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 4 16:20:05.810683 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 4 16:20:05.810693 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 4 16:20:05.810703 kernel: efi: EFI v2.7 by EDK II Sep 4 
16:20:05.810711 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Sep 4 16:20:05.810718 kernel: random: crng init done Sep 4 16:20:05.810726 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 4 16:20:05.810747 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 4 16:20:05.810764 kernel: secureboot: Secure boot disabled Sep 4 16:20:05.810772 kernel: SMBIOS 2.8 present. Sep 4 16:20:05.810779 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 4 16:20:05.810787 kernel: DMI: Memory slots populated: 1/1 Sep 4 16:20:05.810794 kernel: Hypervisor detected: KVM Sep 4 16:20:05.810804 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 16:20:05.810812 kernel: kvm-clock: using sched offset of 4231269877 cycles Sep 4 16:20:05.810820 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 16:20:05.810828 kernel: tsc: Detected 2794.748 MHz processor Sep 4 16:20:05.810837 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 16:20:05.810845 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 16:20:05.810852 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 4 16:20:05.810860 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 4 16:20:05.810871 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 16:20:05.810878 kernel: Using GB pages for direct mapping Sep 4 16:20:05.810886 kernel: ACPI: Early table checksum verification disabled Sep 4 16:20:05.810894 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 4 16:20:05.810903 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 4 16:20:05.810911 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:20:05.810919 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 
00000001 BXPC 00000001) Sep 4 16:20:05.810926 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 4 16:20:05.810937 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:20:05.810945 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:20:05.810953 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:20:05.810961 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 16:20:05.810968 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 4 16:20:05.810976 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 4 16:20:05.810985 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 4 16:20:05.811002 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 4 16:20:05.811010 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 4 16:20:05.811018 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 4 16:20:05.811026 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 4 16:20:05.811034 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 4 16:20:05.811042 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 4 16:20:05.811050 kernel: No NUMA configuration found Sep 4 16:20:05.811061 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 4 16:20:05.811069 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Sep 4 16:20:05.811076 kernel: Zone ranges: Sep 4 16:20:05.811084 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 16:20:05.811092 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 4 16:20:05.811100 kernel: Normal empty Sep 4 16:20:05.811108 kernel: Device empty Sep 4 16:20:05.811116 kernel: Movable zone start for each node Sep 4 16:20:05.811126 kernel: Early 
memory node ranges Sep 4 16:20:05.811134 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 4 16:20:05.811141 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 4 16:20:05.811149 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 4 16:20:05.811157 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 4 16:20:05.811165 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 4 16:20:05.811173 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 4 16:20:05.811182 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Sep 4 16:20:05.811190 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Sep 4 16:20:05.811198 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 4 16:20:05.811206 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 16:20:05.811217 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 4 16:20:05.811232 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 4 16:20:05.811242 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 16:20:05.811250 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 4 16:20:05.811259 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 4 16:20:05.811267 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 4 16:20:05.811277 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 4 16:20:05.811285 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 4 16:20:05.811294 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 4 16:20:05.811302 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 16:20:05.811312 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 4 16:20:05.811320 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 4 16:20:05.811328 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 16:20:05.811337 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 
global_irq 9 high level) Sep 4 16:20:05.811345 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 16:20:05.811353 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 16:20:05.811361 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 16:20:05.811372 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 16:20:05.811380 kernel: TSC deadline timer available Sep 4 16:20:05.811388 kernel: CPU topo: Max. logical packages: 1 Sep 4 16:20:05.811396 kernel: CPU topo: Max. logical dies: 1 Sep 4 16:20:05.811404 kernel: CPU topo: Max. dies per package: 1 Sep 4 16:20:05.811412 kernel: CPU topo: Max. threads per core: 1 Sep 4 16:20:05.811420 kernel: CPU topo: Num. cores per package: 4 Sep 4 16:20:05.811430 kernel: CPU topo: Num. threads per package: 4 Sep 4 16:20:05.811439 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 4 16:20:05.811447 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 4 16:20:05.811455 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 4 16:20:05.811463 kernel: kvm-guest: setup PV sched yield Sep 4 16:20:05.811471 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 4 16:20:05.811479 kernel: Booting paravirtualized kernel on KVM Sep 4 16:20:05.811487 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 16:20:05.811498 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 4 16:20:05.811506 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 4 16:20:05.811514 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 4 16:20:05.811522 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 4 16:20:05.811530 kernel: kvm-guest: PV spinlocks enabled Sep 4 16:20:05.811538 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 16:20:05.811548 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39929ed91cc8dec12f10b74359379a21a9960032f4b779521fabb4147461485b Sep 4 16:20:05.811559 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 16:20:05.811567 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 16:20:05.811576 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 16:20:05.811584 kernel: Fallback order for Node 0: 0 Sep 4 16:20:05.811592 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Sep 4 16:20:05.811600 kernel: Policy zone: DMA32 Sep 4 16:20:05.811610 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 16:20:05.811619 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 16:20:05.811627 kernel: ftrace: allocating 40102 entries in 157 pages Sep 4 16:20:05.811635 kernel: ftrace: allocated 157 pages with 5 groups Sep 4 16:20:05.811643 kernel: Dynamic Preempt: voluntary Sep 4 16:20:05.811651 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 16:20:05.811660 kernel: rcu: RCU event tracing is enabled. Sep 4 16:20:05.811670 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 16:20:05.811679 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 16:20:05.811687 kernel: Rude variant of Tasks RCU enabled. Sep 4 16:20:05.811695 kernel: Tracing variant of Tasks RCU enabled. Sep 4 16:20:05.811703 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 16:20:05.811714 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 16:20:05.811722 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 4 16:20:05.811731 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 4 16:20:05.811757 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 4 16:20:05.811766 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 4 16:20:05.811774 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 16:20:05.811782 kernel: Console: colour dummy device 80x25 Sep 4 16:20:05.811790 kernel: printk: legacy console [ttyS0] enabled Sep 4 16:20:05.811798 kernel: ACPI: Core revision 20240827 Sep 4 16:20:05.811807 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 4 16:20:05.811817 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 16:20:05.811826 kernel: x2apic enabled Sep 4 16:20:05.811834 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 16:20:05.811842 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 4 16:20:05.811850 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 4 16:20:05.811859 kernel: kvm-guest: setup PV IPIs Sep 4 16:20:05.811867 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 4 16:20:05.811878 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 4 16:20:05.811886 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 4 16:20:05.811895 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 4 16:20:05.811903 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 4 16:20:05.811911 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 4 16:20:05.811919 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 16:20:05.811927 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 16:20:05.811938 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 4 16:20:05.811946 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 4 16:20:05.811954 kernel: active return thunk: retbleed_return_thunk Sep 4 16:20:05.811962 kernel: RETBleed: Mitigation: untrained return thunk Sep 4 16:20:05.811971 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 16:20:05.811979 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 16:20:05.811987 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 4 16:20:05.812006 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 4 16:20:05.812015 kernel: active return thunk: srso_return_thunk Sep 4 16:20:05.812023 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 4 16:20:05.812031 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 16:20:05.812040 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 16:20:05.812048 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 16:20:05.812056 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 16:20:05.812067 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Sep 4 16:20:05.812075 kernel: Freeing SMP alternatives memory: 32K Sep 4 16:20:05.812083 kernel: pid_max: default: 32768 minimum: 301 Sep 4 16:20:05.812091 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 4 16:20:05.812099 kernel: landlock: Up and running. Sep 4 16:20:05.812108 kernel: SELinux: Initializing. Sep 4 16:20:05.812116 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 16:20:05.812126 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 16:20:05.812134 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 4 16:20:05.812143 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 4 16:20:05.812151 kernel: ... version: 0 Sep 4 16:20:05.812159 kernel: ... bit width: 48 Sep 4 16:20:05.812170 kernel: ... generic registers: 6 Sep 4 16:20:05.812181 kernel: ... value mask: 0000ffffffffffff Sep 4 16:20:05.812192 kernel: ... max period: 00007fffffffffff Sep 4 16:20:05.812201 kernel: ... fixed-purpose events: 0 Sep 4 16:20:05.812209 kernel: ... event mask: 000000000000003f Sep 4 16:20:05.812217 kernel: signal: max sigframe size: 1776 Sep 4 16:20:05.812225 kernel: rcu: Hierarchical SRCU implementation. Sep 4 16:20:05.812233 kernel: rcu: Max phase no-delay instances is 400. Sep 4 16:20:05.812244 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 4 16:20:05.812255 kernel: smp: Bringing up secondary CPUs ... Sep 4 16:20:05.812263 kernel: smpboot: x86: Booting SMP configuration: Sep 4 16:20:05.812271 kernel: .... 
node #0, CPUs: #1 #2 #3 Sep 4 16:20:05.812279 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 16:20:05.812287 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 4 16:20:05.812296 kernel: Memory: 2422676K/2565800K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54288K init, 2680K bss, 137196K reserved, 0K cma-reserved) Sep 4 16:20:05.812304 kernel: devtmpfs: initialized Sep 4 16:20:05.812315 kernel: x86/mm: Memory block size: 128MB Sep 4 16:20:05.812323 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 4 16:20:05.812331 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 4 16:20:05.812340 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 4 16:20:05.812348 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 4 16:20:05.812356 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Sep 4 16:20:05.812364 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 4 16:20:05.812375 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 16:20:05.812383 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 16:20:05.812391 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 16:20:05.812399 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 16:20:05.812408 kernel: audit: initializing netlink subsys (disabled) Sep 4 16:20:05.812416 kernel: audit: type=2000 audit(1757002804.171:1): state=initialized audit_enabled=0 res=1 Sep 4 16:20:05.812424 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 16:20:05.812434 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 16:20:05.812443 kernel: cpuidle: using governor menu Sep 4 16:20:05.812451 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 
0.5 Sep 4 16:20:05.812459 kernel: dca service started, version 1.12.1 Sep 4 16:20:05.812467 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Sep 4 16:20:05.812475 kernel: PCI: Using configuration type 1 for base access Sep 4 16:20:05.812484 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 4 16:20:05.812494 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 16:20:05.812502 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 16:20:05.812510 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 16:20:05.812518 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 16:20:05.812527 kernel: ACPI: Added _OSI(Module Device) Sep 4 16:20:05.812535 kernel: ACPI: Added _OSI(Processor Device) Sep 4 16:20:05.812543 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 16:20:05.812553 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 16:20:05.812561 kernel: ACPI: Interpreter enabled Sep 4 16:20:05.812569 kernel: ACPI: PM: (supports S0 S3 S5) Sep 4 16:20:05.812578 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 16:20:05.812586 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 16:20:05.812594 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 16:20:05.812602 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 4 16:20:05.812613 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 16:20:05.812890 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 16:20:05.813069 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 4 16:20:05.813233 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 4 16:20:05.813244 kernel: PCI host bridge to bus 0000:00 Sep 4 16:20:05.813418 kernel: pci_bus 0000:00: root bus 
resource [io 0x0000-0x0cf7 window] Sep 4 16:20:05.813574 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 16:20:05.813886 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 16:20:05.814072 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 4 16:20:05.814222 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 4 16:20:05.814369 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 4 16:20:05.814516 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 16:20:05.814723 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 4 16:20:05.814952 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 4 16:20:05.815124 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Sep 4 16:20:05.815285 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Sep 4 16:20:05.815455 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 4 16:20:05.815618 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 16:20:05.815846 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 4 16:20:05.816021 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Sep 4 16:20:05.816183 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Sep 4 16:20:05.816342 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Sep 4 16:20:05.816511 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 4 16:20:05.816679 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Sep 4 16:20:05.816856 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Sep 4 16:20:05.817026 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Sep 4 16:20:05.817205 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 
0x020000 conventional PCI endpoint Sep 4 16:20:05.817372 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Sep 4 16:20:05.817538 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Sep 4 16:20:05.817700 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 4 16:20:05.817927 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Sep 4 16:20:05.818112 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 4 16:20:05.818272 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 4 16:20:05.818443 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 4 16:20:05.818608 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Sep 4 16:20:05.818780 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Sep 4 16:20:05.818957 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 4 16:20:05.819124 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Sep 4 16:20:05.819137 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 16:20:05.819146 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 16:20:05.819159 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 4 16:20:05.819168 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 16:20:05.819177 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 4 16:20:05.819186 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 4 16:20:05.819194 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 4 16:20:05.819203 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 4 16:20:05.819212 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 4 16:20:05.819223 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 4 16:20:05.819231 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 4 16:20:05.819240 kernel: ACPI: PCI: Interrupt 
link GSID configured for IRQ 19 Sep 4 16:20:05.819249 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 4 16:20:05.819257 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 4 16:20:05.819266 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 4 16:20:05.819275 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 4 16:20:05.819286 kernel: iommu: Default domain type: Translated Sep 4 16:20:05.819296 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 16:20:05.819305 kernel: efivars: Registered efivars operations Sep 4 16:20:05.819313 kernel: PCI: Using ACPI for IRQ routing Sep 4 16:20:05.819322 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 16:20:05.819331 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 4 16:20:05.819339 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 4 16:20:05.819350 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Sep 4 16:20:05.819359 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Sep 4 16:20:05.819367 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 4 16:20:05.819376 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Sep 4 16:20:05.819384 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Sep 4 16:20:05.819393 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Sep 4 16:20:05.819552 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 4 16:20:05.819714 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 4 16:20:05.819890 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 16:20:05.819902 kernel: vgaarb: loaded Sep 4 16:20:05.819911 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 4 16:20:05.819920 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 4 16:20:05.819929 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 16:20:05.819942 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 
16:20:05.819951 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 16:20:05.819960 kernel: pnp: PnP ACPI init Sep 4 16:20:05.820180 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 4 16:20:05.820197 kernel: pnp: PnP ACPI: found 6 devices Sep 4 16:20:05.820206 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 16:20:05.820215 kernel: NET: Registered PF_INET protocol family Sep 4 16:20:05.820227 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 16:20:05.820236 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 16:20:05.820245 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 16:20:05.820254 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 16:20:05.820263 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 16:20:05.820273 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 16:20:05.820282 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 16:20:05.820293 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 16:20:05.820305 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 16:20:05.820314 kernel: NET: Registered PF_XDP protocol family Sep 4 16:20:05.820474 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Sep 4 16:20:05.820636 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Sep 4 16:20:05.820811 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 16:20:05.820967 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 16:20:05.821124 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 16:20:05.821273 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] 
Sep 4 16:20:05.821427 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 4 16:20:05.821574 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 4 16:20:05.821586 kernel: PCI: CLS 0 bytes, default 64
Sep 4 16:20:05.821595 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 4 16:20:05.821609 kernel: Initialise system trusted keyrings
Sep 4 16:20:05.821620 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 16:20:05.821630 kernel: Key type asymmetric registered
Sep 4 16:20:05.821639 kernel: Asymmetric key parser 'x509' registered
Sep 4 16:20:05.821650 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 16:20:05.821659 kernel: io scheduler mq-deadline registered
Sep 4 16:20:05.821668 kernel: io scheduler kyber registered
Sep 4 16:20:05.821677 kernel: io scheduler bfq registered
Sep 4 16:20:05.821686 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 16:20:05.821696 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 16:20:05.821705 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 16:20:05.821717 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 4 16:20:05.821725 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 16:20:05.821761 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 16:20:05.821770 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 16:20:05.821780 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 16:20:05.821788 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 16:20:05.821960 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 16:20:05.821978 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 16:20:05.822138 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 16:20:05.822292 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T16:20:05 UTC (1757002805)
Sep 4 16:20:05.822444 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 4 16:20:05.822456 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 16:20:05.822466 kernel: efifb: probing for efifb
Sep 4 16:20:05.822475 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 4 16:20:05.822488 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 4 16:20:05.822497 kernel: efifb: scrolling: redraw
Sep 4 16:20:05.822506 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 16:20:05.822515 kernel: Console: switching to colour frame buffer device 160x50
Sep 4 16:20:05.822524 kernel: fb0: EFI VGA frame buffer device
Sep 4 16:20:05.822533 kernel: pstore: Using crash dump compression: deflate
Sep 4 16:20:05.822542 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 16:20:05.822553 kernel: NET: Registered PF_INET6 protocol family
Sep 4 16:20:05.822562 kernel: Segment Routing with IPv6
Sep 4 16:20:05.822571 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 16:20:05.822580 kernel: NET: Registered PF_PACKET protocol family
Sep 4 16:20:05.822589 kernel: Key type dns_resolver registered
Sep 4 16:20:05.822598 kernel: IPI shorthand broadcast: enabled
Sep 4 16:20:05.822607 kernel: sched_clock: Marking stable (2840002403, 151539643)->(3009656360, -18114314)
Sep 4 16:20:05.822618 kernel: registered taskstats version 1
Sep 4 16:20:05.822627 kernel: Loading compiled-in X.509 certificates
Sep 4 16:20:05.822637 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 250d2bafae7fa56c92cf187a0b8b7b2cdd349fc7'
Sep 4 16:20:05.822646 kernel: Demotion targets for Node 0: null
Sep 4 16:20:05.822654 kernel: Key type .fscrypt registered
Sep 4 16:20:05.822663 kernel: Key type fscrypt-provisioning registered
Sep 4 16:20:05.822672 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 16:20:05.822684 kernel: ima: Allocated hash algorithm: sha1
Sep 4 16:20:05.822693 kernel: ima: No architecture policies found
Sep 4 16:20:05.822701 kernel: clk: Disabling unused clocks
Sep 4 16:20:05.822710 kernel: Warning: unable to open an initial console.
Sep 4 16:20:05.822720 kernel: Freeing unused kernel image (initmem) memory: 54288K
Sep 4 16:20:05.822729 kernel: Write protecting the kernel read-only data: 24576k
Sep 4 16:20:05.822757 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 4 16:20:05.822769 kernel: Run /init as init process
Sep 4 16:20:05.822778 kernel: with arguments:
Sep 4 16:20:05.822789 kernel: /init
Sep 4 16:20:05.822798 kernel: with environment:
Sep 4 16:20:05.822807 kernel: HOME=/
Sep 4 16:20:05.822815 kernel: TERM=linux
Sep 4 16:20:05.822825 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 16:20:05.822837 systemd[1]: Successfully made /usr/ read-only.
Sep 4 16:20:05.822851 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 16:20:05.822862 systemd[1]: Detected virtualization kvm.
Sep 4 16:20:05.822871 systemd[1]: Detected architecture x86-64.
Sep 4 16:20:05.822880 systemd[1]: Running in initrd.
Sep 4 16:20:05.822890 systemd[1]: No hostname configured, using default hostname.
Sep 4 16:20:05.822902 systemd[1]: Hostname set to .
Sep 4 16:20:05.822912 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Sep 4 16:20:05.822921 systemd[1]: Queued start job for default target initrd.target.
Sep 4 16:20:05.822930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 16:20:05.822940 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 16:20:05.822950 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 16:20:05.822960 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 16:20:05.822972 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 16:20:05.822982 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 16:20:05.823001 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 16:20:05.823012 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 16:20:05.823021 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 16:20:05.823033 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 16:20:05.823042 systemd[1]: Reached target paths.target - Path Units.
Sep 4 16:20:05.823052 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 16:20:05.823061 systemd[1]: Reached target swap.target - Swaps.
Sep 4 16:20:05.823071 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 16:20:05.823080 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 16:20:05.823090 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 16:20:05.823102 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 16:20:05.823111 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 16:20:05.823121 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 16:20:05.823130 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 16:20:05.823140 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 16:20:05.823149 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 16:20:05.823159 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 16:20:05.823171 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 16:20:05.823181 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 16:20:05.823191 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 4 16:20:05.823201 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 16:20:05.823210 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 16:20:05.823220 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 16:20:05.823232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 16:20:05.823241 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 16:20:05.823251 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 16:20:05.823261 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 16:20:05.823273 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 16:20:05.823303 systemd-journald[218]: Collecting audit messages is disabled.
Sep 4 16:20:05.823327 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 16:20:05.823340 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:20:05.823350 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 16:20:05.823359 systemd-journald[218]: Journal started
Sep 4 16:20:05.823378 systemd-journald[218]: Runtime Journal (/run/log/journal/b12b51077ada4c00b0d60b687f3cbfe2) is 6M, max 48.5M, 42.4M free.
Sep 4 16:20:05.805066 systemd-modules-load[221]: Inserted module 'overlay'
Sep 4 16:20:05.827800 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 16:20:05.831773 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 16:20:05.831820 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 16:20:05.833511 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 4 16:20:05.834431 kernel: Bridge firewalling registered
Sep 4 16:20:05.839974 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 16:20:05.843814 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 16:20:05.846872 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 16:20:05.849619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 16:20:05.856855 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 16:20:05.857374 systemd-tmpfiles[246]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 4 16:20:05.861877 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 16:20:05.863253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 16:20:05.866249 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 16:20:05.875880 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 16:20:05.892639 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39929ed91cc8dec12f10b74359379a21a9960032f4b779521fabb4147461485b
Sep 4 16:20:05.907231 systemd-resolved[261]: Positive Trust Anchors:
Sep 4 16:20:05.907245 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 16:20:05.907249 systemd-resolved[261]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Sep 4 16:20:05.907279 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 16:20:05.909682 systemd-resolved[261]: Defaulting to hostname 'linux'.
Sep 4 16:20:05.910698 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 16:20:05.911194 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 16:20:05.994768 kernel: SCSI subsystem initialized
Sep 4 16:20:06.002783 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 16:20:06.013773 kernel: iscsi: registered transport (tcp)
Sep 4 16:20:06.034773 kernel: iscsi: registered transport (qla4xxx)
Sep 4 16:20:06.034794 kernel: QLogic iSCSI HBA Driver
Sep 4 16:20:06.053651 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 16:20:06.083959 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 16:20:06.085204 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 16:20:06.138381 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 16:20:06.140375 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 16:20:06.201772 kernel: raid6: avx2x4 gen() 30140 MB/s
Sep 4 16:20:06.218762 kernel: raid6: avx2x2 gen() 30821 MB/s
Sep 4 16:20:06.235841 kernel: raid6: avx2x1 gen() 25769 MB/s
Sep 4 16:20:06.235870 kernel: raid6: using algorithm avx2x2 gen() 30821 MB/s
Sep 4 16:20:06.253872 kernel: raid6: .... xor() 19932 MB/s, rmw enabled
Sep 4 16:20:06.253908 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 16:20:06.273766 kernel: xor: automatically using best checksumming function avx
Sep 4 16:20:06.455792 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 16:20:06.465456 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 16:20:06.468320 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 16:20:06.493840 systemd-udevd[474]: Using default interface naming scheme 'v257'.
Sep 4 16:20:06.499236 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 16:20:06.502830 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 16:20:06.531313 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Sep 4 16:20:06.562957 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 16:20:06.566530 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 16:20:06.645229 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 16:20:06.646484 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 16:20:06.685783 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 4 16:20:06.688282 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 16:20:06.693078 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 16:20:06.693102 kernel: GPT:9289727 != 19775487
Sep 4 16:20:06.693113 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 16:20:06.693124 kernel: GPT:9289727 != 19775487
Sep 4 16:20:06.693134 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 16:20:06.693145 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 16:20:06.707610 kernel: libata version 3.00 loaded.
Sep 4 16:20:06.717356 kernel: ahci 0000:00:1f.2: version 3.0
Sep 4 16:20:06.717598 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 4 16:20:06.717612 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 16:20:06.722095 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 4 16:20:06.722302 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 4 16:20:06.722496 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 4 16:20:06.723443 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 16:20:06.723619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:20:06.726706 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 16:20:06.730758 kernel: scsi host0: ahci
Sep 4 16:20:06.732795 kernel: AES CTR mode by8 optimization enabled
Sep 4 16:20:06.734751 kernel: scsi host1: ahci
Sep 4 16:20:06.733619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 16:20:06.738987 kernel: scsi host2: ahci
Sep 4 16:20:06.739209 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 4 16:20:06.746796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 16:20:06.747420 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:20:06.750724 kernel: scsi host3: ahci
Sep 4 16:20:06.753876 kernel: scsi host4: ahci
Sep 4 16:20:06.755788 kernel: scsi host5: ahci
Sep 4 16:20:06.760002 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 1
Sep 4 16:20:06.760025 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 1
Sep 4 16:20:06.760037 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 1
Sep 4 16:20:06.763022 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 1
Sep 4 16:20:06.763044 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 1
Sep 4 16:20:06.763055 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 1
Sep 4 16:20:06.777478 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 16:20:06.800669 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 16:20:06.809345 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 16:20:06.817307 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 16:20:06.817588 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 16:20:06.822981 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 16:20:06.824125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 16:20:06.846423 disk-uuid[638]: Primary Header is updated.
Sep 4 16:20:06.846423 disk-uuid[638]: Secondary Entries is updated.
Sep 4 16:20:06.846423 disk-uuid[638]: Secondary Header is updated.
Sep 4 16:20:06.850770 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 16:20:06.854759 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 16:20:07.495856 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:20:07.588773 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 4 16:20:07.588853 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 4 16:20:07.589762 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 4 16:20:07.589798 kernel: ata3.00: LPM support broken, forcing max_power
Sep 4 16:20:07.591003 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 4 16:20:07.591059 kernel: ata3.00: applying bridge limits
Sep 4 16:20:07.596301 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 4 16:20:07.596760 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 4 16:20:07.597771 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 4 16:20:07.598880 kernel: ata3.00: LPM support broken, forcing max_power
Sep 4 16:20:07.598900 kernel: ata3.00: configured for UDMA/100
Sep 4 16:20:07.599771 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 4 16:20:07.653305 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 4 16:20:07.653557 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 16:20:07.671801 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 4 16:20:07.856452 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 16:20:07.857405 disk-uuid[641]: The operation has completed successfully.
Sep 4 16:20:08.083031 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 16:20:08.083202 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 16:20:08.085544 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 16:20:08.095981 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 16:20:08.098984 sh[664]: Success
Sep 4 16:20:08.099124 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 16:20:08.100804 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 16:20:08.103185 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 16:20:08.106315 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 16:20:08.118770 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 16:20:08.118818 kernel: device-mapper: uevent: version 1.0.3
Sep 4 16:20:08.118831 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 4 16:20:08.128762 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 4 16:20:08.134975 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 16:20:08.163182 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 16:20:08.166147 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 16:20:08.180580 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 16:20:08.186798 kernel: BTRFS: device fsid ac7b5b49-8d71-4968-afd7-5e4410595bf4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (686)
Sep 4 16:20:08.186834 kernel: BTRFS info (device dm-0): first mount of filesystem ac7b5b49-8d71-4968-afd7-5e4410595bf4
Sep 4 16:20:08.186846 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 16:20:08.193186 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 16:20:08.193214 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 4 16:20:08.194383 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 16:20:08.195425 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 16:20:08.196470 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 16:20:08.198179 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 16:20:08.199335 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 16:20:08.224768 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (716)
Sep 4 16:20:08.226777 kernel: BTRFS info (device vda6): first mount of filesystem c498a12e-1387-4e64-bf04-402560df6433
Sep 4 16:20:08.226823 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 16:20:08.229807 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 16:20:08.229835 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 16:20:08.267778 kernel: BTRFS info (device vda6): last unmount of filesystem c498a12e-1387-4e64-bf04-402560df6433
Sep 4 16:20:08.268771 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 16:20:08.272324 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 16:20:08.338515 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 16:20:08.342314 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 16:20:08.386398 systemd-networkd[855]: lo: Link UP
Sep 4 16:20:08.387201 systemd-networkd[855]: lo: Gained carrier
Sep 4 16:20:08.389076 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 16:20:08.389239 ignition[797]: Ignition 2.22.0
Sep 4 16:20:08.390176 systemd-networkd[855]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 16:20:08.389247 ignition[797]: Stage: fetch-offline
Sep 4 16:20:08.390183 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 16:20:08.389289 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Sep 4 16:20:08.390908 systemd-networkd[855]: eth0: Link UP
Sep 4 16:20:08.389298 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:20:08.393367 systemd[1]: Reached target network.target - Network.
Sep 4 16:20:08.389390 ignition[797]: parsed url from cmdline: ""
Sep 4 16:20:08.394141 systemd-networkd[855]: eth0: Gained carrier
Sep 4 16:20:08.389394 ignition[797]: no config URL provided
Sep 4 16:20:08.394154 systemd-networkd[855]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 16:20:08.389399 ignition[797]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 16:20:08.389407 ignition[797]: no config at "/usr/lib/ignition/user.ign"
Sep 4 16:20:08.389429 ignition[797]: op(1): [started] loading QEMU firmware config module
Sep 4 16:20:08.389433 ignition[797]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 16:20:08.400287 ignition[797]: op(1): [finished] loading QEMU firmware config module
Sep 4 16:20:08.416814 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 16:20:08.454549 ignition[797]: parsing config with SHA512: 4c5657f54dca3e5709768b01256c368230c4216e039b2a6855f294191bd69f0e91632ace01550714b07f83c0f060b4b156ae2273a7a060187e47acbe245a3a6d
Sep 4 16:20:08.460952 unknown[797]: fetched base config from "system"
Sep 4 16:20:08.460965 unknown[797]: fetched user config from "qemu"
Sep 4 16:20:08.461327 ignition[797]: fetch-offline: fetch-offline passed
Sep 4 16:20:08.461378 ignition[797]: Ignition finished successfully
Sep 4 16:20:08.464487 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 16:20:08.465820 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 16:20:08.468269 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 16:20:08.507063 ignition[869]: Ignition 2.22.0
Sep 4 16:20:08.507595 ignition[869]: Stage: kargs
Sep 4 16:20:08.507760 ignition[869]: no configs at "/usr/lib/ignition/base.d"
Sep 4 16:20:08.507770 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:20:08.508794 ignition[869]: kargs: kargs passed
Sep 4 16:20:08.508839 ignition[869]: Ignition finished successfully
Sep 4 16:20:08.513334 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 16:20:08.515612 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 16:20:08.555154 ignition[877]: Ignition 2.22.0
Sep 4 16:20:08.555165 ignition[877]: Stage: disks
Sep 4 16:20:08.555317 ignition[877]: no configs at "/usr/lib/ignition/base.d"
Sep 4 16:20:08.555327 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:20:08.557902 ignition[877]: disks: disks passed
Sep 4 16:20:08.557957 ignition[877]: Ignition finished successfully
Sep 4 16:20:08.562432 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 16:20:08.564473 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 16:20:08.565046 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 16:20:08.565368 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 16:20:08.565690 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 16:20:08.566179 systemd[1]: Reached target basic.target - Basic System.
Sep 4 16:20:08.567481 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 16:20:08.597625 systemd-fsck[887]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 4 16:20:08.606167 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 16:20:08.607811 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 16:20:08.719756 kernel: EXT4-fs (vda9): mounted filesystem 5b9a7850-c07f-470b-a91c-362c3904243c r/w with ordered data mode. Quota mode: none.
Sep 4 16:20:08.720281 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 16:20:08.722263 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 16:20:08.725845 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 16:20:08.727483 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 16:20:08.730242 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 16:20:08.730308 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 16:20:08.732282 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 16:20:08.741266 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 16:20:08.745035 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 16:20:08.750752 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (895)
Sep 4 16:20:08.750778 kernel: BTRFS info (device vda6): first mount of filesystem c498a12e-1387-4e64-bf04-402560df6433
Sep 4 16:20:08.750790 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 16:20:08.754498 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 16:20:08.754525 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 16:20:08.757059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 16:20:08.788498 initrd-setup-root[919]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 16:20:08.792706 initrd-setup-root[926]: cut: /sysroot/etc/group: No such file or directory
Sep 4 16:20:08.797123 initrd-setup-root[933]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 16:20:08.802896 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 16:20:08.905451 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 16:20:08.908683 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 16:20:08.910413 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 16:20:08.935817 kernel: BTRFS info (device vda6): last unmount of filesystem c498a12e-1387-4e64-bf04-402560df6433
Sep 4 16:20:08.951257 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 16:20:08.971854 ignition[1008]: INFO : Ignition 2.22.0
Sep 4 16:20:08.971854 ignition[1008]: INFO : Stage: mount
Sep 4 16:20:08.973721 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 16:20:08.973721 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:20:08.973721 ignition[1008]: INFO : mount: mount passed
Sep 4 16:20:08.973721 ignition[1008]: INFO : Ignition finished successfully
Sep 4 16:20:08.979712 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 16:20:08.982093 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 16:20:09.185406 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 16:20:09.187053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 16:20:09.217259 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1021)
Sep 4 16:20:09.217303 kernel: BTRFS info (device vda6): first mount of filesystem c498a12e-1387-4e64-bf04-402560df6433
Sep 4 16:20:09.217316 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 16:20:09.221070 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 16:20:09.221137 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 16:20:09.222797 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 16:20:09.266923 ignition[1038]: INFO : Ignition 2.22.0
Sep 4 16:20:09.266923 ignition[1038]: INFO : Stage: files
Sep 4 16:20:09.268720 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 16:20:09.268720 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:20:09.268720 ignition[1038]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 16:20:09.272276 ignition[1038]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 16:20:09.272276 ignition[1038]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 16:20:09.276283 ignition[1038]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 16:20:09.277662 ignition[1038]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 16:20:09.279216 unknown[1038]: wrote ssh authorized keys file for user: core
Sep 4 16:20:09.280351 ignition[1038]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 16:20:09.281759 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 16:20:09.281759 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 4 16:20:09.353295 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 16:20:09.498971 systemd-networkd[855]: eth0: Gained IPv6LL
Sep 4 16:20:09.763796 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 16:20:09.766122 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 16:20:09.766122 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 4 16:20:09.887264 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 16:20:10.351355 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 16:20:10.351355 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 16:20:10.355654 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 16:20:10.355654 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 16:20:10.355654 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 16:20:10.355654 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 16:20:10.355654 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 16:20:10.355654 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 16:20:10.355654 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 16:20:10.368002 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 16:20:10.368002 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 16:20:10.368002 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 16:20:10.368002 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 16:20:10.368002 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 16:20:10.368002 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 4 16:20:11.089087 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 16:20:11.609340 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 16:20:11.609340 ignition[1038]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 16:20:11.613230 ignition[1038]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 16:20:11.615677 ignition[1038]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 16:20:11.615677 ignition[1038]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 16:20:11.615677 ignition[1038]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 4 16:20:11.620193 ignition[1038]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 16:20:11.620193 ignition[1038]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 16:20:11.620193 ignition[1038]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 4 16:20:11.620193 ignition[1038]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 16:20:11.651235 ignition[1038]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 16:20:11.659918 ignition[1038]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 16:20:11.661720 ignition[1038]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 16:20:11.661720 ignition[1038]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 16:20:11.661720 ignition[1038]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 16:20:11.661720 ignition[1038]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 16:20:11.661720 ignition[1038]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 16:20:11.661720 ignition[1038]: INFO : files: files passed
Sep 4 16:20:11.661720 ignition[1038]: INFO : Ignition finished successfully
Sep 4 16:20:11.671036 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 16:20:11.675227 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 16:20:11.676887 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 16:20:11.710338 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 16:20:11.711281 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 16:20:11.713902 initrd-setup-root-after-ignition[1067]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 16:20:11.716560 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 16:20:11.718473 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 16:20:11.718961 initrd-setup-root-after-ignition[1069]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 16:20:11.718913 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 16:20:11.719648 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 16:20:11.726178 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 16:20:11.757485 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 16:20:11.757644 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 16:20:11.759617 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 16:20:11.761483 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 16:20:11.762277 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 16:20:11.763290 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 16:20:11.789437 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 16:20:11.790973 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 16:20:11.815657 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 16:20:11.816163 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 16:20:11.818570 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 16:20:11.819095 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 16:20:11.819202 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 16:20:11.824374 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 16:20:11.827268 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 16:20:11.827551 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 16:20:11.829506 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 16:20:11.830046 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 16:20:11.834042 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 16:20:11.836050 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 16:20:11.838231 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 16:20:11.838588 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 16:20:11.839133 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 16:20:11.844393 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 16:20:11.846328 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 16:20:11.846452 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 16:20:11.849590 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 16:20:11.851608 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 16:20:11.852181 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 16:20:11.855078 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 16:20:11.855668 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 16:20:11.855785 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 16:20:11.860463 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 16:20:11.860575 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 16:20:11.862613 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 16:20:11.864361 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 16:20:11.868829 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 16:20:11.869308 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 16:20:11.872072 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 16:20:11.874060 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 16:20:11.874143 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 16:20:11.875730 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 16:20:11.875835 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 16:20:11.877531 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 16:20:11.877645 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 16:20:11.879439 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 16:20:11.879539 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 16:20:11.884343 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 16:20:11.885035 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 16:20:11.885141 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 16:20:11.889897 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 16:20:11.890156 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 16:20:11.890256 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 16:20:11.890580 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 16:20:11.890674 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 16:20:11.894980 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 16:20:11.895077 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 16:20:11.904324 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 16:20:11.904441 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 16:20:11.928919 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 16:20:11.989468 ignition[1093]: INFO : Ignition 2.22.0
Sep 4 16:20:11.989468 ignition[1093]: INFO : Stage: umount
Sep 4 16:20:11.991486 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 16:20:11.991486 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:20:11.991486 ignition[1093]: INFO : umount: umount passed
Sep 4 16:20:11.991486 ignition[1093]: INFO : Ignition finished successfully
Sep 4 16:20:11.994031 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 16:20:11.994169 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 16:20:11.995262 systemd[1]: Stopped target network.target - Network.
Sep 4 16:20:11.997714 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 16:20:11.997855 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 16:20:11.998190 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 16:20:11.998259 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 16:20:11.998532 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 16:20:11.998613 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 16:20:11.999024 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 16:20:11.999077 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 16:20:11.999461 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 16:20:12.006929 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 16:20:12.013614 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 16:20:12.013788 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 16:20:12.019522 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 16:20:12.019653 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 16:20:12.024913 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 16:20:12.027131 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 16:20:12.027191 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 16:20:12.028725 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 16:20:12.030476 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 16:20:12.030538 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 16:20:12.031192 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 16:20:12.031249 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 16:20:12.031473 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 16:20:12.031518 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 16:20:12.031970 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 16:20:12.065432 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 16:20:12.067915 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 16:20:12.068727 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 16:20:12.068785 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 16:20:12.070790 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 16:20:12.070842 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 16:20:12.071237 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 16:20:12.071282 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 16:20:12.075812 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 16:20:12.075886 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 16:20:12.076729 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 16:20:12.076801 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 16:20:12.082188 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 16:20:12.082780 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 16:20:12.082866 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 16:20:12.083198 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 16:20:12.083253 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 16:20:12.083506 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 16:20:12.083569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:20:12.091449 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 16:20:12.091609 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 16:20:12.109319 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 16:20:12.109466 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 16:20:12.164252 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 16:20:12.164425 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 16:20:12.165840 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 16:20:12.168874 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 16:20:12.168978 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 16:20:12.170647 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 16:20:12.191152 systemd[1]: Switching root.
Sep 4 16:20:12.223442 systemd-journald[218]: Journal stopped
Sep 4 16:20:13.443691 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Sep 4 16:20:13.443798 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 16:20:13.443814 kernel: SELinux: policy capability open_perms=1
Sep 4 16:20:13.443837 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 16:20:13.443853 kernel: SELinux: policy capability always_check_network=0
Sep 4 16:20:13.443866 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 16:20:13.443881 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 16:20:13.443896 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 16:20:13.443908 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 16:20:13.443925 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 16:20:13.443937 kernel: audit: type=1403 audit(1757002812.653:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 16:20:13.443953 systemd[1]: Successfully loaded SELinux policy in 114.928ms.
Sep 4 16:20:13.443977 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.487ms.
Sep 4 16:20:13.443993 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 16:20:13.444007 systemd[1]: Detected virtualization kvm.
Sep 4 16:20:13.444020 systemd[1]: Detected architecture x86-64.
Sep 4 16:20:13.444032 systemd[1]: Detected first boot.
Sep 4 16:20:13.444045 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Sep 4 16:20:13.444058 zram_generator::config[1141]: No configuration found.
Sep 4 16:20:13.444074 kernel: Guest personality initialized and is inactive
Sep 4 16:20:13.444095 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 4 16:20:13.444107 kernel: Initialized host personality
Sep 4 16:20:13.444119 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 16:20:13.444131 systemd[1]: Populated /etc with preset unit settings.
Sep 4 16:20:13.444146 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 16:20:13.444159 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 16:20:13.444172 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 16:20:13.444190 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 16:20:13.444203 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 16:20:13.444217 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 16:20:13.444229 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 16:20:13.444242 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 16:20:13.444255 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 16:20:13.444271 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 16:20:13.444283 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 16:20:13.444296 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 16:20:13.444309 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 16:20:13.444322 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 16:20:13.444335 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 16:20:13.444349 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 16:20:13.444364 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 16:20:13.444380 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 16:20:13.444392 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 16:20:13.444408 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 16:20:13.444425 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 16:20:13.444438 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 16:20:13.444453 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 16:20:13.444466 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 16:20:13.444479 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 16:20:13.444492 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 16:20:13.444505 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 16:20:13.444517 systemd[1]: Reached target swap.target - Swaps.
Sep 4 16:20:13.444530 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 16:20:13.444543 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 16:20:13.444558 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 16:20:13.444571 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 16:20:13.444584 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 16:20:13.444597 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 16:20:13.444610 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 16:20:13.444624 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 16:20:13.444637 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 16:20:13.444652 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 16:20:13.444665 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 16:20:13.444678 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 16:20:13.444691 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 16:20:13.444704 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 16:20:13.444717 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 16:20:13.444730 systemd[1]: Reached target machines.target - Containers.
Sep 4 16:20:13.444800 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 16:20:13.444813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 16:20:13.444827 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 16:20:13.444842 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 16:20:13.444855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 16:20:13.444867 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 16:20:13.444883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 16:20:13.444896 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 16:20:13.444909 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 16:20:13.444921 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 16:20:13.444934 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 16:20:13.444949 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 16:20:13.444963 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 16:20:13.444978 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 16:20:13.444991 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 16:20:13.445007 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 16:20:13.445020 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 16:20:13.445033 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 16:20:13.445048 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 16:20:13.445060 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 16:20:13.445073 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 16:20:13.445086 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 16:20:13.445099 kernel: loop: module loaded
Sep 4 16:20:13.445112 systemd[1]: Stopped verity-setup.service.
Sep 4 16:20:13.445127 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 16:20:13.445140 kernel: fuse: init (API version 7.41)
Sep 4 16:20:13.445153 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 16:20:13.445166 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 16:20:13.445179 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 16:20:13.445192 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 16:20:13.445205 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 16:20:13.445220 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 16:20:13.445233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 16:20:13.445246 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 16:20:13.445259 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 16:20:13.445275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 16:20:13.445291 kernel: ACPI: bus type drm_connector registered
Sep 4 16:20:13.445304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 16:20:13.445317 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 16:20:13.445329 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 16:20:13.445342 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 16:20:13.445355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 16:20:13.445370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 16:20:13.445399 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 16:20:13.445430 systemd-journald[1209]: Collecting audit messages is disabled.
Sep 4 16:20:13.445452 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 16:20:13.445465 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 16:20:13.445478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 16:20:13.445494 systemd-journald[1209]: Journal started Sep 4 16:20:13.445517 systemd-journald[1209]: Runtime Journal (/run/log/journal/b12b51077ada4c00b0d60b687f3cbfe2) is 6M, max 48.5M, 42.4M free. Sep 4 16:20:13.184932 systemd[1]: Queued start job for default target multi-user.target. Sep 4 16:20:13.207800 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 16:20:13.208309 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 16:20:13.448840 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 16:20:13.451393 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 16:20:13.453185 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 16:20:13.455913 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 16:20:13.457599 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 16:20:13.472222 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 16:20:13.473721 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Sep 4 16:20:13.475011 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 16:20:13.475040 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 16:20:13.476950 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 4 16:20:13.478503 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 16:20:13.480055 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 16:20:13.482019 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Sep 4 16:20:13.483187 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 16:20:13.485903 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 16:20:13.487065 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 16:20:13.493193 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 16:20:13.497911 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 16:20:13.500285 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 16:20:13.501517 systemd-journald[1209]: Time spent on flushing to /var/log/journal/b12b51077ada4c00b0d60b687f3cbfe2 is 17.893ms for 1064 entries. Sep 4 16:20:13.501517 systemd-journald[1209]: System Journal (/var/log/journal/b12b51077ada4c00b0d60b687f3cbfe2) is 8M, max 195.6M, 187.6M free. Sep 4 16:20:13.534587 systemd-journald[1209]: Received client request to flush runtime journal. Sep 4 16:20:13.534644 kernel: loop0: detected capacity change from 0 to 128016 Sep 4 16:20:13.502234 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 16:20:13.504181 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 16:20:13.506701 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 16:20:13.513340 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 16:20:13.538867 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 16:20:13.543242 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 16:20:13.548302 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Sep 4 16:20:13.548787 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 16:20:13.552920 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 16:20:13.556971 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 16:20:13.560933 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 16:20:13.564766 kernel: loop1: detected capacity change from 0 to 111000 Sep 4 16:20:13.574871 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 16:20:13.592197 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Sep 4 16:20:13.592215 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Sep 4 16:20:13.592786 kernel: loop2: detected capacity change from 0 to 224512 Sep 4 16:20:13.597930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 16:20:13.622222 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 16:20:13.627252 kernel: loop3: detected capacity change from 0 to 128016 Sep 4 16:20:13.633806 kernel: loop4: detected capacity change from 0 to 111000 Sep 4 16:20:13.642682 kernel: loop5: detected capacity change from 0 to 224512 Sep 4 16:20:13.646076 (sd-merge)[1280]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Sep 4 16:20:13.649688 (sd-merge)[1280]: Merged extensions into '/usr'. Sep 4 16:20:13.654318 systemd[1]: Reload requested from client PID 1255 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 16:20:13.654337 systemd[1]: Reloading... Sep 4 16:20:13.737790 zram_generator::config[1310]: No configuration found. Sep 4 16:20:13.779249 systemd-resolved[1270]: Positive Trust Anchors: Sep 4 16:20:13.779491 systemd-resolved[1270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 16:20:13.779497 systemd-resolved[1270]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Sep 4 16:20:13.779539 systemd-resolved[1270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 16:20:13.786489 systemd-resolved[1270]: Defaulting to hostname 'linux'. Sep 4 16:20:13.970011 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 16:20:13.970272 systemd[1]: Reloading finished in 315 ms. Sep 4 16:20:14.002791 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 16:20:14.004688 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 16:20:14.009452 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 16:20:14.023554 systemd[1]: Starting ensure-sysext.service... Sep 4 16:20:14.026012 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 16:20:14.039565 systemd[1]: Reload requested from client PID 1347 ('systemctl') (unit ensure-sysext.service)... Sep 4 16:20:14.039584 systemd[1]: Reloading... Sep 4 16:20:14.051352 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 4 16:20:14.051397 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 4 16:20:14.051803 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 4 16:20:14.052135 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 16:20:14.053146 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 16:20:14.053453 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Sep 4 16:20:14.053548 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Sep 4 16:20:14.060645 systemd-tmpfiles[1349]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 16:20:14.060655 systemd-tmpfiles[1349]: Skipping /boot Sep 4 16:20:14.072687 systemd-tmpfiles[1349]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 16:20:14.072700 systemd-tmpfiles[1349]: Skipping /boot Sep 4 16:20:14.112786 zram_generator::config[1382]: No configuration found. Sep 4 16:20:14.295159 systemd[1]: Reloading finished in 255 ms. Sep 4 16:20:14.320948 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 16:20:14.354202 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 16:20:14.366047 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 16:20:14.368840 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 16:20:14.371373 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 16:20:14.390506 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 16:20:14.395019 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 16:20:14.400907 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 16:20:14.411304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 16:20:14.415920 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Sep 4 16:20:14.422469 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 16:20:14.424271 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 16:20:14.433587 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:20:14.434083 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 16:20:14.436868 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 16:20:14.440962 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 16:20:14.445141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 16:20:14.446567 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 16:20:14.446796 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 16:20:14.446929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:20:14.453459 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:20:14.453779 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 16:20:14.457261 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 16:20:14.458646 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 4 16:20:14.458822 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 16:20:14.458951 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:20:14.462251 systemd[1]: Finished ensure-sysext.service. Sep 4 16:20:14.470310 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 16:20:14.472574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 16:20:14.472846 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 16:20:14.474449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 16:20:14.474657 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 16:20:14.476273 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 16:20:14.476487 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 16:20:14.478033 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 16:20:14.478240 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 16:20:14.483170 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 16:20:14.483244 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 16:20:14.495925 systemd-udevd[1430]: Using default interface naming scheme 'v257'. Sep 4 16:20:14.523671 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 4 16:20:14.570542 augenrules[1460]: No rules Sep 4 16:20:14.573052 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 16:20:14.575575 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 16:20:14.576148 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 16:20:14.578355 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 16:20:14.581310 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 16:20:14.584255 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 16:20:14.590999 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 16:20:14.661822 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 16:20:14.663673 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 16:20:14.789010 systemd-networkd[1476]: lo: Link UP Sep 4 16:20:14.789365 systemd-networkd[1476]: lo: Gained carrier Sep 4 16:20:14.792954 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 16:20:14.794337 systemd[1]: Reached target network.target - Network. Sep 4 16:20:14.798236 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 16:20:14.802856 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 16:20:14.809906 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 16:20:14.839252 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 4 16:20:14.867150 systemd-networkd[1476]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Sep 4 16:20:14.867162 systemd-networkd[1476]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 16:20:14.868197 systemd-networkd[1476]: eth0: Link UP Sep 4 16:20:14.869189 systemd-networkd[1476]: eth0: Gained carrier Sep 4 16:20:14.869204 systemd-networkd[1476]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Sep 4 16:20:14.900819 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 4 16:20:14.908041 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 16:20:14.912824 systemd-networkd[1476]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 16:20:14.914578 systemd-timesyncd[1449]: Network configuration changed, trying to establish connection. Sep 4 16:20:16.358415 systemd-timesyncd[1449]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 16:20:16.358452 systemd-timesyncd[1449]: Initial clock synchronization to Thu 2025-09-04 16:20:16.358344 UTC. Sep 4 16:20:16.358865 systemd-resolved[1270]: Clock change detected. Flushing caches. Sep 4 16:20:16.363207 kernel: ACPI: button: Power Button [PWRF] Sep 4 16:20:16.370730 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 16:20:16.375371 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 16:20:16.381060 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 4 16:20:16.742749 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 4 16:20:16.743162 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 4 16:20:16.743383 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 4 16:20:16.774217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 16:20:16.783569 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 16:20:16.783878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 16:20:16.788741 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 16:20:16.911873 kernel: kvm_amd: TSC scaling supported Sep 4 16:20:16.911983 kernel: kvm_amd: Nested Virtualization enabled Sep 4 16:20:16.911999 kernel: kvm_amd: Nested Paging enabled Sep 4 16:20:16.912012 kernel: kvm_amd: LBR virtualization supported Sep 4 16:20:16.913023 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 16:20:16.913046 kernel: kvm_amd: Virtual GIF supported Sep 4 16:20:16.928686 ldconfig[1422]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 16:20:16.941124 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 16:20:16.944846 kernel: EDAC MC: Ver: 3.0.0 Sep 4 16:20:16.946616 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 16:20:16.967146 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 16:20:16.980066 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 16:20:16.981503 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 16:20:16.982720 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 16:20:16.983983 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Sep 4 16:20:16.985249 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 4 16:20:16.986572 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 16:20:16.987768 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 16:20:16.989023 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 16:20:16.990284 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 16:20:16.990324 systemd[1]: Reached target paths.target - Path Units. Sep 4 16:20:16.991217 systemd[1]: Reached target timers.target - Timer Units. Sep 4 16:20:16.993088 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 16:20:16.996067 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 16:20:16.999553 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 16:20:17.001029 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 16:20:17.002272 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 16:20:17.006151 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 16:20:17.007514 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 16:20:17.009380 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 16:20:17.011241 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 16:20:17.012211 systemd[1]: Reached target basic.target - Basic System. Sep 4 16:20:17.013167 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 16:20:17.013197 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Sep 4 16:20:17.014251 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 16:20:17.016291 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 16:20:17.018174 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 16:20:17.024650 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 16:20:17.027839 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 16:20:17.028866 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 16:20:17.030134 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 4 16:20:17.032834 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 16:20:17.034556 jq[1548]: false Sep 4 16:20:17.035034 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 16:20:17.038181 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 16:20:17.042125 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 16:20:17.048782 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 16:20:17.050804 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 16:20:17.051811 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 16:20:17.052930 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 16:20:17.056171 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 4 16:20:17.057092 oslogin_cache_refresh[1550]: Refreshing passwd entry cache Sep 4 16:20:17.060146 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Refreshing passwd entry cache Sep 4 16:20:17.063709 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 16:20:17.065807 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 16:20:17.066082 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 16:20:17.068362 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 16:20:17.068696 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 16:20:17.070537 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Failure getting users, quitting Sep 4 16:20:17.070528 oslogin_cache_refresh[1550]: Failure getting users, quitting Sep 4 16:20:17.071822 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 16:20:17.071822 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Refreshing group entry cache Sep 4 16:20:17.070554 oslogin_cache_refresh[1550]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 16:20:17.070623 oslogin_cache_refresh[1550]: Refreshing group entry cache Sep 4 16:20:17.074031 extend-filesystems[1549]: Found /dev/vda6 Sep 4 16:20:17.078617 oslogin_cache_refresh[1550]: Failure getting groups, quitting Sep 4 16:20:17.078773 extend-filesystems[1549]: Found /dev/vda9 Sep 4 16:20:17.079881 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Failure getting groups, quitting Sep 4 16:20:17.079881 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Sep 4 16:20:17.078628 oslogin_cache_refresh[1550]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 16:20:17.086035 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 16:20:17.089576 extend-filesystems[1549]: Checking size of /dev/vda9 Sep 4 16:20:17.086344 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 16:20:17.091539 tar[1569]: linux-amd64/LICENSE Sep 4 16:20:17.091539 tar[1569]: linux-amd64/helm Sep 4 16:20:17.093580 update_engine[1560]: I20250904 16:20:17.093505 1560 main.cc:92] Flatcar Update Engine starting Sep 4 16:20:17.093977 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 16:20:17.094800 jq[1563]: true Sep 4 16:20:17.096353 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 4 16:20:17.096705 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 4 16:20:17.106318 extend-filesystems[1549]: Resized partition /dev/vda9 Sep 4 16:20:17.110548 extend-filesystems[1596]: resize2fs 1.47.2 (1-Jan-2025) Sep 4 16:20:17.114617 jq[1592]: true Sep 4 16:20:17.125376 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 16:20:17.159430 dbus-daemon[1546]: [system] SELinux support is enabled Sep 4 16:20:17.159724 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 16:20:17.163682 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 16:20:17.166762 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 16:20:17.166802 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 4 16:20:17.168215 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 16:20:17.198973 update_engine[1560]: I20250904 16:20:17.175956 1560 update_check_scheduler.cc:74] Next update check in 2m4s Sep 4 16:20:17.168236 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 16:20:17.179258 systemd[1]: Started update-engine.service - Update Engine. Sep 4 16:20:17.184627 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 16:20:17.200216 extend-filesystems[1596]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 16:20:17.200216 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 16:20:17.200216 extend-filesystems[1596]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 16:20:17.204351 extend-filesystems[1549]: Resized filesystem in /dev/vda9 Sep 4 16:20:17.202708 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 16:20:17.204030 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 16:20:17.205587 systemd-logind[1558]: Watching system buttons on /dev/input/event2 (Power Button) Sep 4 16:20:17.205898 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 16:20:17.207777 systemd-logind[1558]: New seat seat0. Sep 4 16:20:17.219571 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 4 16:20:17.226311 bash[1614]: Updated "/home/core/.ssh/authorized_keys" Sep 4 16:20:17.266454 locksmithd[1615]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 16:20:17.369351 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 16:20:17.628747 containerd[1583]: time="2025-09-04T16:20:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 4 16:20:17.630405 containerd[1583]: time="2025-09-04T16:20:17.629754266Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 4 16:20:17.633426 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 16:20:17.638247 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 16:20:17.649544 containerd[1583]: time="2025-09-04T16:20:17.649488805Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.309µs" Sep 4 16:20:17.649544 containerd[1583]: time="2025-09-04T16:20:17.649530172Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 4 16:20:17.649618 containerd[1583]: time="2025-09-04T16:20:17.649553586Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 4 16:20:17.649853 containerd[1583]: time="2025-09-04T16:20:17.649821579Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 4 16:20:17.649853 containerd[1583]: time="2025-09-04T16:20:17.649845784Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 4 16:20:17.649898 containerd[1583]: time="2025-09-04T16:20:17.649880930Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 16:20:17.649977 containerd[1583]: time="2025-09-04T16:20:17.649958385Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 16:20:17.649977 containerd[1583]: time="2025-09-04T16:20:17.649973634Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 16:20:17.650310 containerd[1583]: time="2025-09-04T16:20:17.650261514Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 16:20:17.650310 containerd[1583]: time="2025-09-04T16:20:17.650282864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 16:20:17.650310 containerd[1583]: time="2025-09-04T16:20:17.650293895Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 16:20:17.650310 containerd[1583]: time="2025-09-04T16:20:17.650302150Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 4 16:20:17.650434 containerd[1583]: time="2025-09-04T16:20:17.650415002Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 4 16:20:17.651042 containerd[1583]: time="2025-09-04T16:20:17.650748888Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 16:20:17.651042 containerd[1583]: time="2025-09-04T16:20:17.650787761Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such 
file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 16:20:17.651042 containerd[1583]: time="2025-09-04T16:20:17.650797369Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 4 16:20:17.651042 containerd[1583]: time="2025-09-04T16:20:17.650840750Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 4 16:20:17.651238 containerd[1583]: time="2025-09-04T16:20:17.651148247Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 4 16:20:17.651238 containerd[1583]: time="2025-09-04T16:20:17.651229449Z" level=info msg="metadata content store policy set" policy=shared Sep 4 16:20:17.656587 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 16:20:17.663498 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 16:20:17.682909 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 16:20:17.683234 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 16:20:17.686470 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 16:20:17.742350 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 16:20:17.745178 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 16:20:17.747432 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 16:20:17.748765 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 4 16:20:17.854641 containerd[1583]: time="2025-09-04T16:20:17.854570476Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 4 16:20:17.854641 containerd[1583]: time="2025-09-04T16:20:17.854653682Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 4 16:20:17.854808 containerd[1583]: time="2025-09-04T16:20:17.854698616Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 4 16:20:17.854808 containerd[1583]: time="2025-09-04T16:20:17.854712723Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 4 16:20:17.854808 containerd[1583]: time="2025-09-04T16:20:17.854724806Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 4 16:20:17.854808 containerd[1583]: time="2025-09-04T16:20:17.854737018Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 4 16:20:17.854808 containerd[1583]: time="2025-09-04T16:20:17.854749051Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 4 16:20:17.854808 containerd[1583]: time="2025-09-04T16:20:17.854759881Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 4 16:20:17.854808 containerd[1583]: time="2025-09-04T16:20:17.854772766Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 4 16:20:17.854808 containerd[1583]: time="2025-09-04T16:20:17.854782323Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 4 16:20:17.854808 containerd[1583]: time="2025-09-04T16:20:17.854790960Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 4 
16:20:17.854808 containerd[1583]: time="2025-09-04T16:20:17.854804615Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 4 16:20:17.855001 containerd[1583]: time="2025-09-04T16:20:17.854983350Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 4 16:20:17.855022 containerd[1583]: time="2025-09-04T16:20:17.855008828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 4 16:20:17.855069 containerd[1583]: time="2025-09-04T16:20:17.855037242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 4 16:20:17.855069 containerd[1583]: time="2025-09-04T16:20:17.855056087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 4 16:20:17.855107 containerd[1583]: time="2025-09-04T16:20:17.855076245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 4 16:20:17.855107 containerd[1583]: time="2025-09-04T16:20:17.855094950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 4 16:20:17.855148 containerd[1583]: time="2025-09-04T16:20:17.855108495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 4 16:20:17.855148 containerd[1583]: time="2025-09-04T16:20:17.855120167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 4 16:20:17.855148 containerd[1583]: time="2025-09-04T16:20:17.855130426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 4 16:20:17.855148 containerd[1583]: time="2025-09-04T16:20:17.855139794Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 4 16:20:17.855225 containerd[1583]: time="2025-09-04T16:20:17.855183706Z" level=info 
msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 4 16:20:17.855313 containerd[1583]: time="2025-09-04T16:20:17.855288122Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 4 16:20:17.855313 containerd[1583]: time="2025-09-04T16:20:17.855311776Z" level=info msg="Start snapshots syncer" Sep 4 16:20:17.855367 containerd[1583]: time="2025-09-04T16:20:17.855349587Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 4 16:20:17.856528 containerd[1583]: time="2025-09-04T16:20:17.855865024Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableU
nprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 4 16:20:17.856528 containerd[1583]: time="2025-09-04T16:20:17.856240318Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 4 16:20:17.858500 containerd[1583]: time="2025-09-04T16:20:17.858473135Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 4 16:20:17.858638 containerd[1583]: time="2025-09-04T16:20:17.858608459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 4 16:20:17.858703 containerd[1583]: time="2025-09-04T16:20:17.858684752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 4 16:20:17.858725 containerd[1583]: time="2025-09-04T16:20:17.858714117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 4 16:20:17.858745 containerd[1583]: time="2025-09-04T16:20:17.858726360Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 4 16:20:17.858764 containerd[1583]: time="2025-09-04T16:20:17.858744434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 4 16:20:17.858764 containerd[1583]: time="2025-09-04T16:20:17.858759482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 4 16:20:17.858810 containerd[1583]: time="2025-09-04T16:20:17.858774701Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 Sep 4 16:20:17.858810 containerd[1583]: time="2025-09-04T16:20:17.858807392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 4 16:20:17.858847 containerd[1583]: time="2025-09-04T16:20:17.858819595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 4 16:20:17.858847 containerd[1583]: time="2025-09-04T16:20:17.858833761Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 4 16:20:17.858896 containerd[1583]: time="2025-09-04T16:20:17.858881170Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 16:20:17.858918 containerd[1583]: time="2025-09-04T16:20:17.858901328Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 16:20:17.858918 containerd[1583]: time="2025-09-04T16:20:17.858910215Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 16:20:17.858964 containerd[1583]: time="2025-09-04T16:20:17.858922788Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 16:20:17.858964 containerd[1583]: time="2025-09-04T16:20:17.858934761Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 4 16:20:17.858964 containerd[1583]: time="2025-09-04T16:20:17.858944369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 4 16:20:17.858964 containerd[1583]: time="2025-09-04T16:20:17.858960599Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 4 16:20:17.859077 containerd[1583]: 
time="2025-09-04T16:20:17.859060196Z" level=info msg="runtime interface created" Sep 4 16:20:17.859077 containerd[1583]: time="2025-09-04T16:20:17.859071347Z" level=info msg="created NRI interface" Sep 4 16:20:17.859122 containerd[1583]: time="2025-09-04T16:20:17.859083119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 4 16:20:17.859122 containerd[1583]: time="2025-09-04T16:20:17.859095532Z" level=info msg="Connect containerd service" Sep 4 16:20:17.859160 containerd[1583]: time="2025-09-04T16:20:17.859125488Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 16:20:17.864520 containerd[1583]: time="2025-09-04T16:20:17.864454552Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 16:20:17.971804 tar[1569]: linux-amd64/README.md Sep 4 16:20:17.999268 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 16:20:18.067455 containerd[1583]: time="2025-09-04T16:20:18.067390810Z" level=info msg="Start subscribing containerd event" Sep 4 16:20:18.067590 containerd[1583]: time="2025-09-04T16:20:18.067479216Z" level=info msg="Start recovering state" Sep 4 16:20:18.067590 containerd[1583]: time="2025-09-04T16:20:18.067562292Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 16:20:18.067688 containerd[1583]: time="2025-09-04T16:20:18.067639797Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 4 16:20:18.067688 containerd[1583]: time="2025-09-04T16:20:18.067643624Z" level=info msg="Start event monitor" Sep 4 16:20:18.067750 containerd[1583]: time="2025-09-04T16:20:18.067731680Z" level=info msg="Start cni network conf syncer for default" Sep 4 16:20:18.067796 containerd[1583]: time="2025-09-04T16:20:18.067751707Z" level=info msg="Start streaming server" Sep 4 16:20:18.067796 containerd[1583]: time="2025-09-04T16:20:18.067762868Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 4 16:20:18.067796 containerd[1583]: time="2025-09-04T16:20:18.067770262Z" level=info msg="runtime interface starting up..." Sep 4 16:20:18.067796 containerd[1583]: time="2025-09-04T16:20:18.067775912Z" level=info msg="starting plugins..." Sep 4 16:20:18.067796 containerd[1583]: time="2025-09-04T16:20:18.067796521Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 4 16:20:18.068173 containerd[1583]: time="2025-09-04T16:20:18.068058923Z" level=info msg="containerd successfully booted in 0.490963s" Sep 4 16:20:18.068190 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 16:20:18.301872 systemd-networkd[1476]: eth0: Gained IPv6LL Sep 4 16:20:18.305699 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 16:20:18.307710 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 16:20:18.310456 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 16:20:18.312794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:20:18.315093 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 16:20:18.350902 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 16:20:18.352634 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 16:20:18.352986 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Sep 4 16:20:18.356195 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 16:20:19.922910 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 16:20:19.926047 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:36498.service - OpenSSH per-connection server daemon (10.0.0.1:36498). Sep 4 16:20:19.930606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 16:20:19.932380 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 16:20:19.933098 systemd[1]: Startup finished in 2.900s (kernel) + 6.962s (initrd) + 5.949s (userspace) = 15.811s. Sep 4 16:20:19.942097 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 16:20:20.005144 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 36498 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:20:20.007097 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:20:20.021262 systemd-logind[1558]: New session 1 of user core. Sep 4 16:20:20.022858 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 16:20:20.024362 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 16:20:20.051103 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 16:20:20.055281 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 16:20:20.072061 (systemd)[1699]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 16:20:20.075776 systemd-logind[1558]: New session c1 of user core. Sep 4 16:20:20.223567 systemd[1699]: Queued start job for default target default.target. Sep 4 16:20:20.270712 systemd[1699]: Created slice app.slice - User Application Slice. 
Sep 4 16:20:20.270743 systemd[1699]: Reached target paths.target - Paths. Sep 4 16:20:20.270787 systemd[1699]: Reached target timers.target - Timers. Sep 4 16:20:20.275107 systemd[1699]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 16:20:20.293103 systemd[1699]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 16:20:20.293259 systemd[1699]: Reached target sockets.target - Sockets. Sep 4 16:20:20.293306 systemd[1699]: Reached target basic.target - Basic System. Sep 4 16:20:20.293356 systemd[1699]: Reached target default.target - Main User Target. Sep 4 16:20:20.293399 systemd[1699]: Startup finished in 208ms. Sep 4 16:20:20.293582 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 16:20:20.295922 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 16:20:20.365313 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:36508.service - OpenSSH per-connection server daemon (10.0.0.1:36508). Sep 4 16:20:20.452856 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 36508 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:20:20.454871 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:20:20.461490 systemd-logind[1558]: New session 2 of user core. Sep 4 16:20:20.474854 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 16:20:20.533446 sshd[1718]: Connection closed by 10.0.0.1 port 36508 Sep 4 16:20:20.533923 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Sep 4 16:20:20.546197 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:36508.service: Deactivated successfully. Sep 4 16:20:20.547924 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 16:20:20.548583 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit. Sep 4 16:20:20.551030 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:36514.service - OpenSSH per-connection server daemon (10.0.0.1:36514). 
Sep 4 16:20:20.552356 systemd-logind[1558]: Removed session 2. Sep 4 16:20:20.625419 kubelet[1689]: E0904 16:20:20.625360 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 16:20:20.629413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 16:20:20.629609 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 16:20:20.629980 systemd[1]: kubelet.service: Consumed 2.115s CPU time, 265.9M memory peak. Sep 4 16:20:20.658011 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 36514 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:20:20.659399 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:20:20.663575 systemd-logind[1558]: New session 3 of user core. Sep 4 16:20:20.673878 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 16:20:20.723268 sshd[1728]: Connection closed by 10.0.0.1 port 36514 Sep 4 16:20:20.723734 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Sep 4 16:20:20.738210 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:36514.service: Deactivated successfully. Sep 4 16:20:20.740095 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 16:20:20.740807 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit. Sep 4 16:20:20.743419 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:36516.service - OpenSSH per-connection server daemon (10.0.0.1:36516). Sep 4 16:20:20.744078 systemd-logind[1558]: Removed session 3. 
Sep 4 16:20:20.803033 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 36516 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:20:20.804521 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:20:20.809843 systemd-logind[1558]: New session 4 of user core. Sep 4 16:20:20.823785 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 16:20:20.878358 sshd[1737]: Connection closed by 10.0.0.1 port 36516 Sep 4 16:20:20.878774 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Sep 4 16:20:20.889161 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:36516.service: Deactivated successfully. Sep 4 16:20:20.891155 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 16:20:20.891835 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit. Sep 4 16:20:20.894492 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:36532.service - OpenSSH per-connection server daemon (10.0.0.1:36532). Sep 4 16:20:20.895065 systemd-logind[1558]: Removed session 4. Sep 4 16:20:20.946746 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 36532 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:20:20.947942 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:20:20.952019 systemd-logind[1558]: New session 5 of user core. Sep 4 16:20:20.965795 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 4 16:20:21.028888 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 16:20:21.029251 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 16:20:21.052442 sudo[1748]: pam_unix(sudo:session): session closed for user root Sep 4 16:20:21.054584 sshd[1747]: Connection closed by 10.0.0.1 port 36532 Sep 4 16:20:21.055026 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Sep 4 16:20:21.064247 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:36532.service: Deactivated successfully. Sep 4 16:20:21.066060 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 16:20:21.066879 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit. Sep 4 16:20:21.069512 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:36534.service - OpenSSH per-connection server daemon (10.0.0.1:36534). Sep 4 16:20:21.070297 systemd-logind[1558]: Removed session 5. Sep 4 16:20:21.118606 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 36534 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:20:21.119956 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:20:21.124270 systemd-logind[1558]: New session 6 of user core. Sep 4 16:20:21.133841 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 4 16:20:21.189185 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 16:20:21.189486 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 16:20:21.274070 sudo[1759]: pam_unix(sudo:session): session closed for user root Sep 4 16:20:21.282131 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 16:20:21.282493 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 16:20:21.293749 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 16:20:21.345210 augenrules[1781]: No rules Sep 4 16:20:21.346919 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 16:20:21.347236 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 16:20:21.348573 sudo[1758]: pam_unix(sudo:session): session closed for user root Sep 4 16:20:21.350295 sshd[1757]: Connection closed by 10.0.0.1 port 36534 Sep 4 16:20:21.350701 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Sep 4 16:20:21.363211 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:36534.service: Deactivated successfully. Sep 4 16:20:21.365096 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 16:20:21.365806 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit. Sep 4 16:20:21.368649 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:36550.service - OpenSSH per-connection server daemon (10.0.0.1:36550). Sep 4 16:20:21.369145 systemd-logind[1558]: Removed session 6. Sep 4 16:20:21.418855 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 36550 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:20:21.420101 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:20:21.424124 systemd-logind[1558]: New session 7 of user core. 
Sep 4 16:20:21.437770 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 16:20:21.490871 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 16:20:21.491180 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 16:20:22.200274 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 16:20:22.217942 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 16:20:22.644854 dockerd[1814]: time="2025-09-04T16:20:22.644759981Z" level=info msg="Starting up" Sep 4 16:20:22.645657 dockerd[1814]: time="2025-09-04T16:20:22.645606278Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 4 16:20:22.690846 dockerd[1814]: time="2025-09-04T16:20:22.690778192Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 4 16:20:23.917860 dockerd[1814]: time="2025-09-04T16:20:23.917762896Z" level=info msg="Loading containers: start." Sep 4 16:20:23.934713 kernel: Initializing XFRM netlink socket Sep 4 16:20:24.206243 systemd-networkd[1476]: docker0: Link UP Sep 4 16:20:24.212964 dockerd[1814]: time="2025-09-04T16:20:24.212911866Z" level=info msg="Loading containers: done." Sep 4 16:20:24.232401 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2770179851-merged.mount: Deactivated successfully. 
Sep 4 16:20:24.234390 dockerd[1814]: time="2025-09-04T16:20:24.234338549Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 16:20:24.234517 dockerd[1814]: time="2025-09-04T16:20:24.234498148Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 4 16:20:24.234638 dockerd[1814]: time="2025-09-04T16:20:24.234614426Z" level=info msg="Initializing buildkit" Sep 4 16:20:24.266619 dockerd[1814]: time="2025-09-04T16:20:24.266556078Z" level=info msg="Completed buildkit initialization" Sep 4 16:20:24.276282 dockerd[1814]: time="2025-09-04T16:20:24.276211856Z" level=info msg="Daemon has completed initialization" Sep 4 16:20:24.276472 dockerd[1814]: time="2025-09-04T16:20:24.276381915Z" level=info msg="API listen on /run/docker.sock" Sep 4 16:20:24.276562 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 16:20:25.217342 containerd[1583]: time="2025-09-04T16:20:25.217251080Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 16:20:25.967219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3830747433.mount: Deactivated successfully. 
Sep 4 16:20:27.874111 containerd[1583]: time="2025-09-04T16:20:27.874041300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:27.876937 containerd[1583]: time="2025-09-04T16:20:27.876910732Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 4 16:20:27.880598 containerd[1583]: time="2025-09-04T16:20:27.880525230Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:27.883589 containerd[1583]: time="2025-09-04T16:20:27.883541236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:27.884466 containerd[1583]: time="2025-09-04T16:20:27.884415406Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.66708175s" Sep 4 16:20:27.884511 containerd[1583]: time="2025-09-04T16:20:27.884469186Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 4 16:20:27.885764 containerd[1583]: time="2025-09-04T16:20:27.885708721Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 4 16:20:29.633359 containerd[1583]: time="2025-09-04T16:20:29.633276064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:29.634027 containerd[1583]: time="2025-09-04T16:20:29.633984312Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 4 16:20:29.635181 containerd[1583]: time="2025-09-04T16:20:29.635147774Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:29.637484 containerd[1583]: time="2025-09-04T16:20:29.637437267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:29.638336 containerd[1583]: time="2025-09-04T16:20:29.638276922Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.752538626s" Sep 4 16:20:29.638336 containerd[1583]: time="2025-09-04T16:20:29.638324221Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 4 16:20:29.638884 containerd[1583]: time="2025-09-04T16:20:29.638848394Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 4 16:20:30.795503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 16:20:30.797106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:20:31.450228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 16:20:31.454026 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 16:20:31.542373 containerd[1583]: time="2025-09-04T16:20:31.542320243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:31.543190 containerd[1583]: time="2025-09-04T16:20:31.543105746Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 4 16:20:31.544345 containerd[1583]: time="2025-09-04T16:20:31.544303592Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:31.547264 containerd[1583]: time="2025-09-04T16:20:31.547218769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:31.548219 containerd[1583]: time="2025-09-04T16:20:31.548172879Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.909291413s" Sep 4 16:20:31.548219 containerd[1583]: time="2025-09-04T16:20:31.548205820Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 4 16:20:31.549118 containerd[1583]: time="2025-09-04T16:20:31.549089137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 4 16:20:31.575336 kubelet[2103]: 
E0904 16:20:31.575273 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 16:20:31.581575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 16:20:31.581794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 16:20:31.582160 systemd[1]: kubelet.service: Consumed 284ms CPU time, 110.7M memory peak. Sep 4 16:20:32.442259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3083960543.mount: Deactivated successfully. Sep 4 16:20:33.087040 containerd[1583]: time="2025-09-04T16:20:33.086985421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:33.087714 containerd[1583]: time="2025-09-04T16:20:33.087683190Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 4 16:20:33.088764 containerd[1583]: time="2025-09-04T16:20:33.088710577Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:33.090417 containerd[1583]: time="2025-09-04T16:20:33.090381410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:33.090899 containerd[1583]: time="2025-09-04T16:20:33.090868664Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.541747577s" Sep 4 16:20:33.090940 containerd[1583]: time="2025-09-04T16:20:33.090898700Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 4 16:20:33.091314 containerd[1583]: time="2025-09-04T16:20:33.091291397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 16:20:33.716082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146066695.mount: Deactivated successfully. Sep 4 16:20:35.023243 containerd[1583]: time="2025-09-04T16:20:35.023149260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:35.024937 containerd[1583]: time="2025-09-04T16:20:35.024861290Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 4 16:20:35.026338 containerd[1583]: time="2025-09-04T16:20:35.026302744Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:35.030594 containerd[1583]: time="2025-09-04T16:20:35.030538858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:35.031896 containerd[1583]: time="2025-09-04T16:20:35.031826002Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.940506112s" Sep 4 16:20:35.031896 containerd[1583]: time="2025-09-04T16:20:35.031877198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 4 16:20:35.032437 containerd[1583]: time="2025-09-04T16:20:35.032402423Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 16:20:35.525197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070724493.mount: Deactivated successfully. Sep 4 16:20:35.531581 containerd[1583]: time="2025-09-04T16:20:35.531527510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 16:20:35.532227 containerd[1583]: time="2025-09-04T16:20:35.532178521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 4 16:20:35.533349 containerd[1583]: time="2025-09-04T16:20:35.533311456Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 16:20:35.535251 containerd[1583]: time="2025-09-04T16:20:35.535199477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 16:20:35.535792 containerd[1583]: time="2025-09-04T16:20:35.535743838Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 503.311709ms" Sep 4 16:20:35.535792 containerd[1583]: time="2025-09-04T16:20:35.535781628Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 16:20:35.536511 containerd[1583]: time="2025-09-04T16:20:35.536300471Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 4 16:20:36.276420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184514840.mount: Deactivated successfully. Sep 4 16:20:39.115426 containerd[1583]: time="2025-09-04T16:20:39.115299773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:39.116075 containerd[1583]: time="2025-09-04T16:20:39.115778681Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 4 16:20:39.117144 containerd[1583]: time="2025-09-04T16:20:39.117106311Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:39.120473 containerd[1583]: time="2025-09-04T16:20:39.120397873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:20:39.121724 containerd[1583]: time="2025-09-04T16:20:39.121682493Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"57680541\" in 3.585324243s" Sep 4 16:20:39.121724 containerd[1583]: time="2025-09-04T16:20:39.121732747Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 4 16:20:41.787610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 16:20:41.789488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:20:41.802447 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 16:20:41.802545 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 16:20:41.802842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 16:20:41.805312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:20:41.830094 systemd[1]: Reload requested from client PID 2260 ('systemctl') (unit session-7.scope)... Sep 4 16:20:41.830110 systemd[1]: Reloading... Sep 4 16:20:41.920709 zram_generator::config[2309]: No configuration found. Sep 4 16:20:42.228978 systemd[1]: Reloading finished in 398 ms. Sep 4 16:20:42.310436 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 16:20:42.310574 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 16:20:42.311031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 16:20:42.311096 systemd[1]: kubelet.service: Consumed 225ms CPU time, 98.4M memory peak. Sep 4 16:20:42.313221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:20:42.479316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 16:20:42.492934 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 16:20:42.530364 kubelet[2351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 16:20:42.530364 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 16:20:42.530364 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 16:20:42.530774 kubelet[2351]: I0904 16:20:42.530440 2351 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 16:20:42.755594 kubelet[2351]: I0904 16:20:42.755480 2351 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 16:20:42.755594 kubelet[2351]: I0904 16:20:42.755508 2351 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 16:20:42.755794 kubelet[2351]: I0904 16:20:42.755773 2351 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 16:20:42.782398 kubelet[2351]: I0904 16:20:42.782362 2351 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 16:20:42.782858 kubelet[2351]: E0904 16:20:42.782809 2351 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:20:42.790386 kubelet[2351]: I0904 16:20:42.790358 2351 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 16:20:42.795595 kubelet[2351]: I0904 16:20:42.795566 2351 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 16:20:42.796747 kubelet[2351]: I0904 16:20:42.796692 2351 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 16:20:42.796960 kubelet[2351]: I0904 16:20:42.796732 2351 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptio
ns":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 16:20:42.797107 kubelet[2351]: I0904 16:20:42.796970 2351 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 16:20:42.797107 kubelet[2351]: I0904 16:20:42.796980 2351 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 16:20:42.797155 kubelet[2351]: I0904 16:20:42.797146 2351 state_mem.go:36] "Initialized new in-memory state store" Sep 4 16:20:42.799872 kubelet[2351]: I0904 16:20:42.799841 2351 kubelet.go:446] "Attempting to sync node with API server" Sep 4 16:20:42.799926 kubelet[2351]: I0904 16:20:42.799917 2351 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 16:20:42.799971 kubelet[2351]: I0904 16:20:42.799957 2351 kubelet.go:352] "Adding apiserver pod source" Sep 4 16:20:42.800004 kubelet[2351]: I0904 16:20:42.799978 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 16:20:42.803745 kubelet[2351]: I0904 16:20:42.803708 2351 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 4 16:20:42.804692 kubelet[2351]: I0904 16:20:42.804232 2351 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 16:20:42.804692 kubelet[2351]: W0904 16:20:42.804208 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Sep 4 16:20:42.804692 kubelet[2351]: E0904 16:20:42.804373 2351 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:20:42.804692 kubelet[2351]: W0904 16:20:42.804221 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Sep 4 16:20:42.804692 kubelet[2351]: E0904 16:20:42.804408 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:20:42.804837 kubelet[2351]: W0904 16:20:42.804799 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 4 16:20:42.806992 kubelet[2351]: I0904 16:20:42.806965 2351 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 16:20:42.807039 kubelet[2351]: I0904 16:20:42.807014 2351 server.go:1287] "Started kubelet" Sep 4 16:20:42.808701 kubelet[2351]: I0904 16:20:42.808685 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 16:20:42.809367 kubelet[2351]: I0904 16:20:42.808749 2351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 16:20:42.809367 kubelet[2351]: I0904 16:20:42.809133 2351 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 16:20:42.809367 kubelet[2351]: I0904 16:20:42.809199 2351 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 16:20:42.810222 kubelet[2351]: I0904 16:20:42.810196 2351 server.go:479] "Adding debug handlers to kubelet server" Sep 4 16:20:42.811198 kubelet[2351]: I0904 16:20:42.811171 2351 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 16:20:42.811247 kubelet[2351]: E0904 16:20:42.811214 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:20:42.811247 kubelet[2351]: I0904 16:20:42.811244 2351 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 16:20:42.811448 kubelet[2351]: I0904 16:20:42.811432 2351 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 16:20:42.811494 kubelet[2351]: I0904 16:20:42.811486 2351 reconciler.go:26] "Reconciler: start to sync state" Sep 4 16:20:42.812748 kubelet[2351]: W0904 16:20:42.812043 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection 
refused Sep 4 16:20:42.812748 kubelet[2351]: E0904 16:20:42.812084 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:20:42.813651 kubelet[2351]: E0904 16:20:42.813085 2351 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 16:20:42.813796 kubelet[2351]: E0904 16:20:42.812592 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186220c6cadc9f6f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 16:20:42.806984559 +0000 UTC m=+0.309678067,LastTimestamp:2025-09-04 16:20:42.806984559 +0000 UTC m=+0.309678067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 16:20:42.814325 kubelet[2351]: E0904 16:20:42.814197 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms" Sep 4 16:20:42.814704 kubelet[2351]: I0904 16:20:42.814642 2351 factory.go:221] Registration of the containerd container factory successfully Sep 4 16:20:42.814704 kubelet[2351]: I0904 16:20:42.814687 
2351 factory.go:221] Registration of the systemd container factory successfully Sep 4 16:20:42.814858 kubelet[2351]: I0904 16:20:42.814843 2351 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 16:20:42.830450 kubelet[2351]: I0904 16:20:42.830423 2351 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 16:20:42.830450 kubelet[2351]: I0904 16:20:42.830438 2351 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 16:20:42.830450 kubelet[2351]: I0904 16:20:42.830455 2351 state_mem.go:36] "Initialized new in-memory state store" Sep 4 16:20:42.832807 kubelet[2351]: I0904 16:20:42.832764 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 16:20:42.834217 kubelet[2351]: I0904 16:20:42.834190 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 16:20:42.834317 kubelet[2351]: I0904 16:20:42.834235 2351 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 16:20:42.834317 kubelet[2351]: I0904 16:20:42.834265 2351 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 16:20:42.834317 kubelet[2351]: I0904 16:20:42.834275 2351 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 16:20:42.834386 kubelet[2351]: E0904 16:20:42.834335 2351 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 16:20:42.835249 kubelet[2351]: W0904 16:20:42.835192 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Sep 4 16:20:42.835295 kubelet[2351]: E0904 16:20:42.835262 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:20:42.912008 kubelet[2351]: E0904 16:20:42.911949 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:20:42.935389 kubelet[2351]: E0904 16:20:42.935345 2351 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 16:20:43.012995 kubelet[2351]: E0904 16:20:43.012876 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:20:43.015590 kubelet[2351]: E0904 16:20:43.015553 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" Sep 4 16:20:43.114041 kubelet[2351]: E0904 16:20:43.114007 2351 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" Sep 4 16:20:43.136170 kubelet[2351]: E0904 16:20:43.136129 2351 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 16:20:43.214582 kubelet[2351]: E0904 16:20:43.214535 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:20:43.314740 kubelet[2351]: E0904 16:20:43.314709 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:20:43.415337 kubelet[2351]: E0904 16:20:43.415281 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:20:43.416882 kubelet[2351]: E0904 16:20:43.416839 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms" Sep 4 16:20:43.516151 kubelet[2351]: E0904 16:20:43.516094 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:20:43.536343 kubelet[2351]: E0904 16:20:43.536306 2351 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 16:20:43.616955 kubelet[2351]: E0904 16:20:43.616852 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:20:43.717588 kubelet[2351]: E0904 16:20:43.717536 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:20:43.817937 kubelet[2351]: E0904 16:20:43.817902 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:20:43.918699 kubelet[2351]: E0904 16:20:43.918557 2351 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:20:44.019140 kubelet[2351]: E0904 16:20:44.019099 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:20:44.026804 kubelet[2351]: W0904 16:20:44.026720 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Sep 4 16:20:44.026856 kubelet[2351]: E0904 16:20:44.026804 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:20:44.053423 kubelet[2351]: W0904 16:20:44.053384 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Sep 4 16:20:44.053423 kubelet[2351]: E0904 16:20:44.053409 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:20:44.102989 kubelet[2351]: W0904 16:20:44.102947 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Sep 4 16:20:44.102989 
kubelet[2351]: E0904 16:20:44.102981 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:20:44.119679 kubelet[2351]: E0904 16:20:44.119642 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:44.217553 kubelet[2351]: E0904 16:20:44.217459 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s"
Sep 4 16:20:44.220601 kubelet[2351]: E0904 16:20:44.220564 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:44.256098 kubelet[2351]: W0904 16:20:44.256031 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Sep 4 16:20:44.256141 kubelet[2351]: E0904 16:20:44.256113 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:20:44.315416 kubelet[2351]: I0904 16:20:44.315377 2351 policy_none.go:49] "None policy: Start"
Sep 4 16:20:44.315473 kubelet[2351]: I0904 16:20:44.315418 2351 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 16:20:44.315473 kubelet[2351]: I0904 16:20:44.315442 2351 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 16:20:44.321376 kubelet[2351]: E0904 16:20:44.321343 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:44.332984 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 16:20:44.336929 kubelet[2351]: E0904 16:20:44.336894 2351 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 16:20:44.347345 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 16:20:44.351493 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 16:20:44.370579 kubelet[2351]: I0904 16:20:44.370253 2351 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 16:20:44.370579 kubelet[2351]: I0904 16:20:44.370565 2351 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 16:20:44.370715 kubelet[2351]: I0904 16:20:44.370577 2351 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 16:20:44.372582 kubelet[2351]: I0904 16:20:44.370907 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 16:20:44.372992 kubelet[2351]: E0904 16:20:44.372799 2351 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 16:20:44.372992 kubelet[2351]: E0904 16:20:44.372852 2351 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 4 16:20:44.472369 kubelet[2351]: I0904 16:20:44.472226 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 16:20:44.472792 kubelet[2351]: E0904 16:20:44.472756 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Sep 4 16:20:44.677849 kubelet[2351]: I0904 16:20:44.677776 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 16:20:44.678915 kubelet[2351]: E0904 16:20:44.678829 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Sep 4 16:20:44.919707 kubelet[2351]: E0904 16:20:44.919616 2351 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:20:45.081304 kubelet[2351]: I0904 16:20:45.081257 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 16:20:45.081798 kubelet[2351]: E0904 16:20:45.081739 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Sep 4 16:20:45.818683 kubelet[2351]: E0904 16:20:45.818608 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="3.2s"
Sep 4 16:20:45.822227 kubelet[2351]: W0904 16:20:45.822198 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Sep 4 16:20:45.822302 kubelet[2351]: E0904 16:20:45.822231 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:20:45.883530 kubelet[2351]: I0904 16:20:45.883504 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 16:20:45.883856 kubelet[2351]: E0904 16:20:45.883813 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Sep 4 16:20:45.945998 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice.
Sep 4 16:20:45.961522 kubelet[2351]: E0904 16:20:45.961478 2351 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 16:20:45.964113 systemd[1]: Created slice kubepods-burstable-pod076ee8f303ca1c8f59ece29b4b4a0615.slice - libcontainer container kubepods-burstable-pod076ee8f303ca1c8f59ece29b4b4a0615.slice.
Sep 4 16:20:45.971875 kubelet[2351]: E0904 16:20:45.971838 2351 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 16:20:45.974517 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice.
Sep 4 16:20:45.976298 kubelet[2351]: E0904 16:20:45.976271 2351 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 16:20:46.032869 kubelet[2351]: I0904 16:20:46.032834 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/076ee8f303ca1c8f59ece29b4b4a0615-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"076ee8f303ca1c8f59ece29b4b4a0615\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 16:20:46.032975 kubelet[2351]: I0904 16:20:46.032883 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 16:20:46.032975 kubelet[2351]: I0904 16:20:46.032909 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 16:20:46.032975 kubelet[2351]: I0904 16:20:46.032931 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 16:20:46.032975 kubelet[2351]: I0904 16:20:46.032947 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 16:20:46.032975 kubelet[2351]: I0904 16:20:46.032970 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 16:20:46.033101 kubelet[2351]: I0904 16:20:46.032994 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/076ee8f303ca1c8f59ece29b4b4a0615-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"076ee8f303ca1c8f59ece29b4b4a0615\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 16:20:46.033101 kubelet[2351]: I0904 16:20:46.033013 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/076ee8f303ca1c8f59ece29b4b4a0615-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"076ee8f303ca1c8f59ece29b4b4a0615\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 16:20:46.033101 kubelet[2351]: I0904 16:20:46.033031 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 16:20:46.036271 kubelet[2351]: W0904 16:20:46.036245 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Sep 4 16:20:46.036340 kubelet[2351]: E0904 16:20:46.036279 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:20:46.080058 kubelet[2351]: W0904 16:20:46.079961 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Sep 4 16:20:46.080058 kubelet[2351]: E0904 16:20:46.080009 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:20:46.262636 kubelet[2351]: E0904 16:20:46.262582 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:46.263504 containerd[1583]: time="2025-09-04T16:20:46.263427083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}"
Sep 4 16:20:46.272458 kubelet[2351]: E0904 16:20:46.272435 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:46.272887 containerd[1583]: time="2025-09-04T16:20:46.272830498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:076ee8f303ca1c8f59ece29b4b4a0615,Namespace:kube-system,Attempt:0,}"
Sep 4 16:20:46.277036 kubelet[2351]: E0904 16:20:46.277008 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:46.277269 containerd[1583]: time="2025-09-04T16:20:46.277237713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}"
Sep 4 16:20:46.719247 containerd[1583]: time="2025-09-04T16:20:46.719158272Z" level=info msg="connecting to shim a0379f44bdc76897ffd069a883128625fde2c481c88787bcae72ce7e613ee8c6" address="unix:///run/containerd/s/0c98e4ed9330b7f2f2f840611a608002cc7bd3a1b6ccf61f78579433a39a41f3" namespace=k8s.io protocol=ttrpc version=3
Sep 4 16:20:46.725022 containerd[1583]: time="2025-09-04T16:20:46.724972796Z" level=info msg="connecting to shim 5710e3e0ecc21ca0a18873f6e260c299401157aa168fbe97eb56af54c1398b38" address="unix:///run/containerd/s/b1580fb907fc2bc383054ffa14149b9325379748554095063ffeed8d73b4a65d" namespace=k8s.io protocol=ttrpc version=3
Sep 4 16:20:46.813952 containerd[1583]: time="2025-09-04T16:20:46.813790000Z" level=info msg="connecting to shim 0a53e4154e40871cf3e8c4f6c438ca98efeae8eb1deb3286e54e9610c0ff9118" address="unix:///run/containerd/s/d6a68c540626cd23f77da2ac70ca3ad22cda21a4c928b0fa80bb33f9297dc164" namespace=k8s.io protocol=ttrpc version=3
Sep 4 16:20:46.832917 kubelet[2351]: W0904 16:20:46.832888 2351 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Sep 4 16:20:46.833329 kubelet[2351]: E0904 16:20:46.833261 2351 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:20:46.836853 systemd[1]: Started cri-containerd-5710e3e0ecc21ca0a18873f6e260c299401157aa168fbe97eb56af54c1398b38.scope - libcontainer container 5710e3e0ecc21ca0a18873f6e260c299401157aa168fbe97eb56af54c1398b38.
Sep 4 16:20:46.838950 systemd[1]: Started cri-containerd-a0379f44bdc76897ffd069a883128625fde2c481c88787bcae72ce7e613ee8c6.scope - libcontainer container a0379f44bdc76897ffd069a883128625fde2c481c88787bcae72ce7e613ee8c6.
Sep 4 16:20:46.850823 systemd[1]: Started cri-containerd-0a53e4154e40871cf3e8c4f6c438ca98efeae8eb1deb3286e54e9610c0ff9118.scope - libcontainer container 0a53e4154e40871cf3e8c4f6c438ca98efeae8eb1deb3286e54e9610c0ff9118.
Sep 4 16:20:46.900292 containerd[1583]: time="2025-09-04T16:20:46.900243601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:076ee8f303ca1c8f59ece29b4b4a0615,Namespace:kube-system,Attempt:0,} returns sandbox id \"5710e3e0ecc21ca0a18873f6e260c299401157aa168fbe97eb56af54c1398b38\""
Sep 4 16:20:46.901558 containerd[1583]: time="2025-09-04T16:20:46.901523832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0379f44bdc76897ffd069a883128625fde2c481c88787bcae72ce7e613ee8c6\""
Sep 4 16:20:46.903624 kubelet[2351]: E0904 16:20:46.903564 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:46.903861 kubelet[2351]: E0904 16:20:46.903842 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:46.906687 containerd[1583]: time="2025-09-04T16:20:46.906280713Z" level=info msg="CreateContainer within sandbox \"5710e3e0ecc21ca0a18873f6e260c299401157aa168fbe97eb56af54c1398b38\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 4 16:20:46.906768 containerd[1583]: time="2025-09-04T16:20:46.906746336Z" level=info msg="CreateContainer within sandbox \"a0379f44bdc76897ffd069a883128625fde2c481c88787bcae72ce7e613ee8c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 4 16:20:46.914861 containerd[1583]: time="2025-09-04T16:20:46.914830617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a53e4154e40871cf3e8c4f6c438ca98efeae8eb1deb3286e54e9610c0ff9118\""
Sep 4 16:20:46.915400 kubelet[2351]: E0904 16:20:46.915382 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:46.916523 containerd[1583]: time="2025-09-04T16:20:46.916501220Z" level=info msg="CreateContainer within sandbox \"0a53e4154e40871cf3e8c4f6c438ca98efeae8eb1deb3286e54e9610c0ff9118\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 4 16:20:46.921247 containerd[1583]: time="2025-09-04T16:20:46.921209339Z" level=info msg="Container c7505cd1c1c7234c93fdf6cb237410afbdbe685e95b1960130beb32c0aeff36a: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:20:46.924299 containerd[1583]: time="2025-09-04T16:20:46.924264148Z" level=info msg="Container 0ec0f24b92f2291f24070d22951c487bc950a338e98b41dcdd7478ad0b718c4d: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:20:46.936100 containerd[1583]: time="2025-09-04T16:20:46.936068826Z" level=info msg="CreateContainer within sandbox \"a0379f44bdc76897ffd069a883128625fde2c481c88787bcae72ce7e613ee8c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ec0f24b92f2291f24070d22951c487bc950a338e98b41dcdd7478ad0b718c4d\""
Sep 4 16:20:46.936566 containerd[1583]: time="2025-09-04T16:20:46.936536012Z" level=info msg="StartContainer for \"0ec0f24b92f2291f24070d22951c487bc950a338e98b41dcdd7478ad0b718c4d\""
Sep 4 16:20:46.937679 containerd[1583]: time="2025-09-04T16:20:46.937511822Z" level=info msg="connecting to shim 0ec0f24b92f2291f24070d22951c487bc950a338e98b41dcdd7478ad0b718c4d" address="unix:///run/containerd/s/0c98e4ed9330b7f2f2f840611a608002cc7bd3a1b6ccf61f78579433a39a41f3" protocol=ttrpc version=3
Sep 4 16:20:46.937679 containerd[1583]: time="2025-09-04T16:20:46.937607121Z" level=info msg="CreateContainer within sandbox \"5710e3e0ecc21ca0a18873f6e260c299401157aa168fbe97eb56af54c1398b38\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c7505cd1c1c7234c93fdf6cb237410afbdbe685e95b1960130beb32c0aeff36a\""
Sep 4 16:20:46.937956 containerd[1583]: time="2025-09-04T16:20:46.937936870Z" level=info msg="Container 84b8ebf65de7120bcf964ff35caac47d6b6308459fe69060e496c9bdab7b05d9: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:20:46.938347 containerd[1583]: time="2025-09-04T16:20:46.938299179Z" level=info msg="StartContainer for \"c7505cd1c1c7234c93fdf6cb237410afbdbe685e95b1960130beb32c0aeff36a\""
Sep 4 16:20:46.940600 containerd[1583]: time="2025-09-04T16:20:46.940552405Z" level=info msg="connecting to shim c7505cd1c1c7234c93fdf6cb237410afbdbe685e95b1960130beb32c0aeff36a" address="unix:///run/containerd/s/b1580fb907fc2bc383054ffa14149b9325379748554095063ffeed8d73b4a65d" protocol=ttrpc version=3
Sep 4 16:20:46.957172 containerd[1583]: time="2025-09-04T16:20:46.957055113Z" level=info msg="CreateContainer within sandbox \"0a53e4154e40871cf3e8c4f6c438ca98efeae8eb1deb3286e54e9610c0ff9118\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"84b8ebf65de7120bcf964ff35caac47d6b6308459fe69060e496c9bdab7b05d9\""
Sep 4 16:20:46.958193 containerd[1583]: time="2025-09-04T16:20:46.958134167Z" level=info msg="StartContainer for \"84b8ebf65de7120bcf964ff35caac47d6b6308459fe69060e496c9bdab7b05d9\""
Sep 4 16:20:46.960725 containerd[1583]: time="2025-09-04T16:20:46.960641839Z" level=info msg="connecting to shim 84b8ebf65de7120bcf964ff35caac47d6b6308459fe69060e496c9bdab7b05d9" address="unix:///run/containerd/s/d6a68c540626cd23f77da2ac70ca3ad22cda21a4c928b0fa80bb33f9297dc164" protocol=ttrpc version=3
Sep 4 16:20:46.962845 systemd[1]: Started cri-containerd-0ec0f24b92f2291f24070d22951c487bc950a338e98b41dcdd7478ad0b718c4d.scope - libcontainer container 0ec0f24b92f2291f24070d22951c487bc950a338e98b41dcdd7478ad0b718c4d.
Sep 4 16:20:46.976439 systemd[1]: Started cri-containerd-c7505cd1c1c7234c93fdf6cb237410afbdbe685e95b1960130beb32c0aeff36a.scope - libcontainer container c7505cd1c1c7234c93fdf6cb237410afbdbe685e95b1960130beb32c0aeff36a.
Sep 4 16:20:47.002840 systemd[1]: Started cri-containerd-84b8ebf65de7120bcf964ff35caac47d6b6308459fe69060e496c9bdab7b05d9.scope - libcontainer container 84b8ebf65de7120bcf964ff35caac47d6b6308459fe69060e496c9bdab7b05d9.
Sep 4 16:20:47.046317 containerd[1583]: time="2025-09-04T16:20:47.046274301Z" level=info msg="StartContainer for \"0ec0f24b92f2291f24070d22951c487bc950a338e98b41dcdd7478ad0b718c4d\" returns successfully"
Sep 4 16:20:47.090567 containerd[1583]: time="2025-09-04T16:20:47.090520598Z" level=info msg="StartContainer for \"84b8ebf65de7120bcf964ff35caac47d6b6308459fe69060e496c9bdab7b05d9\" returns successfully"
Sep 4 16:20:47.110102 containerd[1583]: time="2025-09-04T16:20:47.109988518Z" level=info msg="StartContainer for \"c7505cd1c1c7234c93fdf6cb237410afbdbe685e95b1960130beb32c0aeff36a\" returns successfully"
Sep 4 16:20:47.485748 kubelet[2351]: I0904 16:20:47.485317 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 16:20:47.855336 kubelet[2351]: E0904 16:20:47.855291 2351 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 16:20:47.857709 kubelet[2351]: E0904 16:20:47.857685 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:47.858267 kubelet[2351]: E0904 16:20:47.858245 2351 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 16:20:47.858346 kubelet[2351]: E0904 16:20:47.858326 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:47.862088 kubelet[2351]: E0904 16:20:47.862064 2351 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 16:20:47.862177 kubelet[2351]: E0904 16:20:47.862158 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:48.708790 kubelet[2351]: I0904 16:20:48.708727 2351 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 4 16:20:48.708790 kubelet[2351]: E0904 16:20:48.708783 2351 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 4 16:20:48.719112 kubelet[2351]: E0904 16:20:48.719075 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:48.819556 kubelet[2351]: E0904 16:20:48.819454 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:48.864416 kubelet[2351]: E0904 16:20:48.864381 2351 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 16:20:48.864881 kubelet[2351]: E0904 16:20:48.864469 2351 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 16:20:48.864881 kubelet[2351]: E0904 16:20:48.864520 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:48.864881 kubelet[2351]: E0904 16:20:48.864589 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:48.920291 kubelet[2351]: E0904 16:20:48.920250 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:49.020924 kubelet[2351]: E0904 16:20:49.020805 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:49.121184 kubelet[2351]: E0904 16:20:49.121116 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:49.221912 kubelet[2351]: E0904 16:20:49.221869 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:49.322630 kubelet[2351]: E0904 16:20:49.322584 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:49.423391 kubelet[2351]: E0904 16:20:49.423345 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:49.524287 kubelet[2351]: E0904 16:20:49.524242 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:49.625186 kubelet[2351]: E0904 16:20:49.625057 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:49.726258 kubelet[2351]: E0904 16:20:49.726039 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:49.920705 kubelet[2351]: E0904 16:20:49.920538 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:50.020937 kubelet[2351]: E0904 16:20:50.020872 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:50.121998 kubelet[2351]: E0904 16:20:50.121940 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:50.314791 kubelet[2351]: I0904 16:20:50.314732 2351 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 4 16:20:50.323843 kubelet[2351]: I0904 16:20:50.323816 2351 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 4 16:20:50.328271 kubelet[2351]: I0904 16:20:50.328217 2351 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 4 16:20:50.808619 kubelet[2351]: I0904 16:20:50.808571 2351 apiserver.go:52] "Watching apiserver"
Sep 4 16:20:50.810962 kubelet[2351]: E0904 16:20:50.810886 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:50.810962 kubelet[2351]: E0904 16:20:50.810929 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:50.811203 kubelet[2351]: E0904 16:20:50.811182 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:50.812316 kubelet[2351]: I0904 16:20:50.812289 2351 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 16:20:50.925601 systemd[1]: Reload requested from client PID 2630 ('systemctl') (unit session-7.scope)...
Sep 4 16:20:50.925622 systemd[1]: Reloading...
Sep 4 16:20:51.048713 zram_generator::config[2674]: No configuration found.
Sep 4 16:20:51.361360 systemd[1]: Reloading finished in 435 ms.
Sep 4 16:20:51.394956 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 16:20:51.420944 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 16:20:51.421229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:20:51.421290 systemd[1]: kubelet.service: Consumed 791ms CPU time, 133.5M memory peak.
Sep 4 16:20:51.423298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 16:20:51.659917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:20:51.666131 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 16:20:51.714283 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 16:20:51.714283 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 16:20:51.714283 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 16:20:51.714731 kubelet[2719]: I0904 16:20:51.714329 2719 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 16:20:51.722203 kubelet[2719]: I0904 16:20:51.722160 2719 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 16:20:51.722203 kubelet[2719]: I0904 16:20:51.722183 2719 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 16:20:51.722460 kubelet[2719]: I0904 16:20:51.722420 2719 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 16:20:51.723580 kubelet[2719]: I0904 16:20:51.723554 2719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 4 16:20:51.725935 kubelet[2719]: I0904 16:20:51.725893 2719 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 16:20:51.730425 kubelet[2719]: I0904 16:20:51.730395 2719 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 16:20:51.735018 kubelet[2719]: I0904 16:20:51.734983 2719 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 16:20:51.735280 kubelet[2719]: I0904 16:20:51.735240 2719 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 16:20:51.735577 kubelet[2719]: I0904 16:20:51.735269 2719 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 16:20:51.735694 kubelet[2719]: I0904 16:20:51.735582 2719 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 16:20:51.735694 kubelet[2719]: I0904 16:20:51.735593 2719 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 16:20:51.735761 kubelet[2719]: I0904 16:20:51.735744 2719 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 16:20:51.736191 kubelet[2719]: I0904 16:20:51.736161 2719 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 16:20:51.736224 kubelet[2719]: I0904 16:20:51.736207 2719 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 16:20:51.736250 kubelet[2719]: I0904 16:20:51.736230 2719 kubelet.go:352] "Adding apiserver pod source"
Sep 4 16:20:51.736250 kubelet[2719]: I0904 16:20:51.736241 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 16:20:51.737465 kubelet[2719]: I0904 16:20:51.737441 2719 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 4 16:20:51.737892 kubelet[2719]: I0904 16:20:51.737870 2719 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 16:20:51.738341 kubelet[2719]: I0904 16:20:51.738316 2719 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 16:20:51.738378 kubelet[2719]: I0904 16:20:51.738366 2719 server.go:1287] "Started kubelet"
Sep 4 16:20:51.739156 kubelet[2719]: I0904 16:20:51.739037 2719 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 16:20:51.739969 kubelet[2719]: I0904 16:20:51.739952 2719 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 16:20:51.743700 kubelet[2719]: I0904 16:20:51.743168 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 16:20:51.743700 kubelet[2719]: I0904 16:20:51.743404 2719 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 16:20:51.745190 kubelet[2719]: I0904 16:20:51.745163 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 16:20:51.746169 kubelet[2719]: I0904 16:20:51.746128 2719 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 16:20:51.746169 kubelet[2719]: E0904 16:20:51.746164 2719 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 16:20:51.746263 kubelet[2719]: E0904 16:20:51.746238 2719 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:20:51.746392 kubelet[2719]: I0904 16:20:51.746352 2719 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 16:20:51.748062 kubelet[2719]: I0904 16:20:51.748024 2719 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 16:20:51.748332 kubelet[2719]: I0904 16:20:51.748254 2719 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 16:20:51.748563 kubelet[2719]: I0904 16:20:51.748532 2719 factory.go:221] Registration of the systemd container factory successfully
Sep 4 16:20:51.748789 kubelet[2719]: I0904 16:20:51.748688 2719 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 16:20:51.754649 kubelet[2719]: I0904 16:20:51.754552 2719 factory.go:221] Registration of the containerd container factory successfully
Sep 4 16:20:51.758207 kubelet[2719]: I0904 16:20:51.758155 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 16:20:51.759320 kubelet[2719]: I0904 16:20:51.759297 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 16:20:51.759369 kubelet[2719]: I0904 16:20:51.759337 2719 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 16:20:51.759369 kubelet[2719]: I0904 16:20:51.759364 2719 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 16:20:51.759434 kubelet[2719]: I0904 16:20:51.759419 2719 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 4 16:20:51.759529 kubelet[2719]: E0904 16:20:51.759487 2719 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 16:20:51.793127 kubelet[2719]: I0904 16:20:51.793098 2719 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 16:20:51.793127 kubelet[2719]: I0904 16:20:51.793114 2719 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 16:20:51.793127 kubelet[2719]: I0904 16:20:51.793136 2719 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 16:20:51.793348 kubelet[2719]: I0904 16:20:51.793332 2719 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 16:20:51.793372 kubelet[2719]: I0904 16:20:51.793345 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 16:20:51.793372 kubelet[2719]: I0904 16:20:51.793366 2719 policy_none.go:49] "None policy: Start"
Sep 4 16:20:51.793437 kubelet[2719]: I0904 16:20:51.793380 2719 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 16:20:51.793437 kubelet[2719]: I0904 16:20:51.793393 2719 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 16:20:51.793501 kubelet[2719]: I0904 16:20:51.793488 2719 state_mem.go:75] "Updated machine memory state"
Sep 4 16:20:51.797718 kubelet[2719]: I0904 16:20:51.797693 2719 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 16:20:51.797965 kubelet[2719]: I0904 16:20:51.797950
2719 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 16:20:51.798013 kubelet[2719]: I0904 16:20:51.797965 2719 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 16:20:51.798210 kubelet[2719]: I0904 16:20:51.798189 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 16:20:51.799507 kubelet[2719]: E0904 16:20:51.799346 2719 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 16:20:51.860898 kubelet[2719]: I0904 16:20:51.860857 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 16:20:51.861116 kubelet[2719]: I0904 16:20:51.861076 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 16:20:51.861266 kubelet[2719]: I0904 16:20:51.860913 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 16:20:51.866750 kubelet[2719]: E0904 16:20:51.866715 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 16:20:51.867534 kubelet[2719]: E0904 16:20:51.867489 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 16:20:51.868019 kubelet[2719]: E0904 16:20:51.867991 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 16:20:51.904234 kubelet[2719]: I0904 16:20:51.904188 2719 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 16:20:51.912706 kubelet[2719]: I0904 16:20:51.912407 2719 kubelet_node_status.go:124] "Node was previously registered" 
node="localhost" Sep 4 16:20:51.912706 kubelet[2719]: I0904 16:20:51.912537 2719 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 16:20:51.935382 sudo[2753]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 16:20:51.935763 sudo[2753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 16:20:51.950307 kubelet[2719]: I0904 16:20:51.950240 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:20:51.950307 kubelet[2719]: I0904 16:20:51.950297 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:20:51.950461 kubelet[2719]: I0904 16:20:51.950324 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 4 16:20:51.950461 kubelet[2719]: I0904 16:20:51.950340 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/076ee8f303ca1c8f59ece29b4b4a0615-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"076ee8f303ca1c8f59ece29b4b4a0615\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:20:51.950461 kubelet[2719]: I0904 
16:20:51.950356 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/076ee8f303ca1c8f59ece29b4b4a0615-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"076ee8f303ca1c8f59ece29b4b4a0615\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:20:51.950461 kubelet[2719]: I0904 16:20:51.950369 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/076ee8f303ca1c8f59ece29b4b4a0615-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"076ee8f303ca1c8f59ece29b4b4a0615\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:20:51.950461 kubelet[2719]: I0904 16:20:51.950401 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:20:51.950583 kubelet[2719]: I0904 16:20:51.950414 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:20:51.950583 kubelet[2719]: I0904 16:20:51.950432 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 
16:20:52.168167 kubelet[2719]: E0904 16:20:52.168045 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:52.168167 kubelet[2719]: E0904 16:20:52.168108 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:52.168312 kubelet[2719]: E0904 16:20:52.168260 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:52.228822 sudo[2753]: pam_unix(sudo:session): session closed for user root Sep 4 16:20:52.736933 kubelet[2719]: I0904 16:20:52.736891 2719 apiserver.go:52] "Watching apiserver" Sep 4 16:20:52.748732 kubelet[2719]: I0904 16:20:52.748687 2719 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 16:20:52.779690 kubelet[2719]: I0904 16:20:52.779510 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 16:20:52.779690 kubelet[2719]: I0904 16:20:52.779583 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 16:20:52.780903 kubelet[2719]: I0904 16:20:52.780885 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 16:20:52.786125 kubelet[2719]: E0904 16:20:52.786095 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 16:20:52.787338 kubelet[2719]: E0904 16:20:52.786256 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 4 16:20:52.787338 kubelet[2719]: E0904 16:20:52.786726 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 16:20:52.787338 kubelet[2719]: E0904 16:20:52.786977 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 16:20:52.787338 kubelet[2719]: E0904 16:20:52.787111 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:52.787338 kubelet[2719]: E0904 16:20:52.787208 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:52.807070 kubelet[2719]: I0904 16:20:52.807006 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.806977058 podStartE2EDuration="2.806977058s" podCreationTimestamp="2025-09-04 16:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:20:52.800083993 +0000 UTC m=+1.127886705" watchObservedRunningTime="2025-09-04 16:20:52.806977058 +0000 UTC m=+1.134779770" Sep 4 16:20:52.816500 kubelet[2719]: I0904 16:20:52.816439 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.816421731 podStartE2EDuration="2.816421731s" podCreationTimestamp="2025-09-04 16:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:20:52.815975053 +0000 UTC m=+1.143777765" watchObservedRunningTime="2025-09-04 
16:20:52.816421731 +0000 UTC m=+1.144224433" Sep 4 16:20:52.816733 kubelet[2719]: I0904 16:20:52.816535 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.816529578 podStartE2EDuration="2.816529578s" podCreationTimestamp="2025-09-04 16:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:20:52.806737549 +0000 UTC m=+1.134540261" watchObservedRunningTime="2025-09-04 16:20:52.816529578 +0000 UTC m=+1.144332290" Sep 4 16:20:53.780806 kubelet[2719]: E0904 16:20:53.780771 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:53.781968 kubelet[2719]: E0904 16:20:53.781945 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:53.782157 kubelet[2719]: E0904 16:20:53.782138 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:53.814238 sudo[1794]: pam_unix(sudo:session): session closed for user root Sep 4 16:20:53.816087 sshd[1793]: Connection closed by 10.0.0.1 port 36550 Sep 4 16:20:53.816756 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Sep 4 16:20:53.821345 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:36550.service: Deactivated successfully. Sep 4 16:20:53.823821 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 16:20:53.824105 systemd[1]: session-7.scope: Consumed 5.175s CPU time, 264.5M memory peak. Sep 4 16:20:53.825484 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit. 
Sep 4 16:20:53.826747 systemd-logind[1558]: Removed session 7. Sep 4 16:20:56.111902 kubelet[2719]: E0904 16:20:56.111834 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:57.455283 kubelet[2719]: I0904 16:20:57.455224 2719 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 16:20:57.456197 containerd[1583]: time="2025-09-04T16:20:57.456152733Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 16:20:57.456512 kubelet[2719]: I0904 16:20:57.456361 2719 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 16:20:58.478517 systemd[1]: Created slice kubepods-besteffort-podf3347b2c_d60b_4b35_bbc7_fd7e488f6953.slice - libcontainer container kubepods-besteffort-podf3347b2c_d60b_4b35_bbc7_fd7e488f6953.slice. 
Sep 4 16:20:58.491697 kubelet[2719]: I0904 16:20:58.491646 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg9vn\" (UniqueName: \"kubernetes.io/projected/4d35702a-8372-4170-a8a7-0a3606772f13-kube-api-access-dg9vn\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492256 kubelet[2719]: I0904 16:20:58.491720 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3347b2c-d60b-4b35-bbc7-fd7e488f6953-lib-modules\") pod \"kube-proxy-6xz48\" (UID: \"f3347b2c-d60b-4b35-bbc7-fd7e488f6953\") " pod="kube-system/kube-proxy-6xz48" Sep 4 16:20:58.492256 kubelet[2719]: I0904 16:20:58.491743 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-xtables-lock\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492256 kubelet[2719]: I0904 16:20:58.491759 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-host-proc-sys-kernel\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492256 kubelet[2719]: I0904 16:20:58.491782 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d35702a-8372-4170-a8a7-0a3606772f13-clustermesh-secrets\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492256 kubelet[2719]: I0904 16:20:58.491797 2719 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3347b2c-d60b-4b35-bbc7-fd7e488f6953-xtables-lock\") pod \"kube-proxy-6xz48\" (UID: \"f3347b2c-d60b-4b35-bbc7-fd7e488f6953\") " pod="kube-system/kube-proxy-6xz48" Sep 4 16:20:58.492177 systemd[1]: Created slice kubepods-burstable-pod4d35702a_8372_4170_a8a7_0a3606772f13.slice - libcontainer container kubepods-burstable-pod4d35702a_8372_4170_a8a7_0a3606772f13.slice. Sep 4 16:20:58.492525 kubelet[2719]: I0904 16:20:58.491811 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-etc-cni-netd\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492525 kubelet[2719]: I0904 16:20:58.491824 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-cilium-run\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492525 kubelet[2719]: I0904 16:20:58.491845 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-cni-path\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492525 kubelet[2719]: I0904 16:20:58.491867 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-lib-modules\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492525 kubelet[2719]: 
I0904 16:20:58.491915 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-host-proc-sys-net\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492768 kubelet[2719]: I0904 16:20:58.492743 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-bpf-maps\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492821 kubelet[2719]: I0904 16:20:58.492799 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-hostproc\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492850 kubelet[2719]: I0904 16:20:58.492830 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-cilium-cgroup\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492942 kubelet[2719]: I0904 16:20:58.492878 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d35702a-8372-4170-a8a7-0a3606772f13-cilium-config-path\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.492942 kubelet[2719]: I0904 16:20:58.492923 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/4d35702a-8372-4170-a8a7-0a3606772f13-hubble-tls\") pod \"cilium-c855k\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") " pod="kube-system/cilium-c855k" Sep 4 16:20:58.493025 kubelet[2719]: I0904 16:20:58.492964 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f3347b2c-d60b-4b35-bbc7-fd7e488f6953-kube-proxy\") pod \"kube-proxy-6xz48\" (UID: \"f3347b2c-d60b-4b35-bbc7-fd7e488f6953\") " pod="kube-system/kube-proxy-6xz48" Sep 4 16:20:58.493083 kubelet[2719]: I0904 16:20:58.493044 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwcbk\" (UniqueName: \"kubernetes.io/projected/f3347b2c-d60b-4b35-bbc7-fd7e488f6953-kube-api-access-dwcbk\") pod \"kube-proxy-6xz48\" (UID: \"f3347b2c-d60b-4b35-bbc7-fd7e488f6953\") " pod="kube-system/kube-proxy-6xz48" Sep 4 16:20:58.581927 systemd[1]: Created slice kubepods-besteffort-pod0f4f0f13_693c_4bfb_bc08_0461c93591e0.slice - libcontainer container kubepods-besteffort-pod0f4f0f13_693c_4bfb_bc08_0461c93591e0.slice. 
Sep 4 16:20:58.594045 kubelet[2719]: I0904 16:20:58.593990 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6nbg\" (UniqueName: \"kubernetes.io/projected/0f4f0f13-693c-4bfb-bc08-0461c93591e0-kube-api-access-z6nbg\") pod \"cilium-operator-6c4d7847fc-tg58q\" (UID: \"0f4f0f13-693c-4bfb-bc08-0461c93591e0\") " pod="kube-system/cilium-operator-6c4d7847fc-tg58q" Sep 4 16:20:58.594193 kubelet[2719]: I0904 16:20:58.594172 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f4f0f13-693c-4bfb-bc08-0461c93591e0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-tg58q\" (UID: \"0f4f0f13-693c-4bfb-bc08-0461c93591e0\") " pod="kube-system/cilium-operator-6c4d7847fc-tg58q" Sep 4 16:20:58.792706 kubelet[2719]: E0904 16:20:58.792551 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:58.793430 containerd[1583]: time="2025-09-04T16:20:58.793265249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xz48,Uid:f3347b2c-d60b-4b35-bbc7-fd7e488f6953,Namespace:kube-system,Attempt:0,}" Sep 4 16:20:58.795496 kubelet[2719]: E0904 16:20:58.795475 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:58.795883 containerd[1583]: time="2025-09-04T16:20:58.795832459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c855k,Uid:4d35702a-8372-4170-a8a7-0a3606772f13,Namespace:kube-system,Attempt:0,}" Sep 4 16:20:58.817041 containerd[1583]: time="2025-09-04T16:20:58.816994339Z" level=info msg="connecting to shim ef83aac1a1e3279740027ed22c622372b0a7a265ea4a96891b8b32e43c494554" 
address="unix:///run/containerd/s/60456bf2ddf4cd8469079fea3bd0ac965165e7c9c45f74d30fdaf5cc628fce3b" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:20:58.823596 containerd[1583]: time="2025-09-04T16:20:58.823524798Z" level=info msg="connecting to shim 6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026" address="unix:///run/containerd/s/db05b8684b3d9bba28bd033abd2832fb57133ad27cd8740af237e2702c9a5e30" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:20:58.884875 systemd[1]: Started cri-containerd-6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026.scope - libcontainer container 6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026. Sep 4 16:20:58.886498 kubelet[2719]: E0904 16:20:58.885990 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:58.887503 containerd[1583]: time="2025-09-04T16:20:58.887466525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tg58q,Uid:0f4f0f13-693c-4bfb-bc08-0461c93591e0,Namespace:kube-system,Attempt:0,}" Sep 4 16:20:58.889775 systemd[1]: Started cri-containerd-ef83aac1a1e3279740027ed22c622372b0a7a265ea4a96891b8b32e43c494554.scope - libcontainer container ef83aac1a1e3279740027ed22c622372b0a7a265ea4a96891b8b32e43c494554. 
Sep 4 16:20:58.916800 containerd[1583]: time="2025-09-04T16:20:58.916748171Z" level=info msg="connecting to shim 75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338" address="unix:///run/containerd/s/465d74a511309092c56337e05c70b6c33187fb479513e3d70dbf52087e8bba2c" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:20:58.924096 containerd[1583]: time="2025-09-04T16:20:58.924051904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c855k,Uid:4d35702a-8372-4170-a8a7-0a3606772f13,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\"" Sep 4 16:20:58.926298 kubelet[2719]: E0904 16:20:58.925364 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:58.927184 containerd[1583]: time="2025-09-04T16:20:58.927155505Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 16:20:58.932717 containerd[1583]: time="2025-09-04T16:20:58.932166049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xz48,Uid:f3347b2c-d60b-4b35-bbc7-fd7e488f6953,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef83aac1a1e3279740027ed22c622372b0a7a265ea4a96891b8b32e43c494554\"" Sep 4 16:20:58.933335 kubelet[2719]: E0904 16:20:58.933309 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:20:58.939285 containerd[1583]: time="2025-09-04T16:20:58.939231468Z" level=info msg="CreateContainer within sandbox \"ef83aac1a1e3279740027ed22c622372b0a7a265ea4a96891b8b32e43c494554\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 16:20:58.950476 containerd[1583]: time="2025-09-04T16:20:58.950423376Z" level=info msg="Container 
44361e5858dd8a2277ca05b56f081a2bea2c099b2408fbed131e243f1e5ca31c: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:20:58.955822 systemd[1]: Started cri-containerd-75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338.scope - libcontainer container 75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338. Sep 4 16:20:58.958720 containerd[1583]: time="2025-09-04T16:20:58.958591535Z" level=info msg="CreateContainer within sandbox \"ef83aac1a1e3279740027ed22c622372b0a7a265ea4a96891b8b32e43c494554\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"44361e5858dd8a2277ca05b56f081a2bea2c099b2408fbed131e243f1e5ca31c\"" Sep 4 16:20:58.959879 containerd[1583]: time="2025-09-04T16:20:58.959846636Z" level=info msg="StartContainer for \"44361e5858dd8a2277ca05b56f081a2bea2c099b2408fbed131e243f1e5ca31c\"" Sep 4 16:20:58.961360 containerd[1583]: time="2025-09-04T16:20:58.961328639Z" level=info msg="connecting to shim 44361e5858dd8a2277ca05b56f081a2bea2c099b2408fbed131e243f1e5ca31c" address="unix:///run/containerd/s/60456bf2ddf4cd8469079fea3bd0ac965165e7c9c45f74d30fdaf5cc628fce3b" protocol=ttrpc version=3 Sep 4 16:20:58.985795 systemd[1]: Started cri-containerd-44361e5858dd8a2277ca05b56f081a2bea2c099b2408fbed131e243f1e5ca31c.scope - libcontainer container 44361e5858dd8a2277ca05b56f081a2bea2c099b2408fbed131e243f1e5ca31c. 
Sep 4 16:20:59.002368 containerd[1583]: time="2025-09-04T16:20:59.002303177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tg58q,Uid:0f4f0f13-693c-4bfb-bc08-0461c93591e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338\""
Sep 4 16:20:59.002879 kubelet[2719]: E0904 16:20:59.002849 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:20:59.035304 containerd[1583]: time="2025-09-04T16:20:59.035260495Z" level=info msg="StartContainer for \"44361e5858dd8a2277ca05b56f081a2bea2c099b2408fbed131e243f1e5ca31c\" returns successfully"
Sep 4 16:20:59.792694 kubelet[2719]: E0904 16:20:59.792382 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:01.587520 kubelet[2719]: E0904 16:21:01.587452 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:01.606061 kubelet[2719]: I0904 16:21:01.605949 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6xz48" podStartSLOduration=3.605925954 podStartE2EDuration="3.605925954s" podCreationTimestamp="2025-09-04 16:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:20:59.801334395 +0000 UTC m=+8.129137117" watchObservedRunningTime="2025-09-04 16:21:01.605925954 +0000 UTC m=+9.933728666"
Sep 4 16:21:01.795965 kubelet[2719]: E0904 16:21:01.795929 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:02.124951 update_engine[1560]: I20250904 16:21:02.124840 1560 update_attempter.cc:509] Updating boot flags...
Sep 4 16:21:02.645811 kubelet[2719]: E0904 16:21:02.645772 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:03.614957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665443295.mount: Deactivated successfully.
Sep 4 16:21:06.116256 kubelet[2719]: E0904 16:21:06.116211 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:06.804411 kubelet[2719]: E0904 16:21:06.804379 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:08.040176 containerd[1583]: time="2025-09-04T16:21:08.040090924Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:21:08.040996 containerd[1583]: time="2025-09-04T16:21:08.040943696Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 4 16:21:08.042119 containerd[1583]: time="2025-09-04T16:21:08.042089152Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:21:08.043482 containerd[1583]: time="2025-09-04T16:21:08.043450156Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.116253252s"
Sep 4 16:21:08.043540 containerd[1583]: time="2025-09-04T16:21:08.043484239Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 4 16:21:08.044497 containerd[1583]: time="2025-09-04T16:21:08.044467349Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 16:21:08.048548 containerd[1583]: time="2025-09-04T16:21:08.048505043Z" level=info msg="CreateContainer within sandbox \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 16:21:08.055889 containerd[1583]: time="2025-09-04T16:21:08.055834231Z" level=info msg="Container 911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:21:08.064696 containerd[1583]: time="2025-09-04T16:21:08.064625745Z" level=info msg="CreateContainer within sandbox \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5\""
Sep 4 16:21:08.065192 containerd[1583]: time="2025-09-04T16:21:08.065161898Z" level=info msg="StartContainer for \"911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5\""
Sep 4 16:21:08.066190 containerd[1583]: time="2025-09-04T16:21:08.066157150Z" level=info msg="connecting to shim 911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5" address="unix:///run/containerd/s/db05b8684b3d9bba28bd033abd2832fb57133ad27cd8740af237e2702c9a5e30" protocol=ttrpc version=3
Sep 4 16:21:08.121833 systemd[1]: Started cri-containerd-911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5.scope - libcontainer container 911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5.
Sep 4 16:21:08.163737 systemd[1]: cri-containerd-911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5.scope: Deactivated successfully.
Sep 4 16:21:08.164242 containerd[1583]: time="2025-09-04T16:21:08.164188038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5\" id:\"911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5\" pid:3155 exited_at:{seconds:1757002868 nanos:163604053}"
Sep 4 16:21:08.368156 containerd[1583]: time="2025-09-04T16:21:08.368075279Z" level=info msg="received exit event container_id:\"911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5\" id:\"911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5\" pid:3155 exited_at:{seconds:1757002868 nanos:163604053}"
Sep 4 16:21:08.369125 containerd[1583]: time="2025-09-04T16:21:08.369092382Z" level=info msg="StartContainer for \"911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5\" returns successfully"
Sep 4 16:21:08.391758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5-rootfs.mount: Deactivated successfully.
Sep 4 16:21:08.809971 kubelet[2719]: E0904 16:21:08.809869 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:08.812472 containerd[1583]: time="2025-09-04T16:21:08.812419910Z" level=info msg="CreateContainer within sandbox \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 16:21:08.820332 containerd[1583]: time="2025-09-04T16:21:08.820270133Z" level=info msg="Container 300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:21:08.828163 containerd[1583]: time="2025-09-04T16:21:08.828071885Z" level=info msg="CreateContainer within sandbox \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90\""
Sep 4 16:21:08.828862 containerd[1583]: time="2025-09-04T16:21:08.828816662Z" level=info msg="StartContainer for \"300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90\""
Sep 4 16:21:08.829834 containerd[1583]: time="2025-09-04T16:21:08.829809009Z" level=info msg="connecting to shim 300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90" address="unix:///run/containerd/s/db05b8684b3d9bba28bd033abd2832fb57133ad27cd8740af237e2702c9a5e30" protocol=ttrpc version=3
Sep 4 16:21:08.852896 systemd[1]: Started cri-containerd-300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90.scope - libcontainer container 300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90.
Sep 4 16:21:08.885760 containerd[1583]: time="2025-09-04T16:21:08.885648140Z" level=info msg="StartContainer for \"300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90\" returns successfully"
Sep 4 16:21:08.901921 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 16:21:08.902308 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 16:21:08.902452 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 16:21:08.904302 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 16:21:08.907553 containerd[1583]: time="2025-09-04T16:21:08.906295190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90\" id:\"300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90\" pid:3202 exited_at:{seconds:1757002868 nanos:905909661}"
Sep 4 16:21:08.906759 systemd[1]: cri-containerd-300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90.scope: Deactivated successfully.
Sep 4 16:21:08.907802 containerd[1583]: time="2025-09-04T16:21:08.907534824Z" level=info msg="received exit event container_id:\"300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90\" id:\"300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90\" pid:3202 exited_at:{seconds:1757002868 nanos:905909661}"
Sep 4 16:21:08.931750 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 16:21:09.813225 kubelet[2719]: E0904 16:21:09.813184 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:09.815206 containerd[1583]: time="2025-09-04T16:21:09.815159953Z" level=info msg="CreateContainer within sandbox \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 16:21:09.960498 containerd[1583]: time="2025-09-04T16:21:09.960438247Z" level=info msg="Container f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:21:09.964554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount932450814.mount: Deactivated successfully.
Sep 4 16:21:09.973598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153869887.mount: Deactivated successfully.
Sep 4 16:21:09.978093 containerd[1583]: time="2025-09-04T16:21:09.978030864Z" level=info msg="CreateContainer within sandbox \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347\""
Sep 4 16:21:09.978682 containerd[1583]: time="2025-09-04T16:21:09.978626850Z" level=info msg="StartContainer for \"f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347\""
Sep 4 16:21:09.980063 containerd[1583]: time="2025-09-04T16:21:09.980035913Z" level=info msg="connecting to shim f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347" address="unix:///run/containerd/s/db05b8684b3d9bba28bd033abd2832fb57133ad27cd8740af237e2702c9a5e30" protocol=ttrpc version=3
Sep 4 16:21:10.008913 systemd[1]: Started cri-containerd-f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347.scope - libcontainer container f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347.
Sep 4 16:21:10.053089 systemd[1]: cri-containerd-f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347.scope: Deactivated successfully.
Sep 4 16:21:10.054808 containerd[1583]: time="2025-09-04T16:21:10.054679950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347\" id:\"f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347\" pid:3257 exited_at:{seconds:1757002870 nanos:54375325}"
Sep 4 16:21:10.071781 containerd[1583]: time="2025-09-04T16:21:10.071442423Z" level=info msg="received exit event container_id:\"f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347\" id:\"f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347\" pid:3257 exited_at:{seconds:1757002870 nanos:54375325}"
Sep 4 16:21:10.073735 containerd[1583]: time="2025-09-04T16:21:10.073711479Z" level=info msg="StartContainer for \"f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347\" returns successfully"
Sep 4 16:21:10.101592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347-rootfs.mount: Deactivated successfully.
Sep 4 16:21:10.305004 containerd[1583]: time="2025-09-04T16:21:10.304938012Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:21:10.305683 containerd[1583]: time="2025-09-04T16:21:10.305613979Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 4 16:21:10.306914 containerd[1583]: time="2025-09-04T16:21:10.306878999Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:21:10.307948 containerd[1583]: time="2025-09-04T16:21:10.307903424Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.26340157s"
Sep 4 16:21:10.307992 containerd[1583]: time="2025-09-04T16:21:10.307948348Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 4 16:21:10.309844 containerd[1583]: time="2025-09-04T16:21:10.309816828Z" level=info msg="CreateContainer within sandbox \"75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 16:21:10.316531 containerd[1583]: time="2025-09-04T16:21:10.316485805Z" level=info msg="Container d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:21:10.325747 containerd[1583]: time="2025-09-04T16:21:10.324134192Z" level=info msg="CreateContainer within sandbox \"75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\""
Sep 4 16:21:10.327074 containerd[1583]: time="2025-09-04T16:21:10.327027017Z" level=info msg="StartContainer for \"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\""
Sep 4 16:21:10.328041 containerd[1583]: time="2025-09-04T16:21:10.328001047Z" level=info msg="connecting to shim d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329" address="unix:///run/containerd/s/465d74a511309092c56337e05c70b6c33187fb479513e3d70dbf52087e8bba2c" protocol=ttrpc version=3
Sep 4 16:21:10.349822 systemd[1]: Started cri-containerd-d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329.scope - libcontainer container d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329.
Sep 4 16:21:10.382049 containerd[1583]: time="2025-09-04T16:21:10.381988055Z" level=info msg="StartContainer for \"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\" returns successfully"
Sep 4 16:21:10.818548 kubelet[2719]: E0904 16:21:10.818503 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:10.821693 kubelet[2719]: E0904 16:21:10.821649 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:10.823977 containerd[1583]: time="2025-09-04T16:21:10.823821921Z" level=info msg="CreateContainer within sandbox \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 16:21:10.836480 kubelet[2719]: I0904 16:21:10.836390 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-tg58q" podStartSLOduration=1.5263537619999998 podStartE2EDuration="12.831635571s" podCreationTimestamp="2025-09-04 16:20:58 +0000 UTC" firstStartedPulling="2025-09-04 16:20:59.003445982 +0000 UTC m=+7.331248684" lastFinishedPulling="2025-09-04 16:21:10.308727781 +0000 UTC m=+18.636530493" observedRunningTime="2025-09-04 16:21:10.831341325 +0000 UTC m=+19.159144057" watchObservedRunningTime="2025-09-04 16:21:10.831635571 +0000 UTC m=+19.159438293"
Sep 4 16:21:10.934226 containerd[1583]: time="2025-09-04T16:21:10.934169902Z" level=info msg="Container 5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:21:10.943498 containerd[1583]: time="2025-09-04T16:21:10.943445763Z" level=info msg="CreateContainer within sandbox \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47\""
Sep 4 16:21:10.945413 containerd[1583]: time="2025-09-04T16:21:10.945373254Z" level=info msg="StartContainer for \"5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47\""
Sep 4 16:21:10.948592 containerd[1583]: time="2025-09-04T16:21:10.948556377Z" level=info msg="connecting to shim 5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47" address="unix:///run/containerd/s/db05b8684b3d9bba28bd033abd2832fb57133ad27cd8740af237e2702c9a5e30" protocol=ttrpc version=3
Sep 4 16:21:10.980551 systemd[1]: Started cri-containerd-5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47.scope - libcontainer container 5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47.
Sep 4 16:21:11.036172 systemd[1]: cri-containerd-5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47.scope: Deactivated successfully.
Sep 4 16:21:11.039382 containerd[1583]: time="2025-09-04T16:21:11.039337205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47\" id:\"5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47\" pid:3342 exited_at:{seconds:1757002871 nanos:38797025}"
Sep 4 16:21:11.041729 containerd[1583]: time="2025-09-04T16:21:11.040241742Z" level=info msg="received exit event container_id:\"5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47\" id:\"5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47\" pid:3342 exited_at:{seconds:1757002871 nanos:38797025}"
Sep 4 16:21:11.043041 containerd[1583]: time="2025-09-04T16:21:11.042947151Z" level=info msg="StartContainer for \"5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47\" returns successfully"
Sep 4 16:21:11.068901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47-rootfs.mount: Deactivated successfully.
Sep 4 16:21:11.826244 kubelet[2719]: E0904 16:21:11.826206 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:11.826244 kubelet[2719]: E0904 16:21:11.826233 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:11.828258 containerd[1583]: time="2025-09-04T16:21:11.828217159Z" level=info msg="CreateContainer within sandbox \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 16:21:11.844482 containerd[1583]: time="2025-09-04T16:21:11.843785564Z" level=info msg="Container 7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:21:11.854096 containerd[1583]: time="2025-09-04T16:21:11.854052289Z" level=info msg="CreateContainer within sandbox \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\""
Sep 4 16:21:11.854796 containerd[1583]: time="2025-09-04T16:21:11.854725811Z" level=info msg="StartContainer for \"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\""
Sep 4 16:21:11.856031 containerd[1583]: time="2025-09-04T16:21:11.855949952Z" level=info msg="connecting to shim 7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5" address="unix:///run/containerd/s/db05b8684b3d9bba28bd033abd2832fb57133ad27cd8740af237e2702c9a5e30" protocol=ttrpc version=3
Sep 4 16:21:11.885845 systemd[1]: Started cri-containerd-7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5.scope - libcontainer container 7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5.
Sep 4 16:21:11.925826 containerd[1583]: time="2025-09-04T16:21:11.925773275Z" level=info msg="StartContainer for \"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\" returns successfully"
Sep 4 16:21:12.003814 containerd[1583]: time="2025-09-04T16:21:12.003771048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\" id:\"e87b99664380e3208cac379dd367466b86686dc181e8f20211df706f1ebdf17a\" pid:3412 exited_at:{seconds:1757002872 nanos:3397023}"
Sep 4 16:21:12.040415 kubelet[2719]: I0904 16:21:12.040387 2719 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 16:21:12.081194 systemd[1]: Created slice kubepods-burstable-podf37a0db3_05b3_40cb_9e23_c5d75b9d8665.slice - libcontainer container kubepods-burstable-podf37a0db3_05b3_40cb_9e23_c5d75b9d8665.slice.
Sep 4 16:21:12.084067 kubelet[2719]: I0904 16:21:12.083845 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzwtb\" (UniqueName: \"kubernetes.io/projected/f37a0db3-05b3-40cb-9e23-c5d75b9d8665-kube-api-access-rzwtb\") pod \"coredns-668d6bf9bc-qz2vx\" (UID: \"f37a0db3-05b3-40cb-9e23-c5d75b9d8665\") " pod="kube-system/coredns-668d6bf9bc-qz2vx"
Sep 4 16:21:12.084067 kubelet[2719]: I0904 16:21:12.083887 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f37a0db3-05b3-40cb-9e23-c5d75b9d8665-config-volume\") pod \"coredns-668d6bf9bc-qz2vx\" (UID: \"f37a0db3-05b3-40cb-9e23-c5d75b9d8665\") " pod="kube-system/coredns-668d6bf9bc-qz2vx"
Sep 4 16:21:12.086547 systemd[1]: Created slice kubepods-burstable-podf8caf929_16a8_408f_9579_8437fab91122.slice - libcontainer container kubepods-burstable-podf8caf929_16a8_408f_9579_8437fab91122.slice.
Sep 4 16:21:12.184290 kubelet[2719]: I0904 16:21:12.184215 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt42h\" (UniqueName: \"kubernetes.io/projected/f8caf929-16a8-408f-9579-8437fab91122-kube-api-access-xt42h\") pod \"coredns-668d6bf9bc-btflx\" (UID: \"f8caf929-16a8-408f-9579-8437fab91122\") " pod="kube-system/coredns-668d6bf9bc-btflx"
Sep 4 16:21:12.184290 kubelet[2719]: I0904 16:21:12.184275 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8caf929-16a8-408f-9579-8437fab91122-config-volume\") pod \"coredns-668d6bf9bc-btflx\" (UID: \"f8caf929-16a8-408f-9579-8437fab91122\") " pod="kube-system/coredns-668d6bf9bc-btflx"
Sep 4 16:21:12.391561 kubelet[2719]: E0904 16:21:12.391253 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:12.391561 kubelet[2719]: E0904 16:21:12.391380 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:12.397867 containerd[1583]: time="2025-09-04T16:21:12.397708103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qz2vx,Uid:f37a0db3-05b3-40cb-9e23-c5d75b9d8665,Namespace:kube-system,Attempt:0,}"
Sep 4 16:21:12.398885 containerd[1583]: time="2025-09-04T16:21:12.398857533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-btflx,Uid:f8caf929-16a8-408f-9579-8437fab91122,Namespace:kube-system,Attempt:0,}"
Sep 4 16:21:12.833065 kubelet[2719]: E0904 16:21:12.833009 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:13.834967 kubelet[2719]: E0904 16:21:13.834917 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:14.138030 systemd-networkd[1476]: cilium_host: Link UP
Sep 4 16:21:14.138275 systemd-networkd[1476]: cilium_net: Link UP
Sep 4 16:21:14.138528 systemd-networkd[1476]: cilium_net: Gained carrier
Sep 4 16:21:14.138792 systemd-networkd[1476]: cilium_host: Gained carrier
Sep 4 16:21:14.247494 systemd-networkd[1476]: cilium_vxlan: Link UP
Sep 4 16:21:14.247505 systemd-networkd[1476]: cilium_vxlan: Gained carrier
Sep 4 16:21:14.459706 kernel: NET: Registered PF_ALG protocol family
Sep 4 16:21:14.749923 systemd-networkd[1476]: cilium_net: Gained IPv6LL
Sep 4 16:21:14.752154 systemd-networkd[1476]: cilium_host: Gained IPv6LL
Sep 4 16:21:14.836510 kubelet[2719]: E0904 16:21:14.836462 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:15.115078 systemd-networkd[1476]: lxc_health: Link UP
Sep 4 16:21:15.130163 systemd-networkd[1476]: lxc_health: Gained carrier
Sep 4 16:21:15.513386 systemd-networkd[1476]: lxc0679edcb8b12: Link UP
Sep 4 16:21:15.513937 kernel: eth0: renamed from tmpc5cb4
Sep 4 16:21:15.529701 kernel: eth0: renamed from tmpc4d4e
Sep 4 16:21:15.533052 systemd-networkd[1476]: lxc0679edcb8b12: Gained carrier
Sep 4 16:21:15.533785 systemd-networkd[1476]: lxc3fcdf412ba82: Link UP
Sep 4 16:21:15.535338 systemd-networkd[1476]: lxc3fcdf412ba82: Gained carrier
Sep 4 16:21:15.839130 kubelet[2719]: E0904 16:21:15.839099 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:16.093936 systemd-networkd[1476]: cilium_vxlan: Gained IPv6LL
Sep 4 16:21:16.477935 systemd-networkd[1476]: lxc_health: Gained IPv6LL
Sep 4 16:21:16.734050 systemd-networkd[1476]: lxc3fcdf412ba82: Gained IPv6LL
Sep 4 16:21:16.814181 kubelet[2719]: I0904 16:21:16.814095 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c855k" podStartSLOduration=9.696433979 podStartE2EDuration="18.8140692s" podCreationTimestamp="2025-09-04 16:20:58 +0000 UTC" firstStartedPulling="2025-09-04 16:20:58.926716288 +0000 UTC m=+7.254519001" lastFinishedPulling="2025-09-04 16:21:08.04435151 +0000 UTC m=+16.372154222" observedRunningTime="2025-09-04 16:21:12.847838732 +0000 UTC m=+21.175641444" watchObservedRunningTime="2025-09-04 16:21:16.8140692 +0000 UTC m=+25.141871912"
Sep 4 16:21:16.842098 kubelet[2719]: E0904 16:21:16.841997 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:17.118046 systemd-networkd[1476]: lxc0679edcb8b12: Gained IPv6LL
Sep 4 16:21:17.843820 kubelet[2719]: E0904 16:21:17.843770 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:19.076619 containerd[1583]: time="2025-09-04T16:21:19.076527612Z" level=info msg="connecting to shim c4d4e112d5b888954a4a28ecc9210e8d38c436fa1afb99653c4645ced323c632" address="unix:///run/containerd/s/766a9fcc936142ad818119d73be2c914c68edac073b63bebaa237eba00f8f873" namespace=k8s.io protocol=ttrpc version=3
Sep 4 16:21:19.077683 containerd[1583]: time="2025-09-04T16:21:19.077572960Z" level=info msg="connecting to shim c5cb4758ba70e7e5a96227e113c4abdbcbaa68cda63b61d4f18773b814fed1c8" address="unix:///run/containerd/s/597cdf75db4788d90002f940a115e2a380e24afced77f8ad5ca41a75136051a0" namespace=k8s.io protocol=ttrpc version=3
Sep 4 16:21:19.112793 systemd[1]: Started cri-containerd-c5cb4758ba70e7e5a96227e113c4abdbcbaa68cda63b61d4f18773b814fed1c8.scope - libcontainer container c5cb4758ba70e7e5a96227e113c4abdbcbaa68cda63b61d4f18773b814fed1c8.
Sep 4 16:21:19.116331 systemd[1]: Started cri-containerd-c4d4e112d5b888954a4a28ecc9210e8d38c436fa1afb99653c4645ced323c632.scope - libcontainer container c4d4e112d5b888954a4a28ecc9210e8d38c436fa1afb99653c4645ced323c632.
Sep 4 16:21:19.130365 systemd-resolved[1270]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 16:21:19.133693 systemd-resolved[1270]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 16:21:19.167109 containerd[1583]: time="2025-09-04T16:21:19.167057095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-btflx,Uid:f8caf929-16a8-408f-9579-8437fab91122,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5cb4758ba70e7e5a96227e113c4abdbcbaa68cda63b61d4f18773b814fed1c8\""
Sep 4 16:21:19.171279 containerd[1583]: time="2025-09-04T16:21:19.171233519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qz2vx,Uid:f37a0db3-05b3-40cb-9e23-c5d75b9d8665,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4d4e112d5b888954a4a28ecc9210e8d38c436fa1afb99653c4645ced323c632\""
Sep 4 16:21:19.173208 kubelet[2719]: E0904 16:21:19.173082 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:19.175714 containerd[1583]: time="2025-09-04T16:21:19.175686073Z" level=info msg="CreateContainer within sandbox \"c4d4e112d5b888954a4a28ecc9210e8d38c436fa1afb99653c4645ced323c632\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 16:21:19.180235 kubelet[2719]: E0904 16:21:19.180216 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:19.181739 containerd[1583]: time="2025-09-04T16:21:19.181700267Z" level=info msg="CreateContainer within sandbox \"c5cb4758ba70e7e5a96227e113c4abdbcbaa68cda63b61d4f18773b814fed1c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 16:21:19.707808 containerd[1583]: time="2025-09-04T16:21:19.707753351Z" level=info msg="Container 9dd8424a66eed6e122b92202a69595216db302acc0c8791cc3b4782c9104befc: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:21:19.709934 containerd[1583]: time="2025-09-04T16:21:19.709875787Z" level=info msg="Container 8b6e224597b4d5ac6e66e08d68db1146a816e752fc84a1e3a2fb0114c68f2242: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:21:19.714311 containerd[1583]: time="2025-09-04T16:21:19.714273057Z" level=info msg="CreateContainer within sandbox \"c4d4e112d5b888954a4a28ecc9210e8d38c436fa1afb99653c4645ced323c632\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9dd8424a66eed6e122b92202a69595216db302acc0c8791cc3b4782c9104befc\""
Sep 4 16:21:19.715300 containerd[1583]: time="2025-09-04T16:21:19.714866755Z" level=info msg="StartContainer for \"9dd8424a66eed6e122b92202a69595216db302acc0c8791cc3b4782c9104befc\""
Sep 4 16:21:19.719283 containerd[1583]: time="2025-09-04T16:21:19.719236103Z" level=info msg="CreateContainer within sandbox \"c5cb4758ba70e7e5a96227e113c4abdbcbaa68cda63b61d4f18773b814fed1c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b6e224597b4d5ac6e66e08d68db1146a816e752fc84a1e3a2fb0114c68f2242\""
Sep 4 16:21:19.719982 containerd[1583]: time="2025-09-04T16:21:19.719946239Z" level=info msg="StartContainer for \"8b6e224597b4d5ac6e66e08d68db1146a816e752fc84a1e3a2fb0114c68f2242\""
Sep 4 16:21:19.722387 containerd[1583]: time="2025-09-04T16:21:19.722356548Z" level=info msg="connecting to shim 8b6e224597b4d5ac6e66e08d68db1146a816e752fc84a1e3a2fb0114c68f2242" address="unix:///run/containerd/s/597cdf75db4788d90002f940a115e2a380e24afced77f8ad5ca41a75136051a0" protocol=ttrpc version=3
Sep 4 16:21:19.732015 containerd[1583]: time="2025-09-04T16:21:19.731944742Z" level=info msg="connecting to shim 9dd8424a66eed6e122b92202a69595216db302acc0c8791cc3b4782c9104befc" address="unix:///run/containerd/s/766a9fcc936142ad818119d73be2c914c68edac073b63bebaa237eba00f8f873" protocol=ttrpc version=3
Sep 4 16:21:19.745811 systemd[1]: Started cri-containerd-8b6e224597b4d5ac6e66e08d68db1146a816e752fc84a1e3a2fb0114c68f2242.scope - libcontainer container 8b6e224597b4d5ac6e66e08d68db1146a816e752fc84a1e3a2fb0114c68f2242.
Sep 4 16:21:19.748801 systemd[1]: Started cri-containerd-9dd8424a66eed6e122b92202a69595216db302acc0c8791cc3b4782c9104befc.scope - libcontainer container 9dd8424a66eed6e122b92202a69595216db302acc0c8791cc3b4782c9104befc.
Sep 4 16:21:19.778166 containerd[1583]: time="2025-09-04T16:21:19.778125328Z" level=info msg="StartContainer for \"8b6e224597b4d5ac6e66e08d68db1146a816e752fc84a1e3a2fb0114c68f2242\" returns successfully"
Sep 4 16:21:19.786910 containerd[1583]: time="2025-09-04T16:21:19.786866096Z" level=info msg="StartContainer for \"9dd8424a66eed6e122b92202a69595216db302acc0c8791cc3b4782c9104befc\" returns successfully"
Sep 4 16:21:19.857198 kubelet[2719]: E0904 16:21:19.857142 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:19.863368 kubelet[2719]: E0904 16:21:19.863334 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:21:19.882453 kubelet[2719]: I0904 16:21:19.882384 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-btflx" podStartSLOduration=21.882367531 podStartE2EDuration="21.882367531s" podCreationTimestamp="2025-09-04 16:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:21:19.880797054 +0000 UTC m=+28.208599766" watchObservedRunningTime="2025-09-04 16:21:19.882367531 +0000 UTC m=+28.210170243"
Sep 4 16:21:20.598962 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:58434.service - OpenSSH per-connection server daemon (10.0.0.1:58434).
Sep 4 16:21:20.668276 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 58434 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:21:20.670385 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:21:20.676251 systemd-logind[1558]: New session 8 of user core.
Sep 4 16:21:20.690835 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 16:21:20.836961 sshd[4068]: Connection closed by 10.0.0.1 port 58434
Sep 4 16:21:20.837285 sshd-session[4065]: pam_unix(sshd:session): session closed for user core
Sep 4 16:21:20.841908 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:58434.service: Deactivated successfully.
Sep 4 16:21:20.843964 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 16:21:20.845430 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit.
Sep 4 16:21:20.846243 systemd-logind[1558]: Removed session 8.
Sep 4 16:21:20.865489 kubelet[2719]: E0904 16:21:20.865298 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:21:20.866936 kubelet[2719]: E0904 16:21:20.865547 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:21:21.275903 kubelet[2719]: I0904 16:21:21.275637 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qz2vx" podStartSLOduration=23.275610577 podStartE2EDuration="23.275610577s" podCreationTimestamp="2025-09-04 16:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:21:19.894329555 +0000 UTC m=+28.222132257" watchObservedRunningTime="2025-09-04 16:21:21.275610577 +0000 UTC m=+29.603413289" Sep 4 16:21:21.867772 kubelet[2719]: E0904 16:21:21.867732 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:21:21.867772 kubelet[2719]: E0904 16:21:21.867753 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:21:25.855076 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:58436.service - OpenSSH per-connection server daemon (10.0.0.1:58436). Sep 4 16:21:25.911038 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 58436 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:25.912899 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:25.917769 systemd-logind[1558]: New session 9 of user core. 
Sep 4 16:21:25.927910 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 16:21:26.060797 sshd[4093]: Connection closed by 10.0.0.1 port 58436 Sep 4 16:21:26.061169 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:26.066543 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:58436.service: Deactivated successfully. Sep 4 16:21:26.069002 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 16:21:26.069923 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit. Sep 4 16:21:26.071389 systemd-logind[1558]: Removed session 9. Sep 4 16:21:31.077969 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:50000.service - OpenSSH per-connection server daemon (10.0.0.1:50000). Sep 4 16:21:31.138112 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 50000 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:31.139868 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:31.144531 systemd-logind[1558]: New session 10 of user core. Sep 4 16:21:31.151812 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 16:21:31.430580 sshd[4113]: Connection closed by 10.0.0.1 port 50000 Sep 4 16:21:31.430879 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:31.435904 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:50000.service: Deactivated successfully. Sep 4 16:21:31.438312 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 16:21:31.439246 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit. Sep 4 16:21:31.440607 systemd-logind[1558]: Removed session 10. Sep 4 16:21:36.447854 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:50012.service - OpenSSH per-connection server daemon (10.0.0.1:50012). 
Sep 4 16:21:36.548179 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 50012 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:36.550078 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:36.555000 systemd-logind[1558]: New session 11 of user core. Sep 4 16:21:36.564859 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 16:21:36.679672 sshd[4130]: Connection closed by 10.0.0.1 port 50012 Sep 4 16:21:36.680153 sshd-session[4127]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:36.685460 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:50012.service: Deactivated successfully. Sep 4 16:21:36.687502 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 16:21:36.688358 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit. Sep 4 16:21:36.689689 systemd-logind[1558]: Removed session 11. Sep 4 16:21:41.694128 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:42170.service - OpenSSH per-connection server daemon (10.0.0.1:42170). Sep 4 16:21:41.747265 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 42170 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:41.748706 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:41.753310 systemd-logind[1558]: New session 12 of user core. Sep 4 16:21:41.763815 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 16:21:41.882400 sshd[4148]: Connection closed by 10.0.0.1 port 42170 Sep 4 16:21:41.882883 sshd-session[4145]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:41.893403 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:42170.service: Deactivated successfully. Sep 4 16:21:41.895891 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 16:21:41.896728 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit. 
Sep 4 16:21:41.900954 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:42186.service - OpenSSH per-connection server daemon (10.0.0.1:42186). Sep 4 16:21:41.901740 systemd-logind[1558]: Removed session 12. Sep 4 16:21:41.962347 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 42186 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:41.964106 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:41.968581 systemd-logind[1558]: New session 13 of user core. Sep 4 16:21:41.974779 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 16:21:42.127956 sshd[4166]: Connection closed by 10.0.0.1 port 42186 Sep 4 16:21:42.128576 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:42.139959 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:42186.service: Deactivated successfully. Sep 4 16:21:42.142989 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 16:21:42.146095 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit. Sep 4 16:21:42.150530 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:42194.service - OpenSSH per-connection server daemon (10.0.0.1:42194). Sep 4 16:21:42.151589 systemd-logind[1558]: Removed session 13. Sep 4 16:21:42.210755 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 42194 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:42.212605 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:42.217362 systemd-logind[1558]: New session 14 of user core. Sep 4 16:21:42.226828 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 16:21:42.338691 sshd[4182]: Connection closed by 10.0.0.1 port 42194 Sep 4 16:21:42.339032 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:42.344153 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:42194.service: Deactivated successfully. 
Sep 4 16:21:42.346141 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 16:21:42.347018 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit. Sep 4 16:21:42.348119 systemd-logind[1558]: Removed session 14. Sep 4 16:21:47.354825 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:42208.service - OpenSSH per-connection server daemon (10.0.0.1:42208). Sep 4 16:21:47.415043 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 42208 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:47.416816 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:47.421658 systemd-logind[1558]: New session 15 of user core. Sep 4 16:21:47.434985 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 16:21:47.547873 sshd[4199]: Connection closed by 10.0.0.1 port 42208 Sep 4 16:21:47.548197 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:47.553242 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:42208.service: Deactivated successfully. Sep 4 16:21:47.556102 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 16:21:47.557175 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit. Sep 4 16:21:47.558760 systemd-logind[1558]: Removed session 15. Sep 4 16:21:52.560695 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:58428.service - OpenSSH per-connection server daemon (10.0.0.1:58428). Sep 4 16:21:52.616798 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 58428 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:52.618272 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:52.622822 systemd-logind[1558]: New session 16 of user core. Sep 4 16:21:52.638805 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 4 16:21:52.755863 sshd[4218]: Connection closed by 10.0.0.1 port 58428 Sep 4 16:21:52.756215 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:52.760472 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:58428.service: Deactivated successfully. Sep 4 16:21:52.762278 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 16:21:52.763132 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit. Sep 4 16:21:52.764127 systemd-logind[1558]: Removed session 16. Sep 4 16:21:57.769122 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:58438.service - OpenSSH per-connection server daemon (10.0.0.1:58438). Sep 4 16:21:57.820313 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 58438 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:57.821782 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:57.826106 systemd-logind[1558]: New session 17 of user core. Sep 4 16:21:57.836776 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 16:21:57.943355 sshd[4234]: Connection closed by 10.0.0.1 port 58438 Sep 4 16:21:57.943743 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:57.952382 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:58438.service: Deactivated successfully. Sep 4 16:21:57.954130 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 16:21:57.954856 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit. Sep 4 16:21:57.958201 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:58446.service - OpenSSH per-connection server daemon (10.0.0.1:58446). Sep 4 16:21:57.958805 systemd-logind[1558]: Removed session 17. 
Sep 4 16:21:58.013930 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 58446 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:58.015213 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:58.019338 systemd-logind[1558]: New session 18 of user core. Sep 4 16:21:58.025784 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 16:21:58.300355 sshd[4250]: Connection closed by 10.0.0.1 port 58446 Sep 4 16:21:58.300858 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:58.309457 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:58446.service: Deactivated successfully. Sep 4 16:21:58.311289 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 16:21:58.312150 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit. Sep 4 16:21:58.314879 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:58448.service - OpenSSH per-connection server daemon (10.0.0.1:58448). Sep 4 16:21:58.315494 systemd-logind[1558]: Removed session 18. Sep 4 16:21:58.367070 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 58448 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:58.368253 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:58.372508 systemd-logind[1558]: New session 19 of user core. Sep 4 16:21:58.383780 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 16:21:59.128548 sshd[4265]: Connection closed by 10.0.0.1 port 58448 Sep 4 16:21:59.129946 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:59.141148 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:58448.service: Deactivated successfully. Sep 4 16:21:59.143698 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 16:21:59.145336 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit. 
Sep 4 16:21:59.149206 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:58450.service - OpenSSH per-connection server daemon (10.0.0.1:58450). Sep 4 16:21:59.149839 systemd-logind[1558]: Removed session 19. Sep 4 16:21:59.198315 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 58450 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:59.200309 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:59.204465 systemd-logind[1558]: New session 20 of user core. Sep 4 16:21:59.213803 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 16:21:59.425310 sshd[4295]: Connection closed by 10.0.0.1 port 58450 Sep 4 16:21:59.425611 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:59.437505 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:58450.service: Deactivated successfully. Sep 4 16:21:59.439428 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 16:21:59.440295 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit. Sep 4 16:21:59.442333 systemd-logind[1558]: Removed session 20. Sep 4 16:21:59.443514 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:58460.service - OpenSSH per-connection server daemon (10.0.0.1:58460). Sep 4 16:21:59.495969 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 58460 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:21:59.497395 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:21:59.501517 systemd-logind[1558]: New session 21 of user core. Sep 4 16:21:59.510796 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 16:21:59.633074 sshd[4309]: Connection closed by 10.0.0.1 port 58460 Sep 4 16:21:59.633459 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Sep 4 16:21:59.638429 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:58460.service: Deactivated successfully. 
Sep 4 16:21:59.640501 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 16:21:59.641221 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit. Sep 4 16:21:59.642318 systemd-logind[1558]: Removed session 21. Sep 4 16:22:04.649129 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:53076.service - OpenSSH per-connection server daemon (10.0.0.1:53076). Sep 4 16:22:04.699954 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 53076 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:22:04.701342 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:22:04.705455 systemd-logind[1558]: New session 22 of user core. Sep 4 16:22:04.714779 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 16:22:04.819732 sshd[4326]: Connection closed by 10.0.0.1 port 53076 Sep 4 16:22:04.820071 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Sep 4 16:22:04.823602 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:53076.service: Deactivated successfully. Sep 4 16:22:04.825430 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 16:22:04.826730 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit. Sep 4 16:22:04.827789 systemd-logind[1558]: Removed session 22. Sep 4 16:22:05.760857 kubelet[2719]: E0904 16:22:05.760792 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:22:09.832540 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:53084.service - OpenSSH per-connection server daemon (10.0.0.1:53084). 
Sep 4 16:22:09.886304 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 53084 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:22:09.887504 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:22:09.891450 systemd-logind[1558]: New session 23 of user core. Sep 4 16:22:09.900784 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 16:22:10.005837 sshd[4344]: Connection closed by 10.0.0.1 port 53084 Sep 4 16:22:10.006152 sshd-session[4341]: pam_unix(sshd:session): session closed for user core Sep 4 16:22:10.010697 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:53084.service: Deactivated successfully. Sep 4 16:22:10.012555 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 16:22:10.013276 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit. Sep 4 16:22:10.014398 systemd-logind[1558]: Removed session 23. Sep 4 16:22:15.020606 systemd[1]: Started sshd@23-10.0.0.50:22-10.0.0.1:42876.service - OpenSSH per-connection server daemon (10.0.0.1:42876). Sep 4 16:22:15.071515 sshd[4357]: Accepted publickey for core from 10.0.0.1 port 42876 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:22:15.072947 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:22:15.077929 systemd-logind[1558]: New session 24 of user core. Sep 4 16:22:15.087814 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 16:22:15.202070 sshd[4360]: Connection closed by 10.0.0.1 port 42876 Sep 4 16:22:15.202429 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Sep 4 16:22:15.207296 systemd[1]: sshd@23-10.0.0.50:22-10.0.0.1:42876.service: Deactivated successfully. Sep 4 16:22:15.209473 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 16:22:15.210248 systemd-logind[1558]: Session 24 logged out. Waiting for processes to exit. 
Sep 4 16:22:15.211503 systemd-logind[1558]: Removed session 24. Sep 4 16:22:16.760885 kubelet[2719]: E0904 16:22:16.760811 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:22:20.219565 systemd[1]: Started sshd@24-10.0.0.50:22-10.0.0.1:43002.service - OpenSSH per-connection server daemon (10.0.0.1:43002). Sep 4 16:22:20.266532 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 43002 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:22:20.268078 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:22:20.272121 systemd-logind[1558]: New session 25 of user core. Sep 4 16:22:20.282794 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 16:22:20.392651 sshd[4377]: Connection closed by 10.0.0.1 port 43002 Sep 4 16:22:20.393051 sshd-session[4374]: pam_unix(sshd:session): session closed for user core Sep 4 16:22:20.406508 systemd[1]: sshd@24-10.0.0.50:22-10.0.0.1:43002.service: Deactivated successfully. Sep 4 16:22:20.408285 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 16:22:20.409022 systemd-logind[1558]: Session 25 logged out. Waiting for processes to exit. Sep 4 16:22:20.411328 systemd[1]: Started sshd@25-10.0.0.50:22-10.0.0.1:43004.service - OpenSSH per-connection server daemon (10.0.0.1:43004). Sep 4 16:22:20.412354 systemd-logind[1558]: Removed session 25. Sep 4 16:22:20.459608 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 43004 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:22:20.460905 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:22:20.465536 systemd-logind[1558]: New session 26 of user core. Sep 4 16:22:20.479799 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 4 16:22:21.121906 update_engine[1560]: I20250904 16:22:21.121825 1560 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 4 16:22:21.121906 update_engine[1560]: I20250904 16:22:21.121891 1560 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 4 16:22:21.122397 update_engine[1560]: I20250904 16:22:21.122215 1560 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 4 16:22:21.122862 update_engine[1560]: I20250904 16:22:21.122833 1560 omaha_request_params.cc:62] Current group set to developer Sep 4 16:22:21.123048 update_engine[1560]: I20250904 16:22:21.123008 1560 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 4 16:22:21.123048 update_engine[1560]: I20250904 16:22:21.123025 1560 update_attempter.cc:643] Scheduling an action processor start. Sep 4 16:22:21.123048 update_engine[1560]: I20250904 16:22:21.123044 1560 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 16:22:21.123300 update_engine[1560]: I20250904 16:22:21.123109 1560 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 4 16:22:21.123300 update_engine[1560]: I20250904 16:22:21.123199 1560 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 16:22:21.123300 update_engine[1560]: I20250904 16:22:21.123211 1560 omaha_request_action.cc:272] Request: Sep 4 16:22:21.123300 update_engine[1560]: Sep 4 16:22:21.123300 update_engine[1560]: Sep 4 16:22:21.123300 update_engine[1560]: Sep 4 16:22:21.123300 update_engine[1560]: Sep 4 16:22:21.123300 update_engine[1560]: Sep 4 16:22:21.123300 update_engine[1560]: Sep 4 16:22:21.123300 update_engine[1560]: Sep 4 16:22:21.123300 update_engine[1560]: Sep 4 16:22:21.123300 update_engine[1560]: I20250904 16:22:21.123224 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 16:22:21.127389 locksmithd[1615]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 4 16:22:21.128358 update_engine[1560]: I20250904 16:22:21.128302 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 16:22:21.130524 update_engine[1560]: I20250904 16:22:21.129930 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 16:22:21.138922 update_engine[1560]: E20250904 16:22:21.138847 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 16:22:21.138990 update_engine[1560]: I20250904 16:22:21.138962 1560 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 4 16:22:21.760592 kubelet[2719]: E0904 16:22:21.760543 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:22:21.842853 containerd[1583]: time="2025-09-04T16:22:21.842788635Z" level=info msg="StopContainer for \"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\" with timeout 30 (s)" Sep 4 16:22:21.843745 containerd[1583]: time="2025-09-04T16:22:21.843699791Z" level=info msg="Stop container \"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\" with signal terminated" Sep 4 16:22:21.860206 systemd[1]: cri-containerd-d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329.scope: Deactivated successfully. 
Sep 4 16:22:21.863603 containerd[1583]: time="2025-09-04T16:22:21.863544641Z" level=info msg="received exit event container_id:\"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\" id:\"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\" pid:3305 exited_at:{seconds:1757002941 nanos:862997158}" Sep 4 16:22:21.863747 containerd[1583]: time="2025-09-04T16:22:21.863570982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\" id:\"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\" pid:3305 exited_at:{seconds:1757002941 nanos:862997158}" Sep 4 16:22:21.863747 containerd[1583]: time="2025-09-04T16:22:21.863708844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\" id:\"54c6b90953f90d9e992f7f4f4791ae85d790f72c72c93148ef79d77ddcfa95a5\" pid:4415 exited_at:{seconds:1757002941 nanos:863258185}" Sep 4 16:22:21.863796 containerd[1583]: time="2025-09-04T16:22:21.863757326Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 16:22:21.866173 containerd[1583]: time="2025-09-04T16:22:21.866138846Z" level=info msg="StopContainer for \"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\" with timeout 2 (s)" Sep 4 16:22:21.867271 containerd[1583]: time="2025-09-04T16:22:21.866635564Z" level=info msg="Stop container \"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\" with signal terminated" Sep 4 16:22:21.875232 systemd-networkd[1476]: lxc_health: Link DOWN Sep 4 16:22:21.875257 systemd-networkd[1476]: lxc_health: Lost carrier Sep 4 16:22:21.892685 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329-rootfs.mount: Deactivated successfully. Sep 4 16:22:21.899126 systemd[1]: cri-containerd-7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5.scope: Deactivated successfully. Sep 4 16:22:21.899656 systemd[1]: cri-containerd-7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5.scope: Consumed 6.741s CPU time, 122.8M memory peak, 232K read from disk, 13.3M written to disk. Sep 4 16:22:21.900632 containerd[1583]: time="2025-09-04T16:22:21.900271081Z" level=info msg="received exit event container_id:\"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\" id:\"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\" pid:3381 exited_at:{seconds:1757002941 nanos:899982702}" Sep 4 16:22:21.900769 containerd[1583]: time="2025-09-04T16:22:21.900716010Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\" id:\"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\" pid:3381 exited_at:{seconds:1757002941 nanos:899982702}" Sep 4 16:22:21.914749 containerd[1583]: time="2025-09-04T16:22:21.914700598Z" level=info msg="StopContainer for \"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\" returns successfully" Sep 4 16:22:21.917412 containerd[1583]: time="2025-09-04T16:22:21.917371780Z" level=info msg="StopPodSandbox for \"75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338\"" Sep 4 16:22:21.917485 containerd[1583]: time="2025-09-04T16:22:21.917467703Z" level=info msg="Container to stop \"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 16:22:21.920853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5-rootfs.mount: Deactivated 
successfully.
Sep 4 16:22:21.924545 systemd[1]: cri-containerd-75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338.scope: Deactivated successfully.
Sep 4 16:22:21.929265 containerd[1583]: time="2025-09-04T16:22:21.929204176Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338\" id:\"75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338\" pid:2925 exit_status:137 exited_at:{seconds:1757002941 nanos:928907280}"
Sep 4 16:22:21.933890 containerd[1583]: time="2025-09-04T16:22:21.933834673Z" level=info msg="StopContainer for \"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\" returns successfully"
Sep 4 16:22:21.934508 containerd[1583]: time="2025-09-04T16:22:21.934489301Z" level=info msg="StopPodSandbox for \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\""
Sep 4 16:22:21.934718 containerd[1583]: time="2025-09-04T16:22:21.934621813Z" level=info msg="Container to stop \"300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 16:22:21.934718 containerd[1583]: time="2025-09-04T16:22:21.934655588Z" level=info msg="Container to stop \"f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 16:22:21.934718 containerd[1583]: time="2025-09-04T16:22:21.934715522Z" level=info msg="Container to stop \"5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 16:22:21.934718 containerd[1583]: time="2025-09-04T16:22:21.934724168Z" level=info msg="Container to stop \"911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 16:22:21.934898 containerd[1583]: time="2025-09-04T16:22:21.934732695Z" level=info msg="Container to stop \"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 16:22:21.941712 systemd[1]: cri-containerd-6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026.scope: Deactivated successfully.
Sep 4 16:22:21.959973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338-rootfs.mount: Deactivated successfully.
Sep 4 16:22:21.965342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026-rootfs.mount: Deactivated successfully.
Sep 4 16:22:22.034944 containerd[1583]: time="2025-09-04T16:22:22.034709653Z" level=info msg="shim disconnected" id=6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026 namespace=k8s.io
Sep 4 16:22:22.034944 containerd[1583]: time="2025-09-04T16:22:22.034746553Z" level=warning msg="cleaning up after shim disconnected" id=6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026 namespace=k8s.io
Sep 4 16:22:22.050674 containerd[1583]: time="2025-09-04T16:22:22.034755199Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 16:22:22.050748 containerd[1583]: time="2025-09-04T16:22:22.037812005Z" level=info msg="shim disconnected" id=75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338 namespace=k8s.io
Sep 4 16:22:22.050748 containerd[1583]: time="2025-09-04T16:22:22.050731533Z" level=warning msg="cleaning up after shim disconnected" id=75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338 namespace=k8s.io
Sep 4 16:22:22.050806 containerd[1583]: time="2025-09-04T16:22:22.050739137Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 16:22:22.072449 containerd[1583]: time="2025-09-04T16:22:22.072387389Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" id:\"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" pid:2864 exit_status:137 exited_at:{seconds:1757002941 nanos:942496364}"
Sep 4 16:22:22.073646 containerd[1583]: time="2025-09-04T16:22:22.072653286Z" level=info msg="TearDown network for sandbox \"75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338\" successfully"
Sep 4 16:22:22.073646 containerd[1583]: time="2025-09-04T16:22:22.073605540Z" level=info msg="StopPodSandbox for \"75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338\" returns successfully"
Sep 4 16:22:22.074557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338-shm.mount: Deactivated successfully.
Sep 4 16:22:22.074703 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026-shm.mount: Deactivated successfully.
Sep 4 16:22:22.078725 containerd[1583]: time="2025-09-04T16:22:22.078561583Z" level=info msg="received exit event sandbox_id:\"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" exit_status:137 exited_at:{seconds:1757002941 nanos:942496364}"
Sep 4 16:22:22.078725 containerd[1583]: time="2025-09-04T16:22:22.078690319Z" level=info msg="received exit event sandbox_id:\"75f39208453a7913da33d8d44503a562c43e2fde53e45b258a0e6417a8ba4338\" exit_status:137 exited_at:{seconds:1757002941 nanos:928907280}"
Sep 4 16:22:22.079505 containerd[1583]: time="2025-09-04T16:22:22.079472961Z" level=info msg="TearDown network for sandbox \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" successfully"
Sep 4 16:22:22.079505 containerd[1583]: time="2025-09-04T16:22:22.079502156Z" level=info msg="StopPodSandbox for \"6c925016b80471edefc7ab3cd2aeb661bb4f00c0f4fe01bc7e5278c832a0f026\" returns successfully"
Sep 4 16:22:22.226687 kubelet[2719]: I0904 16:22:22.226643 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-etc-cni-netd\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.226687 kubelet[2719]: I0904 16:22:22.226704 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d35702a-8372-4170-a8a7-0a3606772f13-hubble-tls\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.226908 kubelet[2719]: I0904 16:22:22.226722 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-host-proc-sys-kernel\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.226908 kubelet[2719]: I0904 16:22:22.226738 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-cni-path\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.226908 kubelet[2719]: I0904 16:22:22.226754 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-hostproc\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.226908 kubelet[2719]: I0904 16:22:22.226771 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-xtables-lock\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.226908 kubelet[2719]: I0904 16:22:22.226790 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f4f0f13-693c-4bfb-bc08-0461c93591e0-cilium-config-path\") pod \"0f4f0f13-693c-4bfb-bc08-0461c93591e0\" (UID: \"0f4f0f13-693c-4bfb-bc08-0461c93591e0\") "
Sep 4 16:22:22.226908 kubelet[2719]: I0904 16:22:22.226804 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-lib-modules\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.227044 kubelet[2719]: I0904 16:22:22.226823 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg9vn\" (UniqueName: \"kubernetes.io/projected/4d35702a-8372-4170-a8a7-0a3606772f13-kube-api-access-dg9vn\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.227044 kubelet[2719]: I0904 16:22:22.226838 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-cilium-run\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.227044 kubelet[2719]: I0904 16:22:22.226853 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-cilium-cgroup\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.227044 kubelet[2719]: I0904 16:22:22.226871 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d35702a-8372-4170-a8a7-0a3606772f13-clustermesh-secrets\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.227044 kubelet[2719]: I0904 16:22:22.226889 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-host-proc-sys-net\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.227044 kubelet[2719]: I0904 16:22:22.226901 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-bpf-maps\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.227193 kubelet[2719]: I0904 16:22:22.226916 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6nbg\" (UniqueName: \"kubernetes.io/projected/0f4f0f13-693c-4bfb-bc08-0461c93591e0-kube-api-access-z6nbg\") pod \"0f4f0f13-693c-4bfb-bc08-0461c93591e0\" (UID: \"0f4f0f13-693c-4bfb-bc08-0461c93591e0\") "
Sep 4 16:22:22.227193 kubelet[2719]: I0904 16:22:22.226932 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d35702a-8372-4170-a8a7-0a3606772f13-cilium-config-path\") pod \"4d35702a-8372-4170-a8a7-0a3606772f13\" (UID: \"4d35702a-8372-4170-a8a7-0a3606772f13\") "
Sep 4 16:22:22.229680 kubelet[2719]: I0904 16:22:22.226795 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 16:22:22.229680 kubelet[2719]: I0904 16:22:22.226832 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-hostproc" (OuterVolumeSpecName: "hostproc") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 16:22:22.229680 kubelet[2719]: I0904 16:22:22.226843 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 16:22:22.229680 kubelet[2719]: I0904 16:22:22.226854 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-cni-path" (OuterVolumeSpecName: "cni-path") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 16:22:22.229680 kubelet[2719]: I0904 16:22:22.226873 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 16:22:22.229860 kubelet[2719]: I0904 16:22:22.226884 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 16:22:22.229860 kubelet[2719]: I0904 16:22:22.227330 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 16:22:22.230017 kubelet[2719]: I0904 16:22:22.229987 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f4f0f13-693c-4bfb-bc08-0461c93591e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0f4f0f13-693c-4bfb-bc08-0461c93591e0" (UID: "0f4f0f13-693c-4bfb-bc08-0461c93591e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 16:22:22.230094 kubelet[2719]: I0904 16:22:22.230045 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 16:22:22.230138 kubelet[2719]: I0904 16:22:22.230092 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 16:22:22.230138 kubelet[2719]: I0904 16:22:22.230069 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d35702a-8372-4170-a8a7-0a3606772f13-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 16:22:22.230138 kubelet[2719]: I0904 16:22:22.230126 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 16:22:22.231207 kubelet[2719]: I0904 16:22:22.231182 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d35702a-8372-4170-a8a7-0a3606772f13-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 16:22:22.231349 kubelet[2719]: I0904 16:22:22.231325 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d35702a-8372-4170-a8a7-0a3606772f13-kube-api-access-dg9vn" (OuterVolumeSpecName: "kube-api-access-dg9vn") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "kube-api-access-dg9vn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 16:22:22.232748 kubelet[2719]: I0904 16:22:22.232710 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d35702a-8372-4170-a8a7-0a3606772f13-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4d35702a-8372-4170-a8a7-0a3606772f13" (UID: "4d35702a-8372-4170-a8a7-0a3606772f13"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 4 16:22:22.233130 kubelet[2719]: I0904 16:22:22.233096 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4f0f13-693c-4bfb-bc08-0461c93591e0-kube-api-access-z6nbg" (OuterVolumeSpecName: "kube-api-access-z6nbg") pod "0f4f0f13-693c-4bfb-bc08-0461c93591e0" (UID: "0f4f0f13-693c-4bfb-bc08-0461c93591e0"). InnerVolumeSpecName "kube-api-access-z6nbg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 16:22:22.327792 kubelet[2719]: I0904 16:22:22.327745 2719 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.327792 kubelet[2719]: I0904 16:22:22.327780 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.327792 kubelet[2719]: I0904 16:22:22.327788 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.327914 kubelet[2719]: I0904 16:22:22.327798 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dg9vn\" (UniqueName: \"kubernetes.io/projected/4d35702a-8372-4170-a8a7-0a3606772f13-kube-api-access-dg9vn\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.327914 kubelet[2719]: I0904 16:22:22.327810 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z6nbg\" (UniqueName: \"kubernetes.io/projected/0f4f0f13-693c-4bfb-bc08-0461c93591e0-kube-api-access-z6nbg\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.327914 kubelet[2719]: I0904 16:22:22.327818 2719 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d35702a-8372-4170-a8a7-0a3606772f13-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.327914 kubelet[2719]: I0904 16:22:22.327826 2719 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.327914 kubelet[2719]: I0904 16:22:22.327836 2719 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.327914 kubelet[2719]: I0904 16:22:22.327852 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d35702a-8372-4170-a8a7-0a3606772f13-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.327914 kubelet[2719]: I0904 16:22:22.327864 2719 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.327914 kubelet[2719]: I0904 16:22:22.327871 2719 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d35702a-8372-4170-a8a7-0a3606772f13-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.328094 kubelet[2719]: I0904 16:22:22.327879 2719 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.328094 kubelet[2719]: I0904 16:22:22.327886 2719 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.328094 kubelet[2719]: I0904 16:22:22.327893 2719 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.328094 kubelet[2719]: I0904 16:22:22.327900 2719 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d35702a-8372-4170-a8a7-0a3606772f13-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.328094 kubelet[2719]: I0904 16:22:22.327908 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f4f0f13-693c-4bfb-bc08-0461c93591e0-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 4 16:22:22.760046 kubelet[2719]: E0904 16:22:22.759907 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:22.892682 systemd[1]: var-lib-kubelet-pods-0f4f0f13\x2d693c\x2d4bfb\x2dbc08\x2d0461c93591e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz6nbg.mount: Deactivated successfully.
Sep 4 16:22:22.892797 systemd[1]: var-lib-kubelet-pods-4d35702a\x2d8372\x2d4170\x2da8a7\x2d0a3606772f13-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddg9vn.mount: Deactivated successfully.
Sep 4 16:22:22.892875 systemd[1]: var-lib-kubelet-pods-4d35702a\x2d8372\x2d4170\x2da8a7\x2d0a3606772f13-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 4 16:22:22.892952 systemd[1]: var-lib-kubelet-pods-4d35702a\x2d8372\x2d4170\x2da8a7\x2d0a3606772f13-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 4 16:22:22.998880 kubelet[2719]: I0904 16:22:22.998824 2719 scope.go:117] "RemoveContainer" containerID="d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329"
Sep 4 16:22:23.001276 containerd[1583]: time="2025-09-04T16:22:23.001222899Z" level=info msg="RemoveContainer for \"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\""
Sep 4 16:22:23.010003 systemd[1]: Removed slice kubepods-besteffort-pod0f4f0f13_693c_4bfb_bc08_0461c93591e0.slice - libcontainer container kubepods-besteffort-pod0f4f0f13_693c_4bfb_bc08_0461c93591e0.slice.
Sep 4 16:22:23.010347 containerd[1583]: time="2025-09-04T16:22:23.010233944Z" level=info msg="RemoveContainer for \"d6e2c4366079b940211fe408be83436420dd0c2e1272d421b936cc9ed640b329\" returns successfully"
Sep 4 16:22:23.012776 systemd[1]: Removed slice kubepods-burstable-pod4d35702a_8372_4170_a8a7_0a3606772f13.slice - libcontainer container kubepods-burstable-pod4d35702a_8372_4170_a8a7_0a3606772f13.slice.
Sep 4 16:22:23.012918 systemd[1]: kubepods-burstable-pod4d35702a_8372_4170_a8a7_0a3606772f13.slice: Consumed 6.856s CPU time, 123.2M memory peak, 240K read from disk, 13.3M written to disk.
Sep 4 16:22:23.013945 kubelet[2719]: I0904 16:22:23.013764 2719 scope.go:117] "RemoveContainer" containerID="7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5"
Sep 4 16:22:23.016200 containerd[1583]: time="2025-09-04T16:22:23.016153860Z" level=info msg="RemoveContainer for \"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\""
Sep 4 16:22:23.021298 containerd[1583]: time="2025-09-04T16:22:23.021247933Z" level=info msg="RemoveContainer for \"7aaf99bd6fe20a4baaf4ea79845aa81d690a700ce0410adff27f1f80e5af5aa5\" returns successfully"
Sep 4 16:22:23.021481 kubelet[2719]: I0904 16:22:23.021447 2719 scope.go:117] "RemoveContainer" containerID="5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47"
Sep 4 16:22:23.023063 containerd[1583]: time="2025-09-04T16:22:23.023032433Z" level=info msg="RemoveContainer for \"5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47\""
Sep 4 16:22:23.027613 containerd[1583]: time="2025-09-04T16:22:23.027582870Z" level=info msg="RemoveContainer for \"5b905e5bb04a3150362bd74534899ca2774fc85c6c91859a2772fd45fddf0b47\" returns successfully"
Sep 4 16:22:23.027845 kubelet[2719]: I0904 16:22:23.027772 2719 scope.go:117] "RemoveContainer" containerID="f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347"
Sep 4 16:22:23.030270 containerd[1583]: time="2025-09-04T16:22:23.030113180Z" level=info msg="RemoveContainer for \"f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347\""
Sep 4 16:22:23.035507 containerd[1583]: time="2025-09-04T16:22:23.035243082Z" level=info msg="RemoveContainer for \"f64c5813371f19931dd9406b2b3328bbde28580fed2f63ab7986a28e16a14347\" returns successfully"
Sep 4 16:22:23.039694 kubelet[2719]: I0904 16:22:23.038047 2719 scope.go:117] "RemoveContainer" containerID="300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90"
Sep 4 16:22:23.040610 containerd[1583]: time="2025-09-04T16:22:23.040585518Z" level=info msg="RemoveContainer for \"300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90\""
Sep 4 16:22:23.047420 containerd[1583]: time="2025-09-04T16:22:23.047365332Z" level=info msg="RemoveContainer for \"300ea8652d8060053c443412c1f8a4d73054223bf5bd123785740bbcf2b53c90\" returns successfully"
Sep 4 16:22:23.047649 kubelet[2719]: I0904 16:22:23.047619 2719 scope.go:117] "RemoveContainer" containerID="911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5"
Sep 4 16:22:23.048938 containerd[1583]: time="2025-09-04T16:22:23.048907169Z" level=info msg="RemoveContainer for \"911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5\""
Sep 4 16:22:23.063775 containerd[1583]: time="2025-09-04T16:22:23.063620145Z" level=info msg="RemoveContainer for \"911c0932eb8978265edc240ef0bf7db27308f9ae34cb58524ff1c204423fc3c5\" returns successfully"
Sep 4 16:22:23.763045 kubelet[2719]: I0904 16:22:23.762988 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f4f0f13-693c-4bfb-bc08-0461c93591e0" path="/var/lib/kubelet/pods/0f4f0f13-693c-4bfb-bc08-0461c93591e0/volumes"
Sep 4 16:22:23.763554 kubelet[2719]: I0904 16:22:23.763523 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d35702a-8372-4170-a8a7-0a3606772f13" path="/var/lib/kubelet/pods/4d35702a-8372-4170-a8a7-0a3606772f13/volumes"
Sep 4 16:22:23.778969 sshd[4394]: Connection closed by 10.0.0.1 port 43004
Sep 4 16:22:23.779611 sshd-session[4391]: pam_unix(sshd:session): session closed for user core
Sep 4 16:22:23.788516 systemd[1]: sshd@25-10.0.0.50:22-10.0.0.1:43004.service: Deactivated successfully.
Sep 4 16:22:23.790442 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 16:22:23.791204 systemd-logind[1558]: Session 26 logged out. Waiting for processes to exit.
Sep 4 16:22:23.794065 systemd[1]: Started sshd@26-10.0.0.50:22-10.0.0.1:43014.service - OpenSSH per-connection server daemon (10.0.0.1:43014).
Sep 4 16:22:23.794817 systemd-logind[1558]: Removed session 26.
Sep 4 16:22:23.847373 sshd[4548]: Accepted publickey for core from 10.0.0.1 port 43014 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:22:23.848903 sshd-session[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:22:23.853900 systemd-logind[1558]: New session 27 of user core.
Sep 4 16:22:23.864944 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 16:22:24.630532 sshd[4551]: Connection closed by 10.0.0.1 port 43014
Sep 4 16:22:24.632058 sshd-session[4548]: pam_unix(sshd:session): session closed for user core
Sep 4 16:22:24.641080 systemd[1]: sshd@26-10.0.0.50:22-10.0.0.1:43014.service: Deactivated successfully.
Sep 4 16:22:24.644570 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 16:22:24.647472 systemd-logind[1558]: Session 27 logged out. Waiting for processes to exit.
Sep 4 16:22:24.649634 kubelet[2719]: I0904 16:22:24.649592 2719 memory_manager.go:355] "RemoveStaleState removing state" podUID="0f4f0f13-693c-4bfb-bc08-0461c93591e0" containerName="cilium-operator"
Sep 4 16:22:24.649634 kubelet[2719]: I0904 16:22:24.649627 2719 memory_manager.go:355] "RemoveStaleState removing state" podUID="4d35702a-8372-4170-a8a7-0a3606772f13" containerName="cilium-agent"
Sep 4 16:22:24.650638 systemd[1]: Started sshd@27-10.0.0.50:22-10.0.0.1:43024.service - OpenSSH per-connection server daemon (10.0.0.1:43024).
Sep 4 16:22:24.653735 systemd-logind[1558]: Removed session 27.
Sep 4 16:22:24.670023 systemd[1]: Created slice kubepods-burstable-pod64dd3afb_5fd0_46f2_87b4_c05f5a923a1c.slice - libcontainer container kubepods-burstable-pod64dd3afb_5fd0_46f2_87b4_c05f5a923a1c.slice.
Sep 4 16:22:24.717286 sshd[4563]: Accepted publickey for core from 10.0.0.1 port 43024 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:22:24.719044 sshd-session[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:22:24.723569 systemd-logind[1558]: New session 28 of user core.
Sep 4 16:22:24.731802 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 16:22:24.743266 kubelet[2719]: I0904 16:22:24.743206 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-clustermesh-secrets\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743266 kubelet[2719]: I0904 16:22:24.743250 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-cilium-config-path\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743391 kubelet[2719]: I0904 16:22:24.743283 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-lib-modules\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743391 kubelet[2719]: I0904 16:22:24.743360 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-xtables-lock\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743391 kubelet[2719]: I0904 16:22:24.743380 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-cilium-ipsec-secrets\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743455 kubelet[2719]: I0904 16:22:24.743439 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-cni-path\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743478 kubelet[2719]: I0904 16:22:24.743459 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-bpf-maps\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743528 kubelet[2719]: I0904 16:22:24.743510 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-host-proc-sys-net\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743553 kubelet[2719]: I0904 16:22:24.743530 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-cilium-cgroup\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743594 kubelet[2719]: I0904 16:22:24.743548 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-866gl\" (UniqueName: \"kubernetes.io/projected/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-kube-api-access-866gl\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743625 kubelet[2719]: I0904 16:22:24.743596 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-etc-cni-netd\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743625 kubelet[2719]: I0904 16:22:24.743611 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-cilium-run\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743717 kubelet[2719]: I0904 16:22:24.743651 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-hostproc\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743741 kubelet[2719]: I0904 16:22:24.743722 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-host-proc-sys-kernel\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.743782 kubelet[2719]: I0904 16:22:24.743767 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/64dd3afb-5fd0-46f2-87b4-c05f5a923a1c-hubble-tls\") pod \"cilium-sslnm\" (UID: \"64dd3afb-5fd0-46f2-87b4-c05f5a923a1c\") " pod="kube-system/cilium-sslnm"
Sep 4 16:22:24.760749 kubelet[2719]: E0904 16:22:24.760710 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:24.782990 sshd[4566]: Connection closed by 10.0.0.1 port 43024
Sep 4 16:22:24.783372 sshd-session[4563]: pam_unix(sshd:session): session closed for user core
Sep 4 16:22:24.793499 systemd[1]: sshd@27-10.0.0.50:22-10.0.0.1:43024.service: Deactivated successfully.
Sep 4 16:22:24.795569 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 16:22:24.796512 systemd-logind[1558]: Session 28 logged out. Waiting for processes to exit.
Sep 4 16:22:24.799498 systemd[1]: Started sshd@28-10.0.0.50:22-10.0.0.1:43034.service - OpenSSH per-connection server daemon (10.0.0.1:43034).
Sep 4 16:22:24.800164 systemd-logind[1558]: Removed session 28.
Sep 4 16:22:24.876578 sshd[4576]: Accepted publickey for core from 10.0.0.1 port 43034 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:22:24.878335 sshd-session[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:22:24.884139 systemd-logind[1558]: New session 29 of user core.
Sep 4 16:22:24.894857 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 4 16:22:24.977420 kubelet[2719]: E0904 16:22:24.977111 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:24.979412 containerd[1583]: time="2025-09-04T16:22:24.979377739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sslnm,Uid:64dd3afb-5fd0-46f2-87b4-c05f5a923a1c,Namespace:kube-system,Attempt:0,}"
Sep 4 16:22:25.002581 containerd[1583]: time="2025-09-04T16:22:25.002502379Z" level=info msg="connecting to shim b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d" address="unix:///run/containerd/s/b07eeb80871569df45c7029a0c64afe8a8f3a30b0cc930c48da6c36065cac6e0" namespace=k8s.io protocol=ttrpc version=3
Sep 4 16:22:25.038857 systemd[1]: Started cri-containerd-b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d.scope - libcontainer container b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d.
Sep 4 16:22:25.069206 containerd[1583]: time="2025-09-04T16:22:25.069147184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sslnm,Uid:64dd3afb-5fd0-46f2-87b4-c05f5a923a1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d\""
Sep 4 16:22:25.070037 kubelet[2719]: E0904 16:22:25.070009 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:25.072341 containerd[1583]: time="2025-09-04T16:22:25.072276560Z" level=info msg="CreateContainer within sandbox \"b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 16:22:25.079913 containerd[1583]: time="2025-09-04T16:22:25.079865425Z" level=info msg="Container 43ed1657c478f76a7c0e501b33388e300b20a909a57a2e4b4ff99452ac954bc1: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:22:25.095863 containerd[1583]: time="2025-09-04T16:22:25.095786961Z" level=info msg="CreateContainer within sandbox \"b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"43ed1657c478f76a7c0e501b33388e300b20a909a57a2e4b4ff99452ac954bc1\""
Sep 4 16:22:25.096498 containerd[1583]: time="2025-09-04T16:22:25.096448720Z" level=info msg="StartContainer for \"43ed1657c478f76a7c0e501b33388e300b20a909a57a2e4b4ff99452ac954bc1\""
Sep 4 16:22:25.097644 containerd[1583]: time="2025-09-04T16:22:25.097618387Z" level=info msg="connecting to shim 43ed1657c478f76a7c0e501b33388e300b20a909a57a2e4b4ff99452ac954bc1" address="unix:///run/containerd/s/b07eeb80871569df45c7029a0c64afe8a8f3a30b0cc930c48da6c36065cac6e0" protocol=ttrpc version=3
Sep 4 16:22:25.124872 systemd[1]: Started cri-containerd-43ed1657c478f76a7c0e501b33388e300b20a909a57a2e4b4ff99452ac954bc1.scope - libcontainer container 43ed1657c478f76a7c0e501b33388e300b20a909a57a2e4b4ff99452ac954bc1.
Sep 4 16:22:25.158707 containerd[1583]: time="2025-09-04T16:22:25.158556790Z" level=info msg="StartContainer for \"43ed1657c478f76a7c0e501b33388e300b20a909a57a2e4b4ff99452ac954bc1\" returns successfully"
Sep 4 16:22:25.169300 systemd[1]: cri-containerd-43ed1657c478f76a7c0e501b33388e300b20a909a57a2e4b4ff99452ac954bc1.scope: Deactivated successfully.
Sep 4 16:22:25.171760 containerd[1583]: time="2025-09-04T16:22:25.171722199Z" level=info msg="received exit event container_id:\"43ed1657c478f76a7c0e501b33388e300b20a909a57a2e4b4ff99452ac954bc1\" id:\"43ed1657c478f76a7c0e501b33388e300b20a909a57a2e4b4ff99452ac954bc1\" pid:4648 exited_at:{seconds:1757002945 nanos:171414453}"
Sep 4 16:22:25.172047 containerd[1583]: time="2025-09-04T16:22:25.172002803Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43ed1657c478f76a7c0e501b33388e300b20a909a57a2e4b4ff99452ac954bc1\" id:\"43ed1657c478f76a7c0e501b33388e300b20a909a57a2e4b4ff99452ac954bc1\" pid:4648 exited_at:{seconds:1757002945 nanos:171414453}"
Sep 4 16:22:26.016737 kubelet[2719]: E0904 16:22:26.016688 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:26.019182 containerd[1583]: time="2025-09-04T16:22:26.018375002Z" level=info msg="CreateContainer within sandbox \"b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 16:22:26.048301 containerd[1583]: time="2025-09-04T16:22:26.048221793Z" level=info msg="Container 4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:22:26.051859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202336455.mount: Deactivated successfully.
Sep 4 16:22:26.055559 containerd[1583]: time="2025-09-04T16:22:26.055519600Z" level=info msg="CreateContainer within sandbox \"b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482\""
Sep 4 16:22:26.056058 containerd[1583]: time="2025-09-04T16:22:26.056030632Z" level=info msg="StartContainer for \"4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482\""
Sep 4 16:22:26.056840 containerd[1583]: time="2025-09-04T16:22:26.056797311Z" level=info msg="connecting to shim 4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482" address="unix:///run/containerd/s/b07eeb80871569df45c7029a0c64afe8a8f3a30b0cc930c48da6c36065cac6e0" protocol=ttrpc version=3
Sep 4 16:22:26.076847 systemd[1]: Started cri-containerd-4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482.scope - libcontainer container 4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482.
Sep 4 16:22:26.108640 containerd[1583]: time="2025-09-04T16:22:26.108576827Z" level=info msg="StartContainer for \"4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482\" returns successfully"
Sep 4 16:22:26.113606 systemd[1]: cri-containerd-4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482.scope: Deactivated successfully.
Sep 4 16:22:26.115856 containerd[1583]: time="2025-09-04T16:22:26.115828425Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482\" id:\"4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482\" pid:4693 exited_at:{seconds:1757002946 nanos:115516451}"
Sep 4 16:22:26.115856 containerd[1583]: time="2025-09-04T16:22:26.115835358Z" level=info msg="received exit event container_id:\"4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482\" id:\"4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482\" pid:4693 exited_at:{seconds:1757002946 nanos:115516451}"
Sep 4 16:22:26.142589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f0cc59918a7379d43d0fe250ea69d7deac7ca4c8312f7a8471b13003f017482-rootfs.mount: Deactivated successfully.
Sep 4 16:22:26.817984 kubelet[2719]: E0904 16:22:26.817931 2719 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 16:22:27.021242 kubelet[2719]: E0904 16:22:27.021191 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:27.025411 containerd[1583]: time="2025-09-04T16:22:27.025306725Z" level=info msg="CreateContainer within sandbox \"b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 16:22:27.044644 containerd[1583]: time="2025-09-04T16:22:27.044517988Z" level=info msg="Container 60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:22:27.055068 containerd[1583]: time="2025-09-04T16:22:27.055001870Z" level=info msg="CreateContainer within sandbox \"b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198\""
Sep 4 16:22:27.055733 containerd[1583]: time="2025-09-04T16:22:27.055687294Z" level=info msg="StartContainer for \"60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198\""
Sep 4 16:22:27.057315 containerd[1583]: time="2025-09-04T16:22:27.057284302Z" level=info msg="connecting to shim 60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198" address="unix:///run/containerd/s/b07eeb80871569df45c7029a0c64afe8a8f3a30b0cc930c48da6c36065cac6e0" protocol=ttrpc version=3
Sep 4 16:22:27.083902 systemd[1]: Started cri-containerd-60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198.scope - libcontainer container 60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198.
Sep 4 16:22:27.238949 systemd[1]: cri-containerd-60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198.scope: Deactivated successfully.
Sep 4 16:22:27.240960 containerd[1583]: time="2025-09-04T16:22:27.240901911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198\" id:\"60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198\" pid:4737 exited_at:{seconds:1757002947 nanos:240450653}"
Sep 4 16:22:27.245578 containerd[1583]: time="2025-09-04T16:22:27.245492212Z" level=info msg="received exit event container_id:\"60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198\" id:\"60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198\" pid:4737 exited_at:{seconds:1757002947 nanos:240450653}"
Sep 4 16:22:27.247578 containerd[1583]: time="2025-09-04T16:22:27.247536241Z" level=info msg="StartContainer for \"60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198\" returns successfully"
Sep 4 16:22:27.271550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60d31c8e65c741108d8a03d5defffa8fda5771d23373735c62e19e078eb81198-rootfs.mount: Deactivated successfully.
Sep 4 16:22:28.025209 kubelet[2719]: E0904 16:22:28.025178 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:28.027682 containerd[1583]: time="2025-09-04T16:22:28.027359251Z" level=info msg="CreateContainer within sandbox \"b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 16:22:28.036270 containerd[1583]: time="2025-09-04T16:22:28.036207747Z" level=info msg="Container 98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:22:28.044683 containerd[1583]: time="2025-09-04T16:22:28.044626044Z" level=info msg="CreateContainer within sandbox \"b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387\""
Sep 4 16:22:28.045299 containerd[1583]: time="2025-09-04T16:22:28.045275930Z" level=info msg="StartContainer for \"98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387\""
Sep 4 16:22:28.048442 containerd[1583]: time="2025-09-04T16:22:28.048396234Z" level=info msg="connecting to shim 98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387" address="unix:///run/containerd/s/b07eeb80871569df45c7029a0c64afe8a8f3a30b0cc930c48da6c36065cac6e0" protocol=ttrpc version=3
Sep 4 16:22:28.070846 systemd[1]: Started cri-containerd-98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387.scope - libcontainer container 98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387.
Sep 4 16:22:28.095933 systemd[1]: cri-containerd-98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387.scope: Deactivated successfully.
Sep 4 16:22:28.096772 containerd[1583]: time="2025-09-04T16:22:28.096733920Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387\" id:\"98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387\" pid:4775 exited_at:{seconds:1757002948 nanos:96035682}"
Sep 4 16:22:28.097682 containerd[1583]: time="2025-09-04T16:22:28.097450633Z" level=info msg="received exit event container_id:\"98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387\" id:\"98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387\" pid:4775 exited_at:{seconds:1757002948 nanos:96035682}"
Sep 4 16:22:28.104548 containerd[1583]: time="2025-09-04T16:22:28.104499266Z" level=info msg="StartContainer for \"98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387\" returns successfully"
Sep 4 16:22:28.119209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98ea2c36960cc709df5c87a6436b1b5a517d0db30c2b157ed1a2a7d20a627387-rootfs.mount: Deactivated successfully.
Sep 4 16:22:29.029600 kubelet[2719]: E0904 16:22:29.029567 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:29.031264 containerd[1583]: time="2025-09-04T16:22:29.031205291Z" level=info msg="CreateContainer within sandbox \"b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 16:22:29.041594 containerd[1583]: time="2025-09-04T16:22:29.041529317Z" level=info msg="Container 96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:22:29.049992 containerd[1583]: time="2025-09-04T16:22:29.049931429Z" level=info msg="CreateContainer within sandbox \"b802497b10fed107ed62ffcd4968a93142b4f7a3bb56f7682c2d3d4f965dfb3d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb\""
Sep 4 16:22:29.050425 containerd[1583]: time="2025-09-04T16:22:29.050392826Z" level=info msg="StartContainer for \"96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb\""
Sep 4 16:22:29.051379 containerd[1583]: time="2025-09-04T16:22:29.051349445Z" level=info msg="connecting to shim 96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb" address="unix:///run/containerd/s/b07eeb80871569df45c7029a0c64afe8a8f3a30b0cc930c48da6c36065cac6e0" protocol=ttrpc version=3
Sep 4 16:22:29.082826 systemd[1]: Started cri-containerd-96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb.scope - libcontainer container 96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb.
Sep 4 16:22:29.113197 containerd[1583]: time="2025-09-04T16:22:29.113147721Z" level=info msg="StartContainer for \"96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb\" returns successfully"
Sep 4 16:22:29.177917 containerd[1583]: time="2025-09-04T16:22:29.177853282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb\" id:\"c8faf0dfab00d33a779fc2c42e5f32c02374879ba4e3c97427761a86a1f14fca\" pid:4843 exited_at:{seconds:1757002949 nanos:177506403}"
Sep 4 16:22:29.513707 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 4 16:22:30.036620 kubelet[2719]: E0904 16:22:30.036584 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:30.052166 kubelet[2719]: I0904 16:22:30.052079 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sslnm" podStartSLOduration=6.052049698 podStartE2EDuration="6.052049698s" podCreationTimestamp="2025-09-04 16:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:22:30.051051962 +0000 UTC m=+98.378854674" watchObservedRunningTime="2025-09-04 16:22:30.052049698 +0000 UTC m=+98.379852410"
Sep 4 16:22:31.038505 kubelet[2719]: E0904 16:22:31.038470 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:31.122712 update_engine[1560]: I20250904 16:22:31.122030 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 4 16:22:31.122712 update_engine[1560]: I20250904 16:22:31.122211 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 4 16:22:31.122712 update_engine[1560]: I20250904 16:22:31.122617 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 4 16:22:31.133894 update_engine[1560]: E20250904 16:22:31.133817 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 4 16:22:31.133993 update_engine[1560]: I20250904 16:22:31.133952 1560 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 4 16:22:31.233492 containerd[1583]: time="2025-09-04T16:22:31.233438413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb\" id:\"b7aaa928eedc0adeeace7a31c7066a7816cefae1fb638767627e6f51decdaa48\" pid:4986 exit_status:1 exited_at:{seconds:1757002951 nanos:232861467}"
Sep 4 16:22:32.540443 systemd-networkd[1476]: lxc_health: Link UP
Sep 4 16:22:32.543295 systemd-networkd[1476]: lxc_health: Gained carrier
Sep 4 16:22:32.979165 kubelet[2719]: E0904 16:22:32.979116 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:33.042324 kubelet[2719]: E0904 16:22:33.042289 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:33.371353 containerd[1583]: time="2025-09-04T16:22:33.371298782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb\" id:\"fd3ba3e2da233c2ba9bbcc761ad36e2115971a8c633ad2c35b811d136c2595f4\" pid:5379 exited_at:{seconds:1757002953 nanos:370867944}"
Sep 4 16:22:34.044559 kubelet[2719]: E0904 16:22:34.044505 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:22:34.045900 systemd-networkd[1476]: lxc_health: Gained IPv6LL
Sep 4 16:22:35.464179 containerd[1583]: time="2025-09-04T16:22:35.464120419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb\" id:\"4d524ff5d894dee367c7f65078801583915767033e1b106ad04c406bd3a63fa1\" pid:5413 exited_at:{seconds:1757002955 nanos:463345769}"
Sep 4 16:22:37.600288 containerd[1583]: time="2025-09-04T16:22:37.600244522Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb\" id:\"a4feaef698c073948649a96d39ad522859acf0c97675840f2c70fbd1d820f6cf\" pid:5443 exited_at:{seconds:1757002957 nanos:599964221}"
Sep 4 16:22:39.881277 containerd[1583]: time="2025-09-04T16:22:39.881176519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96ec9f12e398e69577e816004f23b296aead4e10ca92b61bada5a4636ac2f2cb\" id:\"f949cf77a74846fbc9730d40df2db3857556892e4fba7ae5ca6a3611902082b3\" pid:5467 exited_at:{seconds:1757002959 nanos:880635454}"
Sep 4 16:22:39.887932 sshd[4583]: Connection closed by 10.0.0.1 port 43034
Sep 4 16:22:39.888391 sshd-session[4576]: pam_unix(sshd:session): session closed for user core
Sep 4 16:22:39.893301 systemd[1]: sshd@28-10.0.0.50:22-10.0.0.1:43034.service: Deactivated successfully.
Sep 4 16:22:39.895349 systemd[1]: session-29.scope: Deactivated successfully.
Sep 4 16:22:39.896396 systemd-logind[1558]: Session 29 logged out. Waiting for processes to exit.
Sep 4 16:22:39.897479 systemd-logind[1558]: Removed session 29.
Sep 4 16:22:41.123039 update_engine[1560]: I20250904 16:22:41.122952 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 4 16:22:41.123506 update_engine[1560]: I20250904 16:22:41.123057 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 4 16:22:41.123506 update_engine[1560]: I20250904 16:22:41.123487 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 4 16:22:41.133351 update_engine[1560]: E20250904 16:22:41.133312 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 4 16:22:41.133420 update_engine[1560]: I20250904 16:22:41.133397 1560 libcurl_http_fetcher.cc:283] No HTTP response, retry 3