Aug 19 08:03:13.896865 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 18 22:19:37 -00 2025 Aug 19 08:03:13.896892 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:03:13.896904 kernel: BIOS-provided physical RAM map: Aug 19 08:03:13.896911 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 19 08:03:13.896917 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Aug 19 08:03:13.896923 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Aug 19 08:03:13.896931 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Aug 19 08:03:13.896938 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Aug 19 08:03:13.896947 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Aug 19 08:03:13.896956 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Aug 19 08:03:13.896963 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Aug 19 08:03:13.896969 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Aug 19 08:03:13.896975 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Aug 19 08:03:13.896982 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Aug 19 08:03:13.896990 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Aug 19 08:03:13.897006 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Aug 19 08:03:13.897017 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Aug 19 08:03:13.897024 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Aug 19 08:03:13.897031 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Aug 19 08:03:13.897038 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Aug 19 08:03:13.897045 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Aug 19 08:03:13.897052 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Aug 19 08:03:13.897059 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Aug 19 08:03:13.897066 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 19 08:03:13.897073 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Aug 19 08:03:13.897082 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 19 08:03:13.897090 kernel: NX (Execute Disable) protection: active Aug 19 08:03:13.897097 kernel: APIC: Static calls initialized Aug 19 08:03:13.897104 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Aug 19 08:03:13.897111 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Aug 19 08:03:13.897118 kernel: extended physical RAM map: Aug 19 08:03:13.897125 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 19 08:03:13.897132 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Aug 19 08:03:13.897139 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Aug 19 08:03:13.897146 kernel: reserve 
setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Aug 19 08:03:13.897153 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Aug 19 08:03:13.897163 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Aug 19 08:03:13.897170 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Aug 19 08:03:13.897177 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Aug 19 08:03:13.897184 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Aug 19 08:03:13.897195 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Aug 19 08:03:13.897202 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Aug 19 08:03:13.897211 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Aug 19 08:03:13.897219 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Aug 19 08:03:13.897226 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Aug 19 08:03:13.897233 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Aug 19 08:03:13.897240 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Aug 19 08:03:13.897248 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Aug 19 08:03:13.897255 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Aug 19 08:03:13.897262 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Aug 19 08:03:13.897270 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Aug 19 08:03:13.897277 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Aug 19 08:03:13.897287 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Aug 19 08:03:13.897294 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Aug 19 08:03:13.897301 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Aug 19 08:03:13.897308 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 19 08:03:13.897316 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Aug 19 08:03:13.897323 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 19 08:03:13.897332 kernel: efi: EFI v2.7 by EDK II Aug 19 08:03:13.897340 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Aug 19 08:03:13.897347 kernel: random: crng init done Aug 19 08:03:13.897356 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Aug 19 08:03:13.897364 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Aug 19 08:03:13.897376 kernel: secureboot: Secure boot disabled Aug 19 08:03:13.897383 kernel: SMBIOS 2.8 present. 
Aug 19 08:03:13.897391 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Aug 19 08:03:13.897399 kernel: DMI: Memory slots populated: 1/1 Aug 19 08:03:13.897406 kernel: Hypervisor detected: KVM Aug 19 08:03:13.897413 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 19 08:03:13.897420 kernel: kvm-clock: using sched offset of 5177361269 cycles Aug 19 08:03:13.897428 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 19 08:03:13.897436 kernel: tsc: Detected 2794.748 MHz processor Aug 19 08:03:13.897443 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 19 08:03:13.897451 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 19 08:03:13.897460 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Aug 19 08:03:13.897468 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Aug 19 08:03:13.897476 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 19 08:03:13.897483 kernel: Using GB pages for direct mapping Aug 19 08:03:13.897491 kernel: ACPI: Early table checksum verification disabled Aug 19 08:03:13.897498 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Aug 19 08:03:13.897506 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Aug 19 08:03:13.897513 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:03:13.897521 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:03:13.897531 kernel: ACPI: FACS 0x000000009CBDD000 000040 Aug 19 08:03:13.897538 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:03:13.897546 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:03:13.897553 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:03:13.897561 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:03:13.897568 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Aug 19 08:03:13.897576 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Aug 19 08:03:13.897583 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Aug 19 08:03:13.897593 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Aug 19 08:03:13.897600 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Aug 19 08:03:13.897608 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Aug 19 08:03:13.897615 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Aug 19 08:03:13.897622 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Aug 19 08:03:13.897630 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Aug 19 08:03:13.897637 kernel: No NUMA configuration found Aug 19 08:03:13.897644 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Aug 19 08:03:13.897652 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Aug 19 08:03:13.897659 kernel: Zone ranges: Aug 19 08:03:13.897669 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 19 08:03:13.897676 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Aug 19 08:03:13.897684 kernel: Normal empty Aug 19 08:03:13.897691 kernel: Device empty Aug 19 08:03:13.897698 kernel: Movable zone start for each node Aug 19 08:03:13.897706 
kernel: Early memory node ranges Aug 19 08:03:13.897713 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Aug 19 08:03:13.897720 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Aug 19 08:03:13.897730 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Aug 19 08:03:13.897740 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Aug 19 08:03:13.897747 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Aug 19 08:03:13.897755 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Aug 19 08:03:13.897762 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Aug 19 08:03:13.897770 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Aug 19 08:03:13.897777 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Aug 19 08:03:13.897797 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 19 08:03:13.897807 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Aug 19 08:03:13.897825 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Aug 19 08:03:13.897833 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 19 08:03:13.897840 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Aug 19 08:03:13.897848 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Aug 19 08:03:13.897856 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Aug 19 08:03:13.897866 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Aug 19 08:03:13.897873 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Aug 19 08:03:13.897881 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 19 08:03:13.897889 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 19 08:03:13.897899 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 19 08:03:13.897906 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 19 08:03:13.897914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 19 08:03:13.897922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 19 08:03:13.897929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 19 08:03:13.897937 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 19 08:03:13.897944 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 19 08:03:13.897952 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 19 08:03:13.897960 kernel: TSC deadline timer available Aug 19 08:03:13.897970 kernel: CPU topo: Max. logical packages: 1 Aug 19 08:03:13.897977 kernel: CPU topo: Max. logical dies: 1 Aug 19 08:03:13.897985 kernel: CPU topo: Max. dies per package: 1 Aug 19 08:03:13.897992 kernel: CPU topo: Max. threads per core: 1 Aug 19 08:03:13.898008 kernel: CPU topo: Num. cores per package: 4 Aug 19 08:03:13.898015 kernel: CPU topo: Num. 
threads per package: 4 Aug 19 08:03:13.898024 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Aug 19 08:03:13.898031 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 19 08:03:13.898039 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 19 08:03:13.898047 kernel: kvm-guest: setup PV sched yield Aug 19 08:03:13.898057 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Aug 19 08:03:13.898064 kernel: Booting paravirtualized kernel on KVM Aug 19 08:03:13.898072 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 19 08:03:13.898080 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Aug 19 08:03:13.898088 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Aug 19 08:03:13.898095 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Aug 19 08:03:13.898103 kernel: pcpu-alloc: [0] 0 1 2 3 Aug 19 08:03:13.898110 kernel: kvm-guest: PV spinlocks enabled Aug 19 08:03:13.898118 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 19 08:03:13.898129 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:03:13.898139 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 19 08:03:13.898147 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 19 08:03:13.898155 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 19 08:03:13.898162 kernel: Fallback order for Node 0: 0 Aug 19 08:03:13.898170 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Aug 19 08:03:13.898178 kernel: Policy zone: DMA32 Aug 19 08:03:13.898185 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 19 08:03:13.898195 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 19 08:03:13.898203 kernel: ftrace: allocating 40101 entries in 157 pages Aug 19 08:03:13.898210 kernel: ftrace: allocated 157 pages with 5 groups Aug 19 08:03:13.898218 kernel: Dynamic Preempt: voluntary Aug 19 08:03:13.898226 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 19 08:03:13.898234 kernel: rcu: RCU event tracing is enabled. Aug 19 08:03:13.898242 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 19 08:03:13.898250 kernel: Trampoline variant of Tasks RCU enabled. Aug 19 08:03:13.898258 kernel: Rude variant of Tasks RCU enabled. Aug 19 08:03:13.898267 kernel: Tracing variant of Tasks RCU enabled. Aug 19 08:03:13.898275 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 19 08:03:13.898285 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 19 08:03:13.898293 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 19 08:03:13.898301 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 19 08:03:13.898308 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Aug 19 08:03:13.898316 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Aug 19 08:03:13.898324 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 19 08:03:13.898331 kernel: Console: colour dummy device 80x25 Aug 19 08:03:13.898342 kernel: printk: legacy console [ttyS0] enabled Aug 19 08:03:13.898350 kernel: ACPI: Core revision 20240827 Aug 19 08:03:13.898358 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 19 08:03:13.898365 kernel: APIC: Switch to symmetric I/O mode setup Aug 19 08:03:13.898373 kernel: x2apic enabled Aug 19 08:03:13.898380 kernel: APIC: Switched APIC routing to: physical x2apic Aug 19 08:03:13.898388 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 19 08:03:13.898396 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 19 08:03:13.898404 kernel: kvm-guest: setup PV IPIs Aug 19 08:03:13.898414 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 19 08:03:13.898421 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Aug 19 08:03:13.898429 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Aug 19 08:03:13.898437 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 19 08:03:13.898445 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 19 08:03:13.898452 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 19 08:03:13.898460 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 19 08:03:13.898468 kernel: Spectre V2 : Mitigation: Retpolines Aug 19 08:03:13.898475 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 19 08:03:13.898485 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Aug 19 08:03:13.898493 kernel: RETBleed: Mitigation: untrained return thunk Aug 19 08:03:13.898501 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 19 08:03:13.898511 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 19 08:03:13.898518 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Aug 19 08:03:13.898527 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 19 08:03:13.898534 kernel: x86/bugs: return thunk changed Aug 19 08:03:13.898542 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 19 08:03:13.898552 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 19 08:03:13.898560 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 19 08:03:13.898567 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 19 08:03:13.898575 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 19 08:03:13.898582 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Aug 19 08:03:13.898590 kernel: Freeing SMP alternatives memory: 32K Aug 19 08:03:13.898598 kernel: pid_max: default: 32768 minimum: 301 Aug 19 08:03:13.898605 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 19 08:03:13.898613 kernel: landlock: Up and running. Aug 19 08:03:13.898623 kernel: SELinux: Initializing. 
Aug 19 08:03:13.898630 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 19 08:03:13.898638 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 19 08:03:13.898646 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Aug 19 08:03:13.898654 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 19 08:03:13.898661 kernel: ... version: 0 Aug 19 08:03:13.898669 kernel: ... bit width: 48 Aug 19 08:03:13.898676 kernel: ... generic registers: 6 Aug 19 08:03:13.898684 kernel: ... value mask: 0000ffffffffffff Aug 19 08:03:13.898694 kernel: ... max period: 00007fffffffffff Aug 19 08:03:13.898702 kernel: ... fixed-purpose events: 0 Aug 19 08:03:13.898709 kernel: ... event mask: 000000000000003f Aug 19 08:03:13.898717 kernel: signal: max sigframe size: 1776 Aug 19 08:03:13.898724 kernel: rcu: Hierarchical SRCU implementation. Aug 19 08:03:13.898732 kernel: rcu: Max phase no-delay instances is 400. Aug 19 08:03:13.898742 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 19 08:03:13.898750 kernel: smp: Bringing up secondary CPUs ... Aug 19 08:03:13.898757 kernel: smpboot: x86: Booting SMP configuration: Aug 19 08:03:13.898767 kernel: .... node #0, CPUs: #1 #2 #3 Aug 19 08:03:13.898774 kernel: smp: Brought up 1 node, 4 CPUs Aug 19 08:03:13.898782 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Aug 19 08:03:13.898858 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54040K init, 2928K bss, 137196K reserved, 0K cma-reserved) Aug 19 08:03:13.898866 kernel: devtmpfs: initialized Aug 19 08:03:13.898874 kernel: x86/mm: Memory block size: 128MB Aug 19 08:03:13.898882 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Aug 19 08:03:13.898890 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Aug 19 08:03:13.898897 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Aug 19 08:03:13.898908 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Aug 19 08:03:13.898916 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Aug 19 08:03:13.898924 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Aug 19 08:03:13.898932 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 19 08:03:13.898939 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 19 08:03:13.898947 kernel: pinctrl core: initialized pinctrl subsystem Aug 19 08:03:13.898955 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 19 08:03:13.898963 kernel: audit: initializing netlink subsys (disabled) Aug 19 08:03:13.898970 kernel: audit: type=2000 audit(1755590589.830:1): state=initialized audit_enabled=0 res=1 Aug 19 08:03:13.898980 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 19 08:03:13.898988 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 19 08:03:13.898996 kernel: cpuidle: using governor menu Aug 19 08:03:13.899011 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 19 08:03:13.899019 kernel: dca service started, version 1.12.1 Aug 19 08:03:13.899027 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Aug 19 08:03:13.899035 kernel: PCI: Using 
configuration type 1 for base access Aug 19 08:03:13.899043 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 19 08:03:13.899050 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 19 08:03:13.899060 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 19 08:03:13.899068 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 19 08:03:13.899075 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 19 08:03:13.899083 kernel: ACPI: Added _OSI(Module Device) Aug 19 08:03:13.899090 kernel: ACPI: Added _OSI(Processor Device) Aug 19 08:03:13.899098 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 19 08:03:13.899106 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 19 08:03:13.899113 kernel: ACPI: Interpreter enabled Aug 19 08:03:13.899121 kernel: ACPI: PM: (supports S0 S3 S5) Aug 19 08:03:13.899130 kernel: ACPI: Using IOAPIC for interrupt routing Aug 19 08:03:13.899138 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 19 08:03:13.899146 kernel: PCI: Using E820 reservations for host bridge windows Aug 19 08:03:13.899154 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 19 08:03:13.899161 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 19 08:03:13.899416 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 19 08:03:13.899549 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 19 08:03:13.899679 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 19 08:03:13.899689 kernel: PCI host bridge to bus 0000:00 Aug 19 08:03:13.899852 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 19 08:03:13.899977 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 19 08:03:13.900104 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 19 08:03:13.900216 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Aug 19 08:03:13.900327 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Aug 19 08:03:13.900443 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Aug 19 08:03:13.900562 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 19 08:03:13.900721 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Aug 19 08:03:13.900881 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Aug 19 08:03:13.901016 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Aug 19 08:03:13.901140 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Aug 19 08:03:13.901263 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Aug 19 08:03:13.901389 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 19 08:03:13.901547 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Aug 19 08:03:13.901674 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Aug 19 08:03:13.901814 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Aug 19 08:03:13.901941 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Aug 19 08:03:13.902092 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Aug 19 08:03:13.902221 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Aug 
19 08:03:13.902344 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Aug 19 08:03:13.902467 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Aug 19 08:03:13.902610 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 19 08:03:13.902735 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Aug 19 08:03:13.902876 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Aug 19 08:03:13.903008 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Aug 19 08:03:13.903149 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Aug 19 08:03:13.903312 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Aug 19 08:03:13.903437 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 19 08:03:13.903585 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Aug 19 08:03:13.903710 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Aug 19 08:03:13.903857 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Aug 19 08:03:13.904008 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Aug 19 08:03:13.904138 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Aug 19 08:03:13.904150 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 19 08:03:13.904158 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 19 08:03:13.904166 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 19 08:03:13.904174 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 19 08:03:13.904457 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 19 08:03:13.904466 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 19 08:03:13.904474 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 19 08:03:13.904487 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 19 08:03:13.904495 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 19 08:03:13.904503 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 19 08:03:13.904511 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 19 08:03:13.904519 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 19 08:03:13.904527 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 19 08:03:13.904535 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 19 08:03:13.904543 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 19 08:03:13.904551 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 19 08:03:13.904561 kernel: iommu: Default domain type: Translated Aug 19 08:03:13.904569 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 19 08:03:13.904577 kernel: efivars: Registered efivars operations Aug 19 08:03:13.904585 kernel: PCI: Using ACPI for IRQ routing Aug 19 08:03:13.904592 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 19 08:03:13.904600 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Aug 19 08:03:13.904608 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Aug 19 08:03:13.904616 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Aug 19 08:03:13.904624 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Aug 19 08:03:13.904633 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Aug 19 08:03:13.904641 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Aug 19 08:03:13.904650 kernel: e820: reserve 
RAM buffer [mem 0x9ce91000-0x9fffffff] Aug 19 08:03:13.904657 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Aug 19 08:03:13.904811 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 19 08:03:13.904939 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 19 08:03:13.905098 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 19 08:03:13.905110 kernel: vgaarb: loaded Aug 19 08:03:13.905121 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 19 08:03:13.905129 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 19 08:03:13.905137 kernel: clocksource: Switched to clocksource kvm-clock Aug 19 08:03:13.905145 kernel: VFS: Disk quotas dquot_6.6.0 Aug 19 08:03:13.905153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 19 08:03:13.905161 kernel: pnp: PnP ACPI init Aug 19 08:03:13.905302 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Aug 19 08:03:13.905328 kernel: pnp: PnP ACPI: found 6 devices Aug 19 08:03:13.905340 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 19 08:03:13.905349 kernel: NET: Registered PF_INET protocol family Aug 19 08:03:13.905357 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 19 08:03:13.905365 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 19 08:03:13.905373 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 19 08:03:13.905382 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 19 08:03:13.905390 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 19 08:03:13.905398 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 19 08:03:13.905406 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 19 08:03:13.905416 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 19 08:03:13.905424 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 19 08:03:13.905433 kernel: NET: Registered PF_XDP protocol family Aug 19 08:03:13.906280 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Aug 19 08:03:13.906439 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Aug 19 08:03:13.906567 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 19 08:03:13.906681 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 19 08:03:13.906811 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 19 08:03:13.906945 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Aug 19 08:03:13.907092 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Aug 19 08:03:13.907208 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Aug 19 08:03:13.907218 kernel: PCI: CLS 0 bytes, default 64 Aug 19 08:03:13.907227 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Aug 19 08:03:13.907235 kernel: Initialise system trusted keyrings Aug 19 08:03:13.907244 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 19 08:03:13.907252 kernel: Key type asymmetric registered Aug 19 08:03:13.907263 kernel: Asymmetric key parser 'x509' registered Aug 19 08:03:13.907271 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 19 
08:03:13.907280 kernel: io scheduler mq-deadline registered Aug 19 08:03:13.907290 kernel: io scheduler kyber registered Aug 19 08:03:13.907298 kernel: io scheduler bfq registered Aug 19 08:03:13.907307 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 19 08:03:13.907317 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 19 08:03:13.907325 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 19 08:03:13.907334 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 19 08:03:13.907342 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 19 08:03:13.907350 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 19 08:03:13.907358 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 19 08:03:13.907367 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 19 08:03:13.907375 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 19 08:03:13.907605 kernel: rtc_cmos 00:04: RTC can wake from S4 Aug 19 08:03:13.907623 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 19 08:03:13.907742 kernel: rtc_cmos 00:04: registered as rtc0 Aug 19 08:03:13.907884 kernel: rtc_cmos 00:04: setting system clock to 2025-08-19T08:03:13 UTC (1755590593) Aug 19 08:03:13.908010 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Aug 19 08:03:13.908021 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 19 08:03:13.908030 kernel: efifb: probing for efifb Aug 19 08:03:13.908038 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Aug 19 08:03:13.908046 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Aug 19 08:03:13.908057 kernel: efifb: scrolling: redraw Aug 19 08:03:13.908065 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 19 08:03:13.908074 kernel: Console: switching to colour frame buffer device 160x50 Aug 19 08:03:13.908082 kernel: fb0: EFI VGA frame buffer device Aug 19 08:03:13.908090 kernel: pstore: Using crash dump compression: deflate Aug 19 08:03:13.908098 kernel: pstore: Registered efi_pstore as persistent store backend Aug 19 08:03:13.908106 kernel: NET: Registered PF_INET6 protocol family Aug 19 08:03:13.908114 kernel: Segment Routing with IPv6 Aug 19 08:03:13.908122 kernel: In-situ OAM (IOAM) with IPv6 Aug 19 08:03:13.908132 kernel: NET: Registered PF_PACKET protocol family Aug 19 08:03:13.908140 kernel: Key type dns_resolver registered Aug 19 08:03:13.908148 kernel: IPI shorthand broadcast: enabled Aug 19 08:03:13.908156 kernel: sched_clock: Marking stable (4186001765, 184096913)->(4509305999, -139207321) Aug 19 08:03:13.908164 kernel: registered taskstats version 1 Aug 19 08:03:13.908172 kernel: Loading compiled-in X.509 certificates Aug 19 08:03:13.908180 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: 93a065b103c00d4b81cc5822e4e7f9674e63afaf' Aug 19 08:03:13.908188 kernel: Demotion targets for Node 0: null Aug 19 08:03:13.908196 kernel: Key type .fscrypt registered Aug 19 08:03:13.908206 kernel: Key type fscrypt-provisioning registered Aug 19 08:03:13.908214 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 19 08:03:13.908222 kernel: ima: Allocated hash algorithm: sha1 Aug 19 08:03:13.908231 kernel: ima: No architecture policies found Aug 19 08:03:13.908238 kernel: clk: Disabling unused clocks Aug 19 08:03:13.908246 kernel: Warning: unable to open an initial console. 
Aug 19 08:03:13.908255 kernel: Freeing unused kernel image (initmem) memory: 54040K Aug 19 08:03:13.908263 kernel: Write protecting the kernel read-only data: 24576k Aug 19 08:03:13.908271 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 19 08:03:13.908281 kernel: Run /init as init process Aug 19 08:03:13.908290 kernel: with arguments: Aug 19 08:03:13.908297 kernel: /init Aug 19 08:03:13.908305 kernel: with environment: Aug 19 08:03:13.908313 kernel: HOME=/ Aug 19 08:03:13.908321 kernel: TERM=linux Aug 19 08:03:13.908329 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 19 08:03:13.908342 systemd[1]: Successfully made /usr/ read-only. Aug 19 08:03:13.908355 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 08:03:13.908364 systemd[1]: Detected virtualization kvm. Aug 19 08:03:13.908373 systemd[1]: Detected architecture x86-64. Aug 19 08:03:13.908381 systemd[1]: Running in initrd. Aug 19 08:03:13.908389 systemd[1]: No hostname configured, using default hostname. Aug 19 08:03:13.908398 systemd[1]: Hostname set to <localhost>. Aug 19 08:03:13.908407 systemd[1]: Initializing machine ID from VM UUID. Aug 19 08:03:13.908415 systemd[1]: Queued start job for default target initrd.target. Aug 19 08:03:13.908427 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:03:13.908436 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:03:13.908445 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 19 08:03:13.908454 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 08:03:13.908462 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 19 08:03:13.908472 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 19 08:03:13.908481 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 19 08:03:13.908492 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 19 08:03:13.908501 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:03:13.908509 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:03:13.908518 systemd[1]: Reached target paths.target - Path Units. Aug 19 08:03:13.908526 systemd[1]: Reached target slices.target - Slice Units. Aug 19 08:03:13.908535 systemd[1]: Reached target swap.target - Swaps. Aug 19 08:03:13.908543 systemd[1]: Reached target timers.target - Timer Units. Aug 19 08:03:13.908552 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 08:03:13.908563 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 08:03:13.908571 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 19 08:03:13.908580 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 19 08:03:13.908588 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 19 08:03:13.908597 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 08:03:13.908605 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:03:13.908614 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 08:03:13.908622 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 19 08:03:13.908631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 08:03:13.908641 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 19 08:03:13.908650 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 19 08:03:13.908659 systemd[1]: Starting systemd-fsck-usr.service... Aug 19 08:03:13.908667 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 08:03:13.908676 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 08:03:13.908684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:03:13.908693 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 19 08:03:13.908704 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:03:13.908712 systemd[1]: Finished systemd-fsck-usr.service. Aug 19 08:03:13.908721 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 19 08:03:13.908755 systemd-journald[219]: Collecting audit messages is disabled. Aug 19 08:03:13.908782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:03:13.908814 systemd-journald[219]: Journal started Aug 19 08:03:13.908836 systemd-journald[219]: Runtime Journal (/run/log/journal/5bdb79862ee14071b3b6b14492e6bf3a) is 6M, max 48.4M, 42.4M free. Aug 19 08:03:13.911818 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 19 08:03:13.912334 systemd-modules-load[221]: Inserted module 'overlay' Aug 19 08:03:13.916359 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 08:03:13.914718 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 08:03:13.925883 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:03:13.927899 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 08:03:13.947601 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 08:03:13.950112 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 19 08:03:13.950806 kernel: Bridge firewalling registered Aug 19 08:03:13.950777 systemd-modules-load[221]: Inserted module 'br_netfilter' Aug 19 08:03:13.951937 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 19 08:03:13.953143 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 08:03:13.955306 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:03:13.956907 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 19 08:03:13.958357 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Aug 19 08:03:13.968052 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:03:13.977679 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:03:13.979936 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 08:03:13.988151 dracut-cmdline[254]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:03:14.033302 systemd-resolved[269]: Positive Trust Anchors: Aug 19 08:03:14.033327 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 08:03:14.033357 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:03:14.037152 systemd-resolved[269]: Defaulting to hostname 'linux'. Aug 19 08:03:14.038663 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:03:14.045801 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:03:14.110840 kernel: SCSI subsystem initialized Aug 19 08:03:14.120821 kernel: Loading iSCSI transport class v2.0-870. Aug 19 08:03:14.130823 kernel: iscsi: registered transport (tcp) Aug 19 08:03:14.156844 kernel: iscsi: registered transport (qla4xxx) Aug 19 08:03:14.156940 kernel: QLogic iSCSI HBA Driver Aug 19 08:03:14.182260 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:03:14.206729 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:03:14.209667 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:03:14.279062 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 19 08:03:14.281935 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 19 08:03:14.346827 kernel: raid6: avx2x4 gen() 29531 MB/s Aug 19 08:03:14.363824 kernel: raid6: avx2x2 gen() 27522 MB/s Aug 19 08:03:14.380880 kernel: raid6: avx2x1 gen() 23438 MB/s Aug 19 08:03:14.380940 kernel: raid6: using algorithm avx2x4 gen() 29531 MB/s Aug 19 08:03:14.398944 kernel: raid6: .... xor() 7691 MB/s, rmw enabled Aug 19 08:03:14.399046 kernel: raid6: using avx2x2 recovery algorithm Aug 19 08:03:14.422840 kernel: xor: automatically using best checksumming function avx Aug 19 08:03:14.600851 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 19 08:03:14.609712 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:03:14.615016 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:03:14.642696 systemd-udevd[472]: Using default interface naming scheme 'v255'. 
Aug 19 08:03:14.649370 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:03:14.653242 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 19 08:03:14.687110 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Aug 19 08:03:14.721328 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:03:14.723899 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:03:14.814376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:03:14.819465 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 19 08:03:14.871824 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Aug 19 08:03:14.875825 kernel: cryptd: max_cpu_qlen set to 1000 Aug 19 08:03:14.878845 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 19 08:03:14.886174 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 19 08:03:14.896367 kernel: AES CTR mode by8 optimization enabled Aug 19 08:03:14.908957 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 19 08:03:14.909030 kernel: GPT:9289727 != 19775487 Aug 19 08:03:14.909046 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 19 08:03:14.909060 kernel: libata version 3.00 loaded. Aug 19 08:03:14.909074 kernel: GPT:9289727 != 19775487 Aug 19 08:03:14.909087 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 19 08:03:14.909109 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:03:14.927821 kernel: ahci 0000:00:1f.2: version 3.0 Aug 19 08:03:14.928774 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 19 08:03:14.945465 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 19 08:03:14.945737 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 19 08:03:14.945991 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 19 08:03:14.962063 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 19 08:03:14.964557 kernel: scsi host0: ahci Aug 19 08:03:14.964858 kernel: scsi host1: ahci Aug 19 08:03:14.966677 kernel: scsi host2: ahci Aug 19 08:03:14.967814 kernel: scsi host3: ahci Aug 19 08:03:14.968819 kernel: scsi host4: ahci Aug 19 08:03:14.970715 kernel: scsi host5: ahci Aug 19 08:03:14.971059 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Aug 19 08:03:14.971076 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Aug 19 08:03:14.972733 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Aug 19 08:03:14.972763 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Aug 19 08:03:14.973811 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Aug 19 08:03:14.975724 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Aug 19 08:03:14.988808 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 19 08:03:15.006843 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 19 08:03:15.014989 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Aug 19 08:03:15.016580 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 19 08:03:15.018002 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 19 08:03:15.021103 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:03:15.021174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:03:15.026094 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:03:15.040720 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:03:15.042209 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:03:15.050129 disk-uuid[629]: Primary Header is updated. Aug 19 08:03:15.050129 disk-uuid[629]: Secondary Entries is updated. Aug 19 08:03:15.050129 disk-uuid[629]: Secondary Header is updated. Aug 19 08:03:15.054840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:03:15.062813 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:03:15.063779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:03:15.284843 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 19 08:03:15.284939 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 19 08:03:15.285844 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 19 08:03:15.286823 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Aug 19 08:03:15.287828 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 19 08:03:15.288817 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 19 08:03:15.288835 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Aug 19 08:03:15.290102 kernel: ata3.00: applying bridge limits Aug 19 08:03:15.290820 kernel: ata3.00: configured for UDMA/100 Aug 19 08:03:15.292828 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 19 08:03:15.344879 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Aug 19 08:03:15.345241 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 19 08:03:15.357844 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 19 08:03:15.782545 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 19 08:03:15.785288 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:03:15.787664 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:03:15.789853 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:03:15.792821 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 19 08:03:15.827521 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 19 08:03:16.060822 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:03:16.061620 disk-uuid[632]: The operation has completed successfully. Aug 19 08:03:16.098707 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 19 08:03:16.098897 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 19 08:03:16.137439 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 19 08:03:16.164824 sh[665]: Success Aug 19 08:03:16.187243 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Aug 19 08:03:16.187329 kernel: device-mapper: uevent: version 1.0.3 Aug 19 08:03:16.188518 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 19 08:03:16.198821 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 19 08:03:16.232169 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 19 08:03:16.236432 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 19 08:03:16.257493 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 19 08:03:16.262860 kernel: BTRFS: device fsid 99050df3-5e04-4f37-acde-dec46aab7896 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (677) Aug 19 08:03:16.262895 kernel: BTRFS info (device dm-0): first mount of filesystem 99050df3-5e04-4f37-acde-dec46aab7896 Aug 19 08:03:16.264602 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:03:16.264628 kernel: BTRFS info (device dm-0): using free-space-tree Aug 19 08:03:16.271046 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 19 08:03:16.273956 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 19 08:03:16.276631 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 19 08:03:16.280102 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 19 08:03:16.284421 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 19 08:03:16.325832 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709) Aug 19 08:03:16.328762 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:03:16.328826 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:03:16.328839 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:03:16.338840 kernel: BTRFS info (device vda6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:03:16.340230 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 19 08:03:16.343349 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 19 08:03:16.670735 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:03:16.675449 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 08:03:16.742543 systemd-networkd[854]: lo: Link UP Aug 19 08:03:16.742555 systemd-networkd[854]: lo: Gained carrier Aug 19 08:03:16.744419 systemd-networkd[854]: Enumeration completed Aug 19 08:03:16.744531 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 08:03:16.745044 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:03:16.745049 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 08:03:16.746593 systemd-networkd[854]: eth0: Link UP Aug 19 08:03:16.746894 systemd-networkd[854]: eth0: Gained carrier Aug 19 08:03:16.746903 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:03:16.747186 systemd[1]: Reached target network.target - Network. 
Aug 19 08:03:16.815993 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 19 08:03:16.832229 ignition[750]: Ignition 2.21.0 Aug 19 08:03:16.832264 ignition[750]: Stage: fetch-offline Aug 19 08:03:16.832476 ignition[750]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:03:16.832493 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:03:16.833270 ignition[750]: parsed url from cmdline: "" Aug 19 08:03:16.833277 ignition[750]: no config URL provided Aug 19 08:03:16.833285 ignition[750]: reading system config file "/usr/lib/ignition/user.ign" Aug 19 08:03:16.834670 ignition[750]: no config at "/usr/lib/ignition/user.ign" Aug 19 08:03:16.835513 ignition[750]: op(1): [started] loading QEMU firmware config module Aug 19 08:03:16.835526 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 19 08:03:16.851123 ignition[750]: op(1): [finished] loading QEMU firmware config module Aug 19 08:03:16.887900 ignition[750]: parsing config with SHA512: dcb5ac3785f7fd18eb709b6ff3a0de97246825996bf8cad590fadd2cd602125b0def3ddbe31c492c570ca7ae15b758764556ca6182f4f191fa9c79619b5eb692 Aug 19 08:03:16.892190 unknown[750]: fetched base config from "system" Aug 19 08:03:16.892202 unknown[750]: fetched user config from "qemu" Aug 19 08:03:16.892603 ignition[750]: fetch-offline: fetch-offline passed Aug 19 08:03:16.893215 systemd-resolved[269]: Detected conflict on linux IN A 10.0.0.16 Aug 19 08:03:16.892683 ignition[750]: Ignition finished successfully Aug 19 08:03:16.893226 systemd-resolved[269]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Aug 19 08:03:16.900671 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:03:16.902035 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 19 08:03:16.902972 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 19 08:03:16.972656 ignition[861]: Ignition 2.21.0 Aug 19 08:03:16.972676 ignition[861]: Stage: kargs Aug 19 08:03:16.972880 ignition[861]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:03:16.972893 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:03:16.976462 ignition[861]: kargs: kargs passed Aug 19 08:03:16.976557 ignition[861]: Ignition finished successfully Aug 19 08:03:16.981015 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 19 08:03:16.983184 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 19 08:03:17.022068 ignition[870]: Ignition 2.21.0 Aug 19 08:03:17.022084 ignition[870]: Stage: disks Aug 19 08:03:17.022388 ignition[870]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:03:17.022402 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:03:17.026655 ignition[870]: disks: disks passed Aug 19 08:03:17.026764 ignition[870]: Ignition finished successfully Aug 19 08:03:17.031172 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 19 08:03:17.032397 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 19 08:03:17.034334 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 19 08:03:17.034392 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 08:03:17.037607 systemd[1]: Reached target sysinit.target - System Initialization. 
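Note on the Ignition entries above: the config is fetched from QEMU's fw_cfg device and identified in the log only by its SHA-512 digest. A minimal stdlib-only sketch of producing that kind of digest for a local config file (the default path below is just the user-config location Ignition probes; the config actually used on this boot was delivered over qemu_fw_cfg and is not reproduced here):

import hashlib
import sys

def config_sha512(path: str) -> str:
    # Hash the raw config bytes, mirroring the "parsing config with SHA512" entry.
    with open(path, "rb") as f:
        return hashlib.sha512(f.read()).hexdigest()

if __name__ == "__main__":
    print(config_sha512(sys.argv[1] if len(sys.argv) > 1 else "/usr/lib/ignition/user.ign"))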
Aug 19 08:03:17.039870 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:03:17.042646 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 19 08:03:17.097763 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 19 08:03:17.105619 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 19 08:03:17.108399 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 19 08:03:17.387840 kernel: EXT4-fs (vda9): mounted filesystem 41966107-04fa-426e-9830-6b4efa50e27b r/w with ordered data mode. Quota mode: none. Aug 19 08:03:17.388897 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 19 08:03:17.391021 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 19 08:03:17.394432 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:03:17.396900 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 19 08:03:17.398884 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 19 08:03:17.398944 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 19 08:03:17.400638 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:03:17.413609 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 19 08:03:17.416890 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 19 08:03:17.419807 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888) Aug 19 08:03:17.421803 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:03:17.421840 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:03:17.423198 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:03:17.427058 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 19 08:03:17.469764 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory Aug 19 08:03:17.474621 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory Aug 19 08:03:17.480056 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory Aug 19 08:03:17.485699 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory Aug 19 08:03:17.713078 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 19 08:03:17.715961 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 19 08:03:17.717943 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 19 08:03:17.742510 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 19 08:03:17.743934 kernel: BTRFS info (device vda6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:03:17.756193 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 19 08:03:17.780215 ignition[1002]: INFO : Ignition 2.21.0 Aug 19 08:03:17.780215 ignition[1002]: INFO : Stage: mount Aug 19 08:03:17.782048 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:03:17.782048 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:03:17.784174 ignition[1002]: INFO : mount: mount passed Aug 19 08:03:17.784174 ignition[1002]: INFO : Ignition finished successfully Aug 19 08:03:17.786202 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 19 08:03:17.787460 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 19 08:03:18.126054 systemd-networkd[854]: eth0: Gained IPv6LL Aug 19 08:03:18.391329 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:03:18.427632 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Aug 19 08:03:18.427692 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:03:18.427704 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:03:18.428700 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:03:18.435059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 19 08:03:18.477729 ignition[1032]: INFO : Ignition 2.21.0 Aug 19 08:03:18.477729 ignition[1032]: INFO : Stage: files Aug 19 08:03:18.480036 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:03:18.480036 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:03:18.482942 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping Aug 19 08:03:18.484369 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 19 08:03:18.484369 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 19 08:03:18.488011 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 19 08:03:18.488011 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 19 08:03:18.488011 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 19 08:03:18.486584 unknown[1032]: wrote ssh authorized keys file for user: core Aug 19 08:03:18.494598 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 19 08:03:18.494598 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 19 08:03:18.646948 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 19 08:03:18.983815 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 19 08:03:18.983815 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 19 08:03:18.988976 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 19 08:03:19.299176 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 19 08:03:19.521740 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing 
file "/sysroot/opt/bin/cilium.tar.gz" Aug 19 08:03:19.521740 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 19 08:03:19.525676 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 19 08:03:19.525676 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:03:19.525676 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:03:19.525676 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:03:19.532248 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:03:19.532248 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:03:19.535491 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:03:19.637564 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:03:19.639937 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:03:19.642199 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 19 08:03:19.645356 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 19 08:03:19.645356 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 19 08:03:19.645356 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 19 08:03:20.144143 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 19 08:03:21.061020 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 19 08:03:21.061020 ignition[1032]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 19 08:03:21.064862 ignition[1032]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:03:21.111815 ignition[1032]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:03:21.111815 ignition[1032]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 19 08:03:21.111815 ignition[1032]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 19 08:03:21.111815 ignition[1032]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 19 
08:03:21.118313 ignition[1032]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 19 08:03:21.118313 ignition[1032]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 19 08:03:21.118313 ignition[1032]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Aug 19 08:03:21.141418 ignition[1032]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 19 08:03:21.146339 ignition[1032]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 19 08:03:21.148064 ignition[1032]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Aug 19 08:03:21.148064 ignition[1032]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 19 08:03:21.148064 ignition[1032]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 19 08:03:21.148064 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 19 08:03:21.148064 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 19 08:03:21.148064 ignition[1032]: INFO : files: files passed Aug 19 08:03:21.148064 ignition[1032]: INFO : Ignition finished successfully Aug 19 08:03:21.159962 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 19 08:03:21.163111 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 19 08:03:21.165597 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 19 08:03:21.291006 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 19 08:03:21.291139 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 19 08:03:21.294738 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory Aug 19 08:03:21.298761 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:03:21.298761 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:03:21.304077 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:03:21.302170 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:03:21.304271 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 19 08:03:21.307976 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 19 08:03:21.403757 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 19 08:03:21.403927 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 19 08:03:21.404591 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 19 08:03:21.407223 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 19 08:03:21.410172 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 19 08:03:21.411323 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
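Note on the files stage above: among other assets it writes the symlink /sysroot/etc/extensions/kubernetes.raw pointing at the downloaded Kubernetes sysext image. A small illustrative check, assuming it runs on the provisioned host after switch-root (where the /sysroot prefix is gone):

import os

# Paths taken from the Ignition "files" entries recorded above.
link = "/etc/extensions/kubernetes.raw"
expected = "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"

# Compare the link target against the path recorded in the log.
if os.path.islink(link) and os.readlink(link) == expected:
    print("kubernetes sysext link matches the logged target")
else:
    print("kubernetes sysext link missing or pointing elsewhere")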
Aug 19 08:03:21.450878 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:03:21.452650 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 19 08:03:21.481613 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:03:21.482943 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:03:21.485191 systemd[1]: Stopped target timers.target - Timer Units. Aug 19 08:03:21.487486 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 19 08:03:21.487681 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:03:21.489936 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 19 08:03:21.492977 systemd[1]: Stopped target basic.target - Basic System. Aug 19 08:03:21.495267 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 19 08:03:21.497322 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:03:21.499478 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 19 08:03:21.501861 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 19 08:03:21.504059 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 19 08:03:21.506189 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:03:21.508396 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 19 08:03:21.510602 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 19 08:03:21.512564 systemd[1]: Stopped target swap.target - Swaps. Aug 19 08:03:21.514315 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 19 08:03:21.514512 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 19 08:03:21.516635 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:03:21.518265 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:03:21.520474 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 19 08:03:21.520662 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:03:21.522777 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 19 08:03:21.522919 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 19 08:03:21.525241 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 19 08:03:21.525353 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:03:21.527186 systemd[1]: Stopped target paths.target - Path Units. Aug 19 08:03:21.529342 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 19 08:03:21.533668 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:03:21.533895 systemd[1]: Stopped target slices.target - Slice Units. Aug 19 08:03:21.534188 systemd[1]: Stopped target sockets.target - Socket Units. Aug 19 08:03:21.534499 systemd[1]: iscsid.socket: Deactivated successfully. Aug 19 08:03:21.534596 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 08:03:21.535296 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 19 08:03:21.535380 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Aug 19 08:03:21.535848 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 19 08:03:21.535966 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:03:21.536278 systemd[1]: ignition-files.service: Deactivated successfully. Aug 19 08:03:21.536380 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 19 08:03:21.537680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 19 08:03:21.538107 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 19 08:03:21.538219 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:03:21.539302 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 19 08:03:21.539583 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 19 08:03:21.539718 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:03:21.540146 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 19 08:03:21.540243 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:03:21.545923 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 19 08:03:21.553129 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 19 08:03:21.577079 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 19 08:03:21.643460 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 19 08:03:21.643646 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 19 08:03:21.653915 ignition[1088]: INFO : Ignition 2.21.0 Aug 19 08:03:21.653915 ignition[1088]: INFO : Stage: umount Aug 19 08:03:21.655909 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:03:21.655909 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:03:21.659693 ignition[1088]: INFO : umount: umount passed Aug 19 08:03:21.659693 ignition[1088]: INFO : Ignition finished successfully Aug 19 08:03:21.665060 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 19 08:03:21.665340 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 19 08:03:21.666903 systemd[1]: Stopped target network.target - Network. Aug 19 08:03:21.671855 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 19 08:03:21.672010 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 19 08:03:21.676344 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 19 08:03:21.676889 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 19 08:03:21.680376 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 19 08:03:21.680498 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 19 08:03:21.682908 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 19 08:03:21.682987 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 19 08:03:21.684194 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 19 08:03:21.684251 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 19 08:03:21.685567 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 19 08:03:21.688027 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 19 08:03:21.700511 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 19 08:03:21.700719 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Aug 19 08:03:21.706132 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 19 08:03:21.706431 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 19 08:03:21.706575 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 19 08:03:21.715061 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 19 08:03:21.720976 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 19 08:03:21.723186 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 19 08:03:21.723248 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:03:21.730455 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 19 08:03:21.731889 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 19 08:03:21.731984 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:03:21.739481 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 08:03:21.739559 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:03:21.744468 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 19 08:03:21.746224 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 19 08:03:21.750443 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 19 08:03:21.750508 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:03:21.754274 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:03:21.757616 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 19 08:03:21.757699 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:03:21.776858 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 19 08:03:21.786143 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:03:21.789424 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 19 08:03:21.789550 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 19 08:03:21.791179 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 19 08:03:21.791284 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 19 08:03:21.792361 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 19 08:03:21.792400 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:03:21.794288 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 19 08:03:21.794357 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:03:21.802069 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 19 08:03:21.802139 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 19 08:03:21.803442 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 19 08:03:21.803520 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 08:03:21.808967 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 19 08:03:21.810289 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 19 08:03:21.810351 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Aug 19 08:03:21.814604 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 19 08:03:21.814653 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:03:21.818217 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:03:21.818269 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:03:21.826116 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 19 08:03:21.826202 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 19 08:03:21.828565 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:03:21.856153 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 19 08:03:21.856307 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 19 08:03:21.859442 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 19 08:03:21.861696 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 19 08:03:21.887186 systemd[1]: Switching root. Aug 19 08:03:21.935106 systemd-journald[219]: Journal stopped Aug 19 08:03:23.478316 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). Aug 19 08:03:23.478421 kernel: SELinux: policy capability network_peer_controls=1 Aug 19 08:03:23.478442 kernel: SELinux: policy capability open_perms=1 Aug 19 08:03:23.478462 kernel: SELinux: policy capability extended_socket_class=1 Aug 19 08:03:23.478478 kernel: SELinux: policy capability always_check_network=0 Aug 19 08:03:23.478494 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 19 08:03:23.478510 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 19 08:03:23.478526 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 19 08:03:23.478552 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 19 08:03:23.478663 kernel: SELinux: policy capability userspace_initial_context=0 Aug 19 08:03:23.478686 kernel: audit: type=1403 audit(1755590602.384:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 19 08:03:23.478705 systemd[1]: Successfully loaded SELinux policy in 72.242ms. Aug 19 08:03:23.478741 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.881ms. Aug 19 08:03:23.478760 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 08:03:23.478812 systemd[1]: Detected virtualization kvm. Aug 19 08:03:23.478830 systemd[1]: Detected architecture x86-64. Aug 19 08:03:23.478851 systemd[1]: Detected first boot. Aug 19 08:03:23.478869 systemd[1]: Initializing machine ID from VM UUID. Aug 19 08:03:23.478886 zram_generator::config[1133]: No configuration found. Aug 19 08:03:23.478917 kernel: Guest personality initialized and is inactive Aug 19 08:03:23.478932 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 19 08:03:23.478955 kernel: Initialized host personality Aug 19 08:03:23.478971 kernel: NET: Registered PF_VSOCK protocol family Aug 19 08:03:23.478987 systemd[1]: Populated /etc with preset unit settings. 
Aug 19 08:03:23.479005 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 19 08:03:23.479025 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 19 08:03:23.479040 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 19 08:03:23.479053 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 19 08:03:23.479067 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 19 08:03:23.479081 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 19 08:03:23.479106 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 19 08:03:23.479120 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 19 08:03:23.479135 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 19 08:03:23.479154 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 19 08:03:23.479171 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 19 08:03:23.479186 systemd[1]: Created slice user.slice - User and Session Slice. Aug 19 08:03:23.479203 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:03:23.479220 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:03:23.479237 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 19 08:03:23.479254 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 19 08:03:23.479271 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 19 08:03:23.479292 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 08:03:23.479308 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 19 08:03:23.479325 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:03:23.479342 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:03:23.479365 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 19 08:03:23.479383 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 19 08:03:23.479399 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 19 08:03:23.479415 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 19 08:03:23.479432 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:03:23.479454 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:03:23.479653 systemd[1]: Reached target slices.target - Slice Units. Aug 19 08:03:23.479671 systemd[1]: Reached target swap.target - Swaps. Aug 19 08:03:23.479688 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 19 08:03:23.479706 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 19 08:03:23.479722 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 19 08:03:23.479739 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:03:23.479755 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Aug 19 08:03:23.479804 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:03:23.479825 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 19 08:03:23.479848 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 19 08:03:23.479865 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 19 08:03:23.479882 systemd[1]: Mounting media.mount - External Media Directory... Aug 19 08:03:23.479898 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:03:23.479915 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 19 08:03:23.479932 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 19 08:03:23.479948 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 19 08:03:23.479973 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 19 08:03:23.479994 systemd[1]: Reached target machines.target - Containers. Aug 19 08:03:23.480011 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 19 08:03:23.480027 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:03:23.480044 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 08:03:23.480060 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 19 08:03:23.480090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:03:23.480107 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:03:23.480124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:03:23.480145 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 19 08:03:23.480162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:03:23.480179 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 19 08:03:23.480195 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 19 08:03:23.480212 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 19 08:03:23.480228 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 19 08:03:23.480245 systemd[1]: Stopped systemd-fsck-usr.service. Aug 19 08:03:23.480263 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:03:23.480284 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 08:03:23.480301 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 08:03:23.480318 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:03:23.480335 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 19 08:03:23.480352 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Aug 19 08:03:23.480373 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:03:23.480390 systemd[1]: verity-setup.service: Deactivated successfully. Aug 19 08:03:23.480407 systemd[1]: Stopped verity-setup.service. Aug 19 08:03:23.480423 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:03:23.480440 kernel: loop: module loaded Aug 19 08:03:23.480467 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 19 08:03:23.480483 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 19 08:03:23.480499 systemd[1]: Mounted media.mount - External Media Directory. Aug 19 08:03:23.480520 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 19 08:03:23.480568 systemd-journald[1211]: Collecting audit messages is disabled. Aug 19 08:03:23.480604 kernel: fuse: init (API version 7.41) Aug 19 08:03:23.480622 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 19 08:03:23.480638 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 19 08:03:23.480659 systemd-journald[1211]: Journal started Aug 19 08:03:23.480687 systemd-journald[1211]: Runtime Journal (/run/log/journal/5bdb79862ee14071b3b6b14492e6bf3a) is 6M, max 48.4M, 42.4M free. Aug 19 08:03:23.094551 systemd[1]: Queued start job for default target multi-user.target. Aug 19 08:03:23.119746 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 19 08:03:23.120469 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 19 08:03:23.484066 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 08:03:23.485440 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 19 08:03:23.487294 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:03:23.489380 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 19 08:03:23.490095 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 19 08:03:23.492223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:03:23.492464 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:03:23.494093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:03:23.494310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:03:23.496545 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:03:23.496862 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:03:23.498615 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 08:03:23.500595 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:03:23.502483 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 19 08:03:23.541237 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:03:23.572581 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 19 08:03:23.574233 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 19 08:03:23.574382 systemd[1]: Reached target local-fs.target - Local File Systems. 
Aug 19 08:03:23.577621 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 19 08:03:23.585904 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 19 08:03:23.587469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:03:23.600855 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 19 08:03:23.839865 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 19 08:03:23.842068 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:03:23.850674 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 19 08:03:23.852533 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:03:23.854808 kernel: ACPI: bus type drm_connector registered Aug 19 08:03:23.856998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:03:23.863023 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 19 08:03:23.865892 systemd-journald[1211]: Time spent on flushing to /var/log/journal/5bdb79862ee14071b3b6b14492e6bf3a is 31.588ms for 1060 entries. Aug 19 08:03:23.865892 systemd-journald[1211]: System Journal (/var/log/journal/5bdb79862ee14071b3b6b14492e6bf3a) is 8M, max 195.6M, 187.6M free. Aug 19 08:03:23.928174 systemd-journald[1211]: Received client request to flush runtime journal. Aug 19 08:03:23.928266 kernel: loop0: detected capacity change from 0 to 111000 Aug 19 08:03:23.866981 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 19 08:03:23.870764 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:03:23.871056 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:03:23.874204 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 19 08:03:23.874466 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 19 08:03:23.882798 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 19 08:03:23.971488 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 19 08:03:23.975723 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 19 08:03:23.973635 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 19 08:03:23.982094 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 19 08:03:23.993879 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 19 08:03:24.000165 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 19 08:03:24.011046 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 19 08:03:24.025843 kernel: loop1: detected capacity change from 0 to 128016 Aug 19 08:03:24.025964 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:03:24.043353 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 19 08:03:24.046695 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Aug 19 08:03:24.072847 kernel: loop2: detected capacity change from 0 to 221472 Aug 19 08:03:24.086052 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 19 08:03:24.091011 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:03:24.099283 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 19 08:03:24.120832 kernel: loop3: detected capacity change from 0 to 111000 Aug 19 08:03:24.126150 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 19 08:03:24.133670 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Aug 19 08:03:24.134199 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Aug 19 08:03:24.147068 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:03:24.151008 kernel: loop4: detected capacity change from 0 to 128016 Aug 19 08:03:24.168822 kernel: loop5: detected capacity change from 0 to 221472 Aug 19 08:03:24.180660 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 19 08:03:24.181507 (sd-merge)[1274]: Merged extensions into '/usr'. Aug 19 08:03:24.187530 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)... Aug 19 08:03:24.187557 systemd[1]: Reloading... Aug 19 08:03:24.285829 zram_generator::config[1301]: No configuration found. Aug 19 08:03:24.583895 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 19 08:03:24.596955 systemd[1]: Reloading finished in 408 ms. Aug 19 08:03:24.627308 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 19 08:03:24.741617 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 19 08:03:24.759545 systemd[1]: Starting ensure-sysext.service... Aug 19 08:03:24.762024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 08:03:24.785120 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)... Aug 19 08:03:24.785141 systemd[1]: Reloading... Aug 19 08:03:24.793276 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 19 08:03:24.793564 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 19 08:03:24.794027 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 19 08:03:24.794405 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 19 08:03:24.795570 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 19 08:03:24.795962 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Aug 19 08:03:24.796234 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Aug 19 08:03:24.802188 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:03:24.802430 systemd-tmpfiles[1339]: Skipping /boot Aug 19 08:03:24.816953 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:03:24.817118 systemd-tmpfiles[1339]: Skipping /boot Aug 19 08:03:24.871893 zram_generator::config[1369]: No configuration found. 
Aug 19 08:03:25.087169 systemd[1]: Reloading finished in 301 ms. Aug 19 08:03:25.110330 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 19 08:03:25.145547 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:03:25.158321 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:03:25.162037 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 19 08:03:25.165264 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 19 08:03:25.187257 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 08:03:25.191363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:03:25.195319 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 19 08:03:25.200176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:03:25.200384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:03:25.201966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:03:25.210268 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:03:25.212963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:03:25.214362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:03:25.214659 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:03:25.214768 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:03:25.220417 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 19 08:03:25.222525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:03:25.223328 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:03:25.225472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:03:25.225866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:03:25.230214 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:03:25.231305 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:03:25.240181 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 19 08:03:25.252092 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:03:25.252409 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:03:25.256125 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:03:25.257455 systemd-udevd[1410]: Using default interface naming scheme 'v255'. Aug 19 08:03:25.259187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:03:25.262671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Aug 19 08:03:25.264156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:03:25.264334 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:03:25.266517 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 19 08:03:25.267781 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:03:25.282235 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 19 08:03:25.283263 augenrules[1442]: No rules Aug 19 08:03:25.285526 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:03:25.285891 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:03:25.288154 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:03:25.288456 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:03:25.291397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:03:25.291990 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:03:25.294433 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:03:25.295605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:03:25.303053 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 19 08:03:25.310456 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 19 08:03:25.320234 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 19 08:03:25.323841 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:03:25.330039 systemd[1]: Finished ensure-sysext.service. Aug 19 08:03:25.335657 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:03:25.337959 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:03:25.339337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:03:25.343090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:03:25.347088 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:03:25.353017 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:03:25.368844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:03:25.370364 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:03:25.370425 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:03:25.374134 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 08:03:25.379228 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Aug 19 08:03:25.380642 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 19 08:03:25.380677 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:03:25.381524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:03:25.389128 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:03:25.391690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:03:25.391994 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:03:25.395074 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:03:25.395308 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:03:25.400042 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:03:25.400344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:03:25.405319 augenrules[1476]: /sbin/augenrules: No change Aug 19 08:03:25.405704 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:03:25.406393 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:03:25.429832 augenrules[1514]: No rules Aug 19 08:03:25.432875 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:03:25.433263 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:03:25.453356 systemd-resolved[1408]: Positive Trust Anchors: Aug 19 08:03:25.453841 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 08:03:25.453965 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:03:25.459629 systemd-resolved[1408]: Defaulting to hostname 'linux'. Aug 19 08:03:25.462340 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:03:25.464545 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:03:25.516590 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 19 08:03:25.555268 systemd-networkd[1496]: lo: Link UP Aug 19 08:03:25.555285 systemd-networkd[1496]: lo: Gained carrier Aug 19 08:03:25.557281 systemd-networkd[1496]: Enumeration completed Aug 19 08:03:25.557707 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:03:25.557734 systemd-networkd[1496]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 19 08:03:25.558534 systemd-networkd[1496]: eth0: Link UP Aug 19 08:03:25.558695 systemd-networkd[1496]: eth0: Gained carrier Aug 19 08:03:25.558726 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:03:25.558924 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 08:03:25.560546 systemd[1]: Reached target network.target - Network. Aug 19 08:03:25.565476 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 19 08:03:25.569139 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 19 08:03:25.572853 kernel: mousedev: PS/2 mouse device common for all mice Aug 19 08:03:25.573972 systemd-networkd[1496]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 19 08:03:25.587885 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 19 08:03:25.682634 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 19 08:03:25.691776 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 19 08:03:25.692232 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 19 08:03:26.215515 systemd-timesyncd[1499]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 19 08:03:26.215520 systemd-resolved[1408]: Clock change detected. Flushing caches. Aug 19 08:03:26.215573 systemd-timesyncd[1499]: Initial clock synchronization to Tue 2025-08-19 08:03:26.215425 UTC. Aug 19 08:03:26.220552 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 08:03:26.221993 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 19 08:03:26.223402 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 19 08:03:26.225117 kernel: ACPI: button: Power Button [PWRF] Aug 19 08:03:26.225492 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 19 08:03:26.227049 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 19 08:03:26.228614 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 19 08:03:26.228654 systemd[1]: Reached target paths.target - Path Units. Aug 19 08:03:26.229762 systemd[1]: Reached target time-set.target - System Time Set. Aug 19 08:03:26.231406 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 19 08:03:26.233124 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 19 08:03:26.234819 systemd[1]: Reached target timers.target - Timer Units. Aug 19 08:03:26.237905 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 19 08:03:26.281952 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 19 08:03:26.287099 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 19 08:03:26.289435 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 19 08:03:26.290981 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
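The systemd-networkd entry above records the DHCPv4 lease 10.0.0.16/16 with gateway 10.0.0.1. As a quick sanity check of what that prefix implies, here is a minimal Python sketch using the standard ipaddress module; only the two addresses come from the log, the derived network and broadcast values are computed.

import ipaddress

# Lease reported by systemd-networkd above: 10.0.0.16/16, gateway 10.0.0.1.
lease = ipaddress.ip_interface("10.0.0.16/16")
print(lease.network)                                      # 10.0.0.0/16
print(lease.network.broadcast_address)                    # 10.0.255.255
print(ipaddress.ip_address("10.0.0.1") in lease.network)  # True: the gateway is on-link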
Aug 19 08:03:26.294783 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Aug 19 08:03:26.295132 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 19 08:03:26.295352 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 19 08:03:26.302968 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 19 08:03:26.304627 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 19 08:03:26.307159 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 19 08:03:26.308950 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 19 08:03:26.310572 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 19 08:03:26.315466 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 08:03:26.317456 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:03:26.318618 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:03:26.318654 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:03:26.322104 systemd[1]: Starting containerd.service - containerd container runtime... Aug 19 08:03:26.327054 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 19 08:03:26.329306 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 19 08:03:26.339243 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 19 08:03:26.343254 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 19 08:03:26.344313 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 08:03:26.345440 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 19 08:03:26.355324 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 19 08:03:26.360110 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 19 08:03:26.373994 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 19 08:03:26.378072 jq[1558]: false Aug 19 08:03:26.377309 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 19 08:03:26.384097 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 19 08:03:26.386059 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 19 08:03:26.387548 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing passwd entry cache Aug 19 08:03:26.387560 oslogin_cache_refresh[1560]: Refreshing passwd entry cache Aug 19 08:03:26.391318 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 19 08:03:26.393478 systemd[1]: Starting update-engine.service - Update Engine... Aug 19 08:03:26.396922 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting users, quitting Aug 19 08:03:26.396922 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Aug 19 08:03:26.396922 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing group entry cache Aug 19 08:03:26.396550 oslogin_cache_refresh[1560]: Failure getting users, quitting Aug 19 08:03:26.396569 oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 19 08:03:26.396618 oslogin_cache_refresh[1560]: Refreshing group entry cache Aug 19 08:03:26.397378 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 19 08:03:26.408297 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting groups, quitting Aug 19 08:03:26.408297 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 19 08:03:26.408286 oslogin_cache_refresh[1560]: Failure getting groups, quitting Aug 19 08:03:26.408299 oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 19 08:03:26.409668 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 19 08:03:26.411504 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 19 08:03:26.411848 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 19 08:03:26.412291 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 19 08:03:26.412541 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 19 08:03:26.414436 extend-filesystems[1559]: Found /dev/vda6 Aug 19 08:03:26.414667 systemd[1]: motdgen.service: Deactivated successfully. Aug 19 08:03:26.415996 jq[1575]: true Aug 19 08:03:26.415605 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 19 08:03:26.419158 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 19 08:03:26.419541 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 19 08:03:26.430627 extend-filesystems[1559]: Found /dev/vda9 Aug 19 08:03:26.476849 extend-filesystems[1559]: Checking size of /dev/vda9 Aug 19 08:03:26.478523 jq[1580]: true Aug 19 08:03:26.513379 update_engine[1574]: I20250819 08:03:26.513118 1574 main.cc:92] Flatcar Update Engine starting Aug 19 08:03:26.517533 tar[1577]: linux-amd64/helm Aug 19 08:03:26.528053 kernel: kvm_amd: TSC scaling supported Aug 19 08:03:26.528104 kernel: kvm_amd: Nested Virtualization enabled Aug 19 08:03:26.528154 kernel: kvm_amd: Nested Paging enabled Aug 19 08:03:26.528175 kernel: kvm_amd: LBR virtualization supported Aug 19 08:03:26.532663 extend-filesystems[1559]: Resized partition /dev/vda9 Aug 19 08:03:26.533719 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Aug 19 08:03:26.533748 kernel: kvm_amd: Virtual GIF supported Aug 19 08:03:26.534559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:03:26.542209 extend-filesystems[1603]: resize2fs 1.47.2 (1-Jan-2025) Aug 19 08:03:26.555221 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 19 08:03:26.556541 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 19 08:03:26.581006 dbus-daemon[1555]: [system] SELinux support is enabled Aug 19 08:03:26.582392 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Aug 19 08:03:26.586584 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 19 08:03:26.586618 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 19 08:03:26.588176 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 19 08:03:26.588207 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 19 08:03:26.592933 systemd[1]: Started update-engine.service - Update Engine. Aug 19 08:03:26.596021 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 19 08:03:26.596208 update_engine[1574]: I20250819 08:03:26.596136 1574 update_check_scheduler.cc:74] Next update check in 9m32s Aug 19 08:03:26.596689 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 19 08:03:26.628535 extend-filesystems[1603]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 19 08:03:26.628535 extend-filesystems[1603]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 19 08:03:26.628535 extend-filesystems[1603]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 19 08:03:26.632587 extend-filesystems[1559]: Resized filesystem in /dev/vda9 Aug 19 08:03:26.634494 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 19 08:03:26.634821 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 19 08:03:26.720703 bash[1619]: Updated "/home/core/.ssh/authorized_keys" Aug 19 08:03:26.721844 systemd-logind[1568]: Watching system buttons on /dev/input/event2 (Power Button) Aug 19 08:03:26.721869 systemd-logind[1568]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 19 08:03:26.724491 systemd-logind[1568]: New seat seat0. Aug 19 08:03:26.818775 systemd[1]: Started systemd-logind.service - User Login Management. Aug 19 08:03:26.819311 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 19 08:03:26.830061 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:03:26.890191 locksmithd[1614]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 19 08:03:26.893914 kernel: EDAC MC: Ver: 3.0.0 Aug 19 08:03:26.918514 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 19 08:03:26.967570 sshd_keygen[1584]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 19 08:03:26.998537 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 19 08:03:27.002350 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 19 08:03:27.021427 systemd[1]: issuegen.service: Deactivated successfully. Aug 19 08:03:27.021744 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 19 08:03:27.027369 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 19 08:03:27.112948 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 19 08:03:27.117483 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 19 08:03:27.121421 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 19 08:03:27.122861 systemd[1]: Reached target getty.target - Login Prompts. 
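The extend-filesystems/resize2fs entries above grow /dev/vda9 online from 553472 to 1864699 blocks of 4 KiB. A small Python sketch of what those block counts mean in bytes; only the numbers from the log are used, the GiB figures are derived.

# Block counts reported for /dev/vda9 in the resize2fs output above (4 KiB blocks).
OLD_BLOCKS = 553_472
NEW_BLOCKS = 1_864_699
BLOCK_SIZE = 4096  # bytes per block

print(f"before: {OLD_BLOCKS * BLOCK_SIZE / 2**30:.2f} GiB")  # before: 2.11 GiB
print(f"after:  {NEW_BLOCKS * BLOCK_SIZE / 2**30:.2f} GiB")  # after:  7.11 GiB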
Aug 19 08:03:27.141185 containerd[1591]: time="2025-08-19T08:03:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 19 08:03:27.143037 containerd[1591]: time="2025-08-19T08:03:27.142978686Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Aug 19 08:03:27.191822 containerd[1591]: time="2025-08-19T08:03:27.191721423Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="23.895µs" Aug 19 08:03:27.193028 containerd[1591]: time="2025-08-19T08:03:27.191992832Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 19 08:03:27.193028 containerd[1591]: time="2025-08-19T08:03:27.192025614Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 19 08:03:27.193028 containerd[1591]: time="2025-08-19T08:03:27.192308154Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 19 08:03:27.193028 containerd[1591]: time="2025-08-19T08:03:27.192323513Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 19 08:03:27.193028 containerd[1591]: time="2025-08-19T08:03:27.192356034Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:03:27.193028 containerd[1591]: time="2025-08-19T08:03:27.192440963Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:03:27.193028 containerd[1591]: time="2025-08-19T08:03:27.192456272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:03:27.193028 containerd[1591]: time="2025-08-19T08:03:27.192803763Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:03:27.193028 containerd[1591]: time="2025-08-19T08:03:27.192815986Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:03:27.193028 containerd[1591]: time="2025-08-19T08:03:27.192833209Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:03:27.193028 containerd[1591]: time="2025-08-19T08:03:27.192840502Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 19 08:03:27.193390 containerd[1591]: time="2025-08-19T08:03:27.193068710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 19 08:03:27.193390 containerd[1591]: time="2025-08-19T08:03:27.193357432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:03:27.193433 containerd[1591]: time="2025-08-19T08:03:27.193393960Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:03:27.193433 containerd[1591]: time="2025-08-19T08:03:27.193403899Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 19 08:03:27.193472 containerd[1591]: time="2025-08-19T08:03:27.193437001Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 19 08:03:27.193938 containerd[1591]: time="2025-08-19T08:03:27.193843103Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 19 08:03:27.194081 containerd[1591]: time="2025-08-19T08:03:27.194050692Z" level=info msg="metadata content store policy set" policy=shared Aug 19 08:03:27.203456 containerd[1591]: time="2025-08-19T08:03:27.203377657Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 19 08:03:27.203545 containerd[1591]: time="2025-08-19T08:03:27.203491661Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 19 08:03:27.203545 containerd[1591]: time="2025-08-19T08:03:27.203515686Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 19 08:03:27.203545 containerd[1591]: time="2025-08-19T08:03:27.203532107Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 19 08:03:27.203603 containerd[1591]: time="2025-08-19T08:03:27.203551193Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 19 08:03:27.203603 containerd[1591]: time="2025-08-19T08:03:27.203567663Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 19 08:03:27.203603 containerd[1591]: time="2025-08-19T08:03:27.203587420Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 19 08:03:27.203658 containerd[1591]: time="2025-08-19T08:03:27.203607398Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 19 08:03:27.203658 containerd[1591]: time="2025-08-19T08:03:27.203624470Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 19 08:03:27.203658 containerd[1591]: time="2025-08-19T08:03:27.203638446Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 19 08:03:27.203658 containerd[1591]: time="2025-08-19T08:03:27.203651110Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 19 08:03:27.203732 containerd[1591]: time="2025-08-19T08:03:27.203669174Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 19 08:03:27.203981 containerd[1591]: time="2025-08-19T08:03:27.203942547Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 19 08:03:27.204017 containerd[1591]: time="2025-08-19T08:03:27.203984766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 19 08:03:27.204017 containerd[1591]: time="2025-08-19T08:03:27.204011095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 19 08:03:27.204066 containerd[1591]: time="2025-08-19T08:03:27.204024901Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 19 08:03:27.204066 containerd[1591]: time="2025-08-19T08:03:27.204036222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 19 08:03:27.204066 containerd[1591]: time="2025-08-19T08:03:27.204046612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 19 08:03:27.204066 containerd[1591]: time="2025-08-19T08:03:27.204057282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 19 08:03:27.204066 containerd[1591]: time="2025-08-19T08:03:27.204067240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 19 08:03:27.204175 containerd[1591]: time="2025-08-19T08:03:27.204079193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 19 08:03:27.204175 containerd[1591]: time="2025-08-19T08:03:27.204091927Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 19 08:03:27.204175 containerd[1591]: time="2025-08-19T08:03:27.204102256Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 19 08:03:27.204229 containerd[1591]: time="2025-08-19T08:03:27.204202023Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 19 08:03:27.204229 containerd[1591]: time="2025-08-19T08:03:27.204218735Z" level=info msg="Start snapshots syncer" Aug 19 08:03:27.204306 containerd[1591]: time="2025-08-19T08:03:27.204274339Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 19 08:03:27.204623 containerd[1591]: time="2025-08-19T08:03:27.204570254Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 19 08:03:27.204913 containerd[1591]: time="2025-08-19T08:03:27.204643241Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 19 08:03:27.204913 containerd[1591]: time="2025-08-19T08:03:27.204754560Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 19 08:03:27.205061 containerd[1591]: time="2025-08-19T08:03:27.205034795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 19 08:03:27.205105 containerd[1591]: time="2025-08-19T08:03:27.205075251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 19 08:03:27.205105 containerd[1591]: time="2025-08-19T08:03:27.205093135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 19 08:03:27.205105 containerd[1591]: time="2025-08-19T08:03:27.205102813Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 19 08:03:27.205160 containerd[1591]: time="2025-08-19T08:03:27.205114034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 19 08:03:27.205160 containerd[1591]: time="2025-08-19T08:03:27.205125105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 19 08:03:27.205160 containerd[1591]: time="2025-08-19T08:03:27.205136777Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 19 08:03:27.205219 containerd[1591]: time="2025-08-19T08:03:27.205170169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 19 08:03:27.205219 containerd[1591]: 
time="2025-08-19T08:03:27.205182843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 19 08:03:27.205219 containerd[1591]: time="2025-08-19T08:03:27.205193423Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 19 08:03:27.205274 containerd[1591]: time="2025-08-19T08:03:27.205237636Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:03:27.205274 containerd[1591]: time="2025-08-19T08:03:27.205254057Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:03:27.205322 containerd[1591]: time="2025-08-19T08:03:27.205274305Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:03:27.205322 containerd[1591]: time="2025-08-19T08:03:27.205286037Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:03:27.205322 containerd[1591]: time="2025-08-19T08:03:27.205294743Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 19 08:03:27.205322 containerd[1591]: time="2025-08-19T08:03:27.205311354Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 19 08:03:27.205417 containerd[1591]: time="2025-08-19T08:03:27.205324559Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 19 08:03:27.205417 containerd[1591]: time="2025-08-19T08:03:27.205356859Z" level=info msg="runtime interface created" Aug 19 08:03:27.205417 containerd[1591]: time="2025-08-19T08:03:27.205362250Z" level=info msg="created NRI interface" Aug 19 08:03:27.205417 containerd[1591]: time="2025-08-19T08:03:27.205373962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 19 08:03:27.205417 containerd[1591]: time="2025-08-19T08:03:27.205384541Z" level=info msg="Connect containerd service" Aug 19 08:03:27.205417 containerd[1591]: time="2025-08-19T08:03:27.205406923Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 19 08:03:27.206455 containerd[1591]: time="2025-08-19T08:03:27.206418912Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 08:03:27.368086 tar[1577]: linux-amd64/LICENSE Aug 19 08:03:27.368086 tar[1577]: linux-amd64/README.md Aug 19 08:03:27.450165 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Aug 19 08:03:27.522916 containerd[1591]: time="2025-08-19T08:03:27.522802577Z" level=info msg="Start subscribing containerd event" Aug 19 08:03:27.523099 containerd[1591]: time="2025-08-19T08:03:27.522913936Z" level=info msg="Start recovering state" Aug 19 08:03:27.523170 containerd[1591]: time="2025-08-19T08:03:27.523148686Z" level=info msg="Start event monitor" Aug 19 08:03:27.523198 containerd[1591]: time="2025-08-19T08:03:27.523173613Z" level=info msg="Start cni network conf syncer for default" Aug 19 08:03:27.523252 containerd[1591]: time="2025-08-19T08:03:27.523209630Z" level=info msg="Start streaming server" Aug 19 08:03:27.523252 containerd[1591]: time="2025-08-19T08:03:27.523243274Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 19 08:03:27.523252 containerd[1591]: time="2025-08-19T08:03:27.523252461Z" level=info msg="runtime interface starting up..." Aug 19 08:03:27.523310 containerd[1591]: time="2025-08-19T08:03:27.523262329Z" level=info msg="starting plugins..." Aug 19 08:03:27.523310 containerd[1591]: time="2025-08-19T08:03:27.523284801Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 19 08:03:27.523444 containerd[1591]: time="2025-08-19T08:03:27.523389307Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 19 08:03:27.523522 containerd[1591]: time="2025-08-19T08:03:27.523485949Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 19 08:03:27.523643 containerd[1591]: time="2025-08-19T08:03:27.523614580Z" level=info msg="containerd successfully booted in 0.383202s" Aug 19 08:03:27.523854 systemd[1]: Started containerd.service - containerd container runtime. Aug 19 08:03:27.636751 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 19 08:03:27.639608 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:55818.service - OpenSSH per-connection server daemon (10.0.0.1:55818). Aug 19 08:03:27.717510 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 55818 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:03:27.719698 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:03:27.727170 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 19 08:03:27.729804 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 19 08:03:27.733019 systemd-networkd[1496]: eth0: Gained IPv6LL Aug 19 08:03:27.750198 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 19 08:03:27.755525 systemd-logind[1568]: New session 1 of user core. Aug 19 08:03:27.756010 systemd[1]: Reached target network-online.target - Network is Online. Aug 19 08:03:27.758795 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 19 08:03:27.762027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:03:27.775320 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 19 08:03:27.779835 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 19 08:03:27.790674 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 19 08:03:27.808273 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 19 08:03:27.811194 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 19 08:03:27.813189 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Aug 19 08:03:27.813609 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 19 08:03:27.816308 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 19 08:03:27.818659 systemd-logind[1568]: New session c1 of user core. Aug 19 08:03:28.014681 systemd[1690]: Queued start job for default target default.target. Aug 19 08:03:28.023367 systemd[1690]: Created slice app.slice - User Application Slice. Aug 19 08:03:28.023395 systemd[1690]: Reached target paths.target - Paths. Aug 19 08:03:28.023438 systemd[1690]: Reached target timers.target - Timers. Aug 19 08:03:28.025200 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 19 08:03:28.040312 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 19 08:03:28.040501 systemd[1690]: Reached target sockets.target - Sockets. Aug 19 08:03:28.040559 systemd[1690]: Reached target basic.target - Basic System. Aug 19 08:03:28.040610 systemd[1690]: Reached target default.target - Main User Target. Aug 19 08:03:28.040655 systemd[1690]: Startup finished in 213ms. Aug 19 08:03:28.040809 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 19 08:03:28.053086 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 19 08:03:28.110015 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:55828.service - OpenSSH per-connection server daemon (10.0.0.1:55828). Aug 19 08:03:28.196589 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 55828 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:03:28.198568 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:03:28.203928 systemd-logind[1568]: New session 2 of user core. Aug 19 08:03:28.248160 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 19 08:03:28.307463 sshd[1712]: Connection closed by 10.0.0.1 port 55828 Aug 19 08:03:28.308853 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Aug 19 08:03:28.319844 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:55828.service: Deactivated successfully. Aug 19 08:03:28.321933 systemd[1]: session-2.scope: Deactivated successfully. Aug 19 08:03:28.322869 systemd-logind[1568]: Session 2 logged out. Waiting for processes to exit. Aug 19 08:03:28.326111 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:55842.service - OpenSSH per-connection server daemon (10.0.0.1:55842). Aug 19 08:03:28.328482 systemd-logind[1568]: Removed session 2. Aug 19 08:03:28.443046 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 55842 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:03:28.445016 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:03:28.450383 systemd-logind[1568]: New session 3 of user core. Aug 19 08:03:28.457147 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 19 08:03:28.517442 sshd[1721]: Connection closed by 10.0.0.1 port 55842 Aug 19 08:03:28.517833 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Aug 19 08:03:28.522584 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:55842.service: Deactivated successfully. Aug 19 08:03:28.525072 systemd[1]: session-3.scope: Deactivated successfully. Aug 19 08:03:28.526637 systemd-logind[1568]: Session 3 logged out. Waiting for processes to exit. Aug 19 08:03:28.528038 systemd-logind[1568]: Removed session 3. 
Aug 19 08:03:29.603333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:03:29.605106 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 19 08:03:29.607011 systemd[1]: Startup finished in 4.258s (kernel) + 8.718s (initrd) + 6.773s (userspace) = 19.750s. Aug 19 08:03:29.617264 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:03:30.368562 kubelet[1731]: E0819 08:03:30.368470 1731 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:03:30.373582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:03:30.373810 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:03:30.374271 systemd[1]: kubelet.service: Consumed 2.368s CPU time, 267.8M memory peak. Aug 19 08:03:38.531784 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:37340.service - OpenSSH per-connection server daemon (10.0.0.1:37340). Aug 19 08:03:38.601724 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 37340 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:03:38.604117 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:03:38.610678 systemd-logind[1568]: New session 4 of user core. Aug 19 08:03:38.621088 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 19 08:03:38.676818 sshd[1747]: Connection closed by 10.0.0.1 port 37340 Aug 19 08:03:38.677201 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Aug 19 08:03:38.696911 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:37340.service: Deactivated successfully. Aug 19 08:03:38.698950 systemd[1]: session-4.scope: Deactivated successfully. Aug 19 08:03:38.699872 systemd-logind[1568]: Session 4 logged out. Waiting for processes to exit. Aug 19 08:03:38.702999 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:37354.service - OpenSSH per-connection server daemon (10.0.0.1:37354). Aug 19 08:03:38.703728 systemd-logind[1568]: Removed session 4. Aug 19 08:03:38.772241 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 37354 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:03:38.774249 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:03:38.779137 systemd-logind[1568]: New session 5 of user core. Aug 19 08:03:38.789077 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 19 08:03:38.841398 sshd[1756]: Connection closed by 10.0.0.1 port 37354 Aug 19 08:03:38.841980 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Aug 19 08:03:38.852712 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:37354.service: Deactivated successfully. Aug 19 08:03:38.854989 systemd[1]: session-5.scope: Deactivated successfully. Aug 19 08:03:38.855946 systemd-logind[1568]: Session 5 logged out. Waiting for processes to exit. Aug 19 08:03:38.859345 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:37364.service - OpenSSH per-connection server daemon (10.0.0.1:37364). Aug 19 08:03:38.860086 systemd-logind[1568]: Removed session 5. 
Aug 19 08:03:38.923634 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 37364 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:03:38.925675 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:03:38.931285 systemd-logind[1568]: New session 6 of user core. Aug 19 08:03:38.941114 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 19 08:03:38.998864 sshd[1765]: Connection closed by 10.0.0.1 port 37364 Aug 19 08:03:38.999356 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Aug 19 08:03:39.011622 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:37364.service: Deactivated successfully. Aug 19 08:03:39.013847 systemd[1]: session-6.scope: Deactivated successfully. Aug 19 08:03:39.014829 systemd-logind[1568]: Session 6 logged out. Waiting for processes to exit. Aug 19 08:03:39.017815 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:50744.service - OpenSSH per-connection server daemon (10.0.0.1:50744). Aug 19 08:03:39.018623 systemd-logind[1568]: Removed session 6. Aug 19 08:03:39.070148 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 50744 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:03:39.071740 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:03:39.076758 systemd-logind[1568]: New session 7 of user core. Aug 19 08:03:39.086125 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 19 08:03:39.148137 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 19 08:03:39.148684 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:03:39.170023 sudo[1775]: pam_unix(sudo:session): session closed for user root Aug 19 08:03:39.172560 sshd[1774]: Connection closed by 10.0.0.1 port 50744 Aug 19 08:03:39.173121 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Aug 19 08:03:39.189518 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:50744.service: Deactivated successfully. Aug 19 08:03:39.192520 systemd[1]: session-7.scope: Deactivated successfully. Aug 19 08:03:39.193643 systemd-logind[1568]: Session 7 logged out. Waiting for processes to exit. Aug 19 08:03:39.197777 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:50748.service - OpenSSH per-connection server daemon (10.0.0.1:50748). Aug 19 08:03:39.198539 systemd-logind[1568]: Removed session 7. Aug 19 08:03:39.273533 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 50748 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:03:39.276361 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:03:39.285873 systemd-logind[1568]: New session 8 of user core. Aug 19 08:03:39.296175 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 19 08:03:39.363967 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 19 08:03:39.364504 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:03:39.374280 sudo[1786]: pam_unix(sudo:session): session closed for user root Aug 19 08:03:39.386101 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 19 08:03:39.386678 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:03:39.409071 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:03:39.488349 augenrules[1808]: No rules Aug 19 08:03:39.492580 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:03:39.493026 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:03:39.494941 sudo[1785]: pam_unix(sudo:session): session closed for user root Aug 19 08:03:39.497310 sshd[1784]: Connection closed by 10.0.0.1 port 50748 Aug 19 08:03:39.497867 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Aug 19 08:03:39.508324 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:50748.service: Deactivated successfully. Aug 19 08:03:39.510467 systemd[1]: session-8.scope: Deactivated successfully. Aug 19 08:03:39.511388 systemd-logind[1568]: Session 8 logged out. Waiting for processes to exit. Aug 19 08:03:39.515214 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:50750.service - OpenSSH per-connection server daemon (10.0.0.1:50750). Aug 19 08:03:39.516031 systemd-logind[1568]: Removed session 8. Aug 19 08:03:39.573339 sshd[1817]: Accepted publickey for core from 10.0.0.1 port 50750 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:03:39.575463 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:03:39.580932 systemd-logind[1568]: New session 9 of user core. Aug 19 08:03:39.599213 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 19 08:03:39.656006 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 19 08:03:39.656342 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:03:40.534600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 19 08:03:40.536802 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 19 08:03:40.538366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:03:40.562729 (dockerd)[1841]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 19 08:03:40.916814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 19 08:03:40.922337 (kubelet)[1850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:03:40.989307 kubelet[1850]: E0819 08:03:40.989237 1850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:03:40.996060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:03:40.996273 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:03:40.996720 systemd[1]: kubelet.service: Consumed 390ms CPU time, 113.6M memory peak. Aug 19 08:03:41.369097 dockerd[1841]: time="2025-08-19T08:03:41.368813596Z" level=info msg="Starting up" Aug 19 08:03:41.370593 dockerd[1841]: time="2025-08-19T08:03:41.370534584Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 19 08:03:41.391727 dockerd[1841]: time="2025-08-19T08:03:41.391657755Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Aug 19 08:03:41.682484 dockerd[1841]: time="2025-08-19T08:03:41.682273197Z" level=info msg="Loading containers: start." Aug 19 08:03:41.696947 kernel: Initializing XFRM netlink socket Aug 19 08:03:42.080189 systemd-networkd[1496]: docker0: Link UP Aug 19 08:03:42.086530 dockerd[1841]: time="2025-08-19T08:03:42.086455818Z" level=info msg="Loading containers: done." Aug 19 08:03:42.123484 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1752358273-merged.mount: Deactivated successfully. Aug 19 08:03:42.125195 dockerd[1841]: time="2025-08-19T08:03:42.125131923Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 19 08:03:42.125297 dockerd[1841]: time="2025-08-19T08:03:42.125275392Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Aug 19 08:03:42.125474 dockerd[1841]: time="2025-08-19T08:03:42.125437927Z" level=info msg="Initializing buildkit" Aug 19 08:03:42.172074 dockerd[1841]: time="2025-08-19T08:03:42.172015673Z" level=info msg="Completed buildkit initialization" Aug 19 08:03:42.179046 dockerd[1841]: time="2025-08-19T08:03:42.178947095Z" level=info msg="Daemon has completed initialization" Aug 19 08:03:42.179673 dockerd[1841]: time="2025-08-19T08:03:42.179085394Z" level=info msg="API listen on /run/docker.sock" Aug 19 08:03:42.179302 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 19 08:03:43.343671 containerd[1591]: time="2025-08-19T08:03:43.343567730Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Aug 19 08:03:43.999439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1891697069.mount: Deactivated successfully. 
Aug 19 08:03:45.741776 containerd[1591]: time="2025-08-19T08:03:45.741694971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:45.742423 containerd[1591]: time="2025-08-19T08:03:45.742367723Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Aug 19 08:03:45.743653 containerd[1591]: time="2025-08-19T08:03:45.743583153Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:45.746484 containerd[1591]: time="2025-08-19T08:03:45.746452014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:45.747374 containerd[1591]: time="2025-08-19T08:03:45.747320674Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.403652054s" Aug 19 08:03:45.747374 containerd[1591]: time="2025-08-19T08:03:45.747376939Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Aug 19 08:03:45.748276 containerd[1591]: time="2025-08-19T08:03:45.748244987Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Aug 19 08:03:47.753980 containerd[1591]: time="2025-08-19T08:03:47.753914190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:47.754782 containerd[1591]: time="2025-08-19T08:03:47.754722626Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Aug 19 08:03:47.757688 containerd[1591]: time="2025-08-19T08:03:47.757643124Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:47.760433 containerd[1591]: time="2025-08-19T08:03:47.760397971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:47.761272 containerd[1591]: time="2025-08-19T08:03:47.761233088Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 2.012959146s" Aug 19 08:03:47.761272 containerd[1591]: time="2025-08-19T08:03:47.761270959Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Aug 19 
08:03:47.761830 containerd[1591]: time="2025-08-19T08:03:47.761784522Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Aug 19 08:03:49.476777 containerd[1591]: time="2025-08-19T08:03:49.476717186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:49.477398 containerd[1591]: time="2025-08-19T08:03:49.477368167Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Aug 19 08:03:49.478559 containerd[1591]: time="2025-08-19T08:03:49.478512964Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:49.480804 containerd[1591]: time="2025-08-19T08:03:49.480743728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:49.481817 containerd[1591]: time="2025-08-19T08:03:49.481777096Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.719957619s" Aug 19 08:03:49.481817 containerd[1591]: time="2025-08-19T08:03:49.481810138Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Aug 19 08:03:49.482407 containerd[1591]: time="2025-08-19T08:03:49.482380057Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Aug 19 08:03:50.992443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3251501521.mount: Deactivated successfully. Aug 19 08:03:51.059694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 19 08:03:51.061624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:03:51.332149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:03:51.343248 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:03:51.485635 kubelet[2156]: E0819 08:03:51.485556 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:03:51.490860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:03:51.491168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:03:51.492034 systemd[1]: kubelet.service: Consumed 371ms CPU time, 110.7M memory peak. 
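Each kubelet start above fails the same way because /var/lib/kubelet/config.yaml does not exist yet; on setups like this it is normally written later by kubeadm during init/join (an assumption, the log only shows the file missing), so the unit keeps exiting with status 1 and systemd schedules restarts. A minimal sketch, assuming that path and the kubelet.service unit name from the log, of waiting for the file before restarting the unit:

import pathlib
import subprocess
import time

# Path taken from the error message above; the unit name "kubelet.service" also
# appears in the log. The polling loop and restart policy are assumptions.
KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

def wait_for_kubelet_config(poll_seconds: float = 5.0) -> None:
    # Until this file exists, every start exits with status 1 as shown in the log.
    while not KUBELET_CONFIG.is_file():
        time.sleep(poll_seconds)
    subprocess.run(["systemctl", "restart", "kubelet.service"], check=True)

if __name__ == "__main__":
    wait_for_kubelet_config()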
Aug 19 08:03:52.045769 containerd[1591]: time="2025-08-19T08:03:52.045683492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:52.047224 containerd[1591]: time="2025-08-19T08:03:52.047182333Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Aug 19 08:03:52.048401 containerd[1591]: time="2025-08-19T08:03:52.048325347Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:52.050975 containerd[1591]: time="2025-08-19T08:03:52.050863859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:52.051573 containerd[1591]: time="2025-08-19T08:03:52.051517675Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.56910701s" Aug 19 08:03:52.051573 containerd[1591]: time="2025-08-19T08:03:52.051568721Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Aug 19 08:03:52.052393 containerd[1591]: time="2025-08-19T08:03:52.052365686Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 19 08:03:52.716788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838716425.mount: Deactivated successfully. 
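For a rough sense of pull speed, the kube-proxy entry above reports both the bytes read and the wall-clock pull time. A one-off calculation from those two numbers; the rate is derived, the log does not report one, and this ignores any layers already cached locally.

BYTES_READ = 30_384_255     # "bytes read=30384255" from the entry above
PULL_SECONDS = 2.56910701   # "in 2.56910701s" from the entry above
print(f"~{BYTES_READ / PULL_SECONDS / 2**20:.1f} MiB/s")  # ~11.3 MiB/s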
Aug 19 08:03:53.834448 containerd[1591]: time="2025-08-19T08:03:53.834336093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:53.835167 containerd[1591]: time="2025-08-19T08:03:53.835083405Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 19 08:03:53.837912 containerd[1591]: time="2025-08-19T08:03:53.836928135Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:53.841801 containerd[1591]: time="2025-08-19T08:03:53.841707249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:53.842715 containerd[1591]: time="2025-08-19T08:03:53.842655257Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.790258443s" Aug 19 08:03:53.842715 containerd[1591]: time="2025-08-19T08:03:53.842705472Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 19 08:03:53.843394 containerd[1591]: time="2025-08-19T08:03:53.843327528Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 19 08:03:54.321674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164039313.mount: Deactivated successfully. 
Aug 19 08:03:54.327859 containerd[1591]: time="2025-08-19T08:03:54.327799569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:03:54.328624 containerd[1591]: time="2025-08-19T08:03:54.328599710Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 19 08:03:54.330164 containerd[1591]: time="2025-08-19T08:03:54.330090376Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:03:54.332612 containerd[1591]: time="2025-08-19T08:03:54.332572322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:03:54.333328 containerd[1591]: time="2025-08-19T08:03:54.333291290Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 489.929808ms" Aug 19 08:03:54.333328 containerd[1591]: time="2025-08-19T08:03:54.333329632Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 19 08:03:54.333963 containerd[1591]: time="2025-08-19T08:03:54.333905913Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 19 08:03:54.825013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount471687429.mount: Deactivated successfully. 
Aug 19 08:03:57.136952 containerd[1591]: time="2025-08-19T08:03:57.136829607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:57.137574 containerd[1591]: time="2025-08-19T08:03:57.137513750Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Aug 19 08:03:57.138990 containerd[1591]: time="2025-08-19T08:03:57.138955715Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:57.142310 containerd[1591]: time="2025-08-19T08:03:57.142251998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:03:57.143452 containerd[1591]: time="2025-08-19T08:03:57.143392557Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.809451548s" Aug 19 08:03:57.143452 containerd[1591]: time="2025-08-19T08:03:57.143432692Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 19 08:04:00.116166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:04:00.116371 systemd[1]: kubelet.service: Consumed 371ms CPU time, 110.7M memory peak. Aug 19 08:04:00.119131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:04:00.156817 systemd[1]: Reload requested from client PID 2305 ('systemctl') (unit session-9.scope)... Aug 19 08:04:00.156851 systemd[1]: Reloading... Aug 19 08:04:00.262923 zram_generator::config[2348]: No configuration found. Aug 19 08:04:00.766016 systemd[1]: Reloading finished in 608 ms. Aug 19 08:04:00.837742 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 19 08:04:00.837860 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 19 08:04:00.838225 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:04:00.838274 systemd[1]: kubelet.service: Consumed 164ms CPU time, 98.3M memory peak. Aug 19 08:04:00.840112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:04:01.016136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:04:01.028371 (kubelet)[2396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 08:04:01.076958 kubelet[2396]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:04:01.076958 kubelet[2396]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 19 08:04:01.076958 kubelet[2396]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:04:01.077411 kubelet[2396]: I0819 08:04:01.077117 2396 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:04:01.230107 kubelet[2396]: I0819 08:04:01.230022 2396 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 19 08:04:01.230107 kubelet[2396]: I0819 08:04:01.230085 2396 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:04:01.230443 kubelet[2396]: I0819 08:04:01.230416 2396 server.go:934] "Client rotation is on, will bootstrap in background" Aug 19 08:04:01.314122 kubelet[2396]: E0819 08:04:01.314046 2396 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:04:01.316844 kubelet[2396]: I0819 08:04:01.316798 2396 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:04:01.332124 kubelet[2396]: I0819 08:04:01.332098 2396 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:04:01.340989 kubelet[2396]: I0819 08:04:01.340936 2396 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 08:04:01.342122 kubelet[2396]: I0819 08:04:01.342090 2396 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 19 08:04:01.342359 kubelet[2396]: I0819 08:04:01.342306 2396 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:04:01.342589 kubelet[2396]: I0819 08:04:01.342355 2396 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:04:01.342844 kubelet[2396]: I0819 08:04:01.342602 2396 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 08:04:01.342844 kubelet[2396]: I0819 08:04:01.342611 2396 container_manager_linux.go:300] "Creating device plugin manager" Aug 19 08:04:01.342844 kubelet[2396]: I0819 08:04:01.342751 2396 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:04:01.346536 kubelet[2396]: I0819 08:04:01.346474 2396 kubelet.go:408] "Attempting to sync node with API server" Aug 19 08:04:01.346536 kubelet[2396]: I0819 08:04:01.346506 2396 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:04:01.346536 kubelet[2396]: I0819 08:04:01.346550 2396 kubelet.go:314] "Adding apiserver pod source" Aug 19 08:04:01.346866 kubelet[2396]: I0819 08:04:01.346593 2396 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:04:01.350035 kubelet[2396]: W0819 08:04:01.349949 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Aug 19 08:04:01.350035 kubelet[2396]: E0819 08:04:01.350031 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:04:01.350035 kubelet[2396]: W0819 08:04:01.349961 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Aug 19 08:04:01.350249 kubelet[2396]: E0819 08:04:01.350077 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:04:01.350249 kubelet[2396]: I0819 08:04:01.350134 2396 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:04:01.354522 kubelet[2396]: I0819 08:04:01.354497 2396 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 08:04:01.355499 kubelet[2396]: W0819 08:04:01.355362 2396 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 19 08:04:01.359594 kubelet[2396]: I0819 08:04:01.359554 2396 server.go:1274] "Started kubelet" Aug 19 08:04:01.359743 kubelet[2396]: I0819 08:04:01.359690 2396 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:04:01.360232 kubelet[2396]: I0819 08:04:01.360200 2396 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 08:04:01.360761 kubelet[2396]: I0819 08:04:01.360742 2396 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:04:01.361269 kubelet[2396]: I0819 08:04:01.361224 2396 server.go:449] "Adding debug handlers to kubelet server" Aug 19 08:04:01.361901 kubelet[2396]: I0819 08:04:01.361850 2396 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:04:01.362782 kubelet[2396]: I0819 08:04:01.362752 2396 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:04:01.364877 kubelet[2396]: E0819 08:04:01.364825 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:04:01.364877 kubelet[2396]: I0819 08:04:01.364897 2396 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 19 08:04:01.365129 kubelet[2396]: I0819 08:04:01.365103 2396 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 19 08:04:01.365295 kubelet[2396]: I0819 08:04:01.365183 2396 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:04:01.365545 kubelet[2396]: W0819 08:04:01.365498 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Aug 19 08:04:01.365545 kubelet[2396]: E0819 08:04:01.365542 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:04:01.365727 kubelet[2396]: I0819 08:04:01.365709 2396 factory.go:221] Registration of the systemd container factory successfully Aug 19 08:04:01.365819 kubelet[2396]: I0819 08:04:01.365798 2396 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:04:01.366397 kubelet[2396]: E0819 08:04:01.366353 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms" Aug 19 08:04:01.366560 kubelet[2396]: E0819 08:04:01.366469 2396 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:04:01.367770 kubelet[2396]: E0819 08:04:01.365857 2396 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185d1c63053bd7f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-19 08:04:01.359509493 +0000 UTC m=+0.326475752,LastTimestamp:2025-08-19 08:04:01.359509493 +0000 UTC m=+0.326475752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 19 08:04:01.367926 kubelet[2396]: I0819 08:04:01.367908 2396 factory.go:221] Registration of the containerd container factory successfully Aug 19 08:04:01.385339 kubelet[2396]: I0819 08:04:01.385031 2396 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 08:04:01.387571 kubelet[2396]: I0819 08:04:01.387538 2396 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 19 08:04:01.387615 kubelet[2396]: I0819 08:04:01.387586 2396 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 19 08:04:01.387660 kubelet[2396]: I0819 08:04:01.387628 2396 kubelet.go:2321] "Starting kubelet main sync loop" Aug 19 08:04:01.387718 kubelet[2396]: E0819 08:04:01.387692 2396 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:04:01.388607 kubelet[2396]: W0819 08:04:01.388498 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Aug 19 08:04:01.388607 kubelet[2396]: E0819 08:04:01.388561 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:04:01.389622 kubelet[2396]: I0819 08:04:01.389588 2396 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 19 08:04:01.389622 kubelet[2396]: I0819 08:04:01.389615 2396 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 19 08:04:01.389680 kubelet[2396]: I0819 08:04:01.389644 2396 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:04:01.465674 kubelet[2396]: E0819 08:04:01.465569 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:04:01.487919 kubelet[2396]: E0819 08:04:01.487829 2396 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 08:04:01.566735 kubelet[2396]: E0819 08:04:01.566565 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:04:01.567046 kubelet[2396]: E0819 08:04:01.566996 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" Aug 19 08:04:01.651294 kubelet[2396]: I0819 08:04:01.651238 2396 policy_none.go:49] "None policy: Start" Aug 19 08:04:01.652324 kubelet[2396]: I0819 08:04:01.652295 2396 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 19 08:04:01.652324 kubelet[2396]: I0819 08:04:01.652327 2396 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:04:01.659543 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 19 08:04:01.667502 kubelet[2396]: E0819 08:04:01.667467 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:04:01.674872 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 19 08:04:01.678556 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 19 08:04:01.688508 kubelet[2396]: E0819 08:04:01.688469 2396 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 08:04:01.698541 kubelet[2396]: I0819 08:04:01.698490 2396 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 08:04:01.698861 kubelet[2396]: I0819 08:04:01.698820 2396 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:04:01.698940 kubelet[2396]: I0819 08:04:01.698846 2396 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:04:01.699390 kubelet[2396]: I0819 08:04:01.699235 2396 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:04:01.700361 kubelet[2396]: E0819 08:04:01.700336 2396 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 19 08:04:01.800679 kubelet[2396]: I0819 08:04:01.800627 2396 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 08:04:01.801172 kubelet[2396]: E0819 08:04:01.801114 2396 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Aug 19 08:04:01.968673 kubelet[2396]: E0819 08:04:01.968494 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" Aug 19 08:04:02.002986 kubelet[2396]: I0819 08:04:02.002905 2396 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 08:04:02.003533 kubelet[2396]: E0819 08:04:02.003500 2396 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Aug 19 08:04:02.099592 systemd[1]: Created slice kubepods-burstable-pod9da9049669923e4556cb3e8ee1c2f8ab.slice - libcontainer container kubepods-burstable-pod9da9049669923e4556cb3e8ee1c2f8ab.slice. Aug 19 08:04:02.129032 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Aug 19 08:04:02.133852 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
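The kubepods-burstable-pod<hash>.slice units created above reflect the kubelet's systemd cgroup naming: the path components kubepods, the QoS class, and "pod" plus the pod UID are joined with dashes and suffixed with .slice, with dashes inside a component escaped as underscores. The sketch below is a simplified reconstruction of that convention, not the kubelet's actual implementation, and the escaping rule is stated as an assumption.

```go
// Hedged sketch of the slice naming visible above (e.g.
// kubepods-burstable-pod9da9049669923e4556cb3e8ee1c2f8ab.slice).
// Assumption: dashes inside a component are escaped as underscores, since systemd
// reserves "-" to express slice hierarchy.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	parts := []string{"kubepods", qosClass, "pod" + podUID}
	for i, p := range parts {
		parts[i] = strings.ReplaceAll(p, "-", "_")
	}
	return strings.Join(parts, "-") + ".slice"
}

func main() {
	// Reproduces the unit name systemd created for the kube-apiserver static pod above.
	fmt.Println(podSliceName("burstable", "9da9049669923e4556cb3e8ee1c2f8ab"))
}
```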
Aug 19 08:04:02.170313 kubelet[2396]: I0819 08:04:02.170244 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9da9049669923e4556cb3e8ee1c2f8ab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9da9049669923e4556cb3e8ee1c2f8ab\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:04:02.170313 kubelet[2396]: I0819 08:04:02.170293 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:04:02.170313 kubelet[2396]: I0819 08:04:02.170313 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:04:02.170876 kubelet[2396]: I0819 08:04:02.170333 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Aug 19 08:04:02.170876 kubelet[2396]: I0819 08:04:02.170350 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:04:02.170876 kubelet[2396]: I0819 08:04:02.170363 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9da9049669923e4556cb3e8ee1c2f8ab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9da9049669923e4556cb3e8ee1c2f8ab\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:04:02.170876 kubelet[2396]: I0819 08:04:02.170378 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9da9049669923e4556cb3e8ee1c2f8ab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9da9049669923e4556cb3e8ee1c2f8ab\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:04:02.170876 kubelet[2396]: I0819 08:04:02.170400 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:04:02.171021 kubelet[2396]: I0819 08:04:02.170478 2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 19 08:04:02.238466 kubelet[2396]: W0819 08:04:02.238277 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Aug 19 08:04:02.238466 kubelet[2396]: E0819 08:04:02.238377 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:04:02.249248 kubelet[2396]: W0819 08:04:02.249177 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Aug 19 08:04:02.249248 kubelet[2396]: E0819 08:04:02.249244 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:04:02.346331 kubelet[2396]: W0819 08:04:02.346193 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Aug 19 08:04:02.346331 kubelet[2396]: E0819 08:04:02.346304 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:04:02.405468 kubelet[2396]: I0819 08:04:02.405417 2396 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 08:04:02.405784 kubelet[2396]: E0819 08:04:02.405737 2396 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Aug 19 08:04:02.425320 kubelet[2396]: E0819 08:04:02.425263 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:02.426111 containerd[1591]: time="2025-08-19T08:04:02.426033993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9da9049669923e4556cb3e8ee1c2f8ab,Namespace:kube-system,Attempt:0,}" Aug 19 08:04:02.432272 kubelet[2396]: E0819 08:04:02.432240 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:02.432609 containerd[1591]: time="2025-08-19T08:04:02.432570148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Aug 19 08:04:02.436899 kubelet[2396]: E0819 
08:04:02.436845 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:02.437196 containerd[1591]: time="2025-08-19T08:04:02.437162520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Aug 19 08:04:02.470278 kubelet[2396]: W0819 08:04:02.470160 2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Aug 19 08:04:02.470348 kubelet[2396]: E0819 08:04:02.470286 2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:04:02.769701 kubelet[2396]: E0819 08:04:02.769628 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="1.6s" Aug 19 08:04:03.117919 containerd[1591]: time="2025-08-19T08:04:03.117162514Z" level=info msg="connecting to shim d2addede2676021d8b6f44d0328ba7af192bc91420af98e2d12a7c118c5ec23b" address="unix:///run/containerd/s/13387c989bc77d638b2682987246758edcf0c6d59493b167c11f2803f913c677" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:04:03.140187 containerd[1591]: time="2025-08-19T08:04:03.140122919Z" level=info msg="connecting to shim 2c07876303c3a101ab65ba10bf487c96d79e2791bc3bd25fc6c4c7ae9bc92156" address="unix:///run/containerd/s/aeeaf17c6ecdaf8e500adc8201d1aca8e28cb4ee2f980d6c2abd73a5485f2684" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:04:03.151425 containerd[1591]: time="2025-08-19T08:04:03.151374726Z" level=info msg="connecting to shim 92ef37b9b5828c0ca76519718ce17bdef342946ba4be00ee21ce9983df48491a" address="unix:///run/containerd/s/7b6bdf3dec739d98a27ac581dc0c00421e055acb91c059754ff7dad2a91dc727" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:04:03.175217 systemd[1]: Started cri-containerd-d2addede2676021d8b6f44d0328ba7af192bc91420af98e2d12a7c118c5ec23b.scope - libcontainer container d2addede2676021d8b6f44d0328ba7af192bc91420af98e2d12a7c118c5ec23b. Aug 19 08:04:03.198099 systemd[1]: Started cri-containerd-2c07876303c3a101ab65ba10bf487c96d79e2791bc3bd25fc6c4c7ae9bc92156.scope - libcontainer container 2c07876303c3a101ab65ba10bf487c96d79e2791bc3bd25fc6c4c7ae9bc92156. Aug 19 08:04:03.203983 systemd[1]: Started cri-containerd-92ef37b9b5828c0ca76519718ce17bdef342946ba4be00ee21ce9983df48491a.scope - libcontainer container 92ef37b9b5828c0ca76519718ce17bdef342946ba4be00ee21ce9983df48491a. 
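The RunPodSandbox requests above are ordinary CRI gRPC calls from the kubelet to containerd; containerd spawns a shim per sandbox (the "connecting to shim ... address=unix:///run/containerd/s/..." lines) and, just below, returns the sandbox ids that the subsequent CreateContainer/StartContainer calls reference. As a hedged illustration, the same runtime service can be queried directly over containerd's CRI socket; the socket path is the stock containerd default and is an assumption here, and this sketch only reads state rather than creating sandboxes.

```go
// Hedged sketch: query the CRI runtime service that the kubelet drives above.
// The RunPodSandbox/CreateContainer/StartContainer calls in the log travel over
// this same gRPC service; here we only ask for the runtime version and the
// existing pod sandboxes.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion) // e.g. containerd v2.0.5, as logged above

	sandboxes, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, sb := range sandboxes.Items {
		fmt.Println(sb.Id, sb.Metadata.Namespace+"/"+sb.Metadata.Name)
	}
}
```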
Aug 19 08:04:03.208260 kubelet[2396]: I0819 08:04:03.208214 2396 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 08:04:03.208744 kubelet[2396]: E0819 08:04:03.208662 2396 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Aug 19 08:04:03.270285 containerd[1591]: time="2025-08-19T08:04:03.270193124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9da9049669923e4556cb3e8ee1c2f8ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2addede2676021d8b6f44d0328ba7af192bc91420af98e2d12a7c118c5ec23b\"" Aug 19 08:04:03.272402 kubelet[2396]: E0819 08:04:03.272356 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:03.275277 containerd[1591]: time="2025-08-19T08:04:03.275242379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"92ef37b9b5828c0ca76519718ce17bdef342946ba4be00ee21ce9983df48491a\"" Aug 19 08:04:03.275746 containerd[1591]: time="2025-08-19T08:04:03.275711466Z" level=info msg="CreateContainer within sandbox \"d2addede2676021d8b6f44d0328ba7af192bc91420af98e2d12a7c118c5ec23b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 19 08:04:03.276193 kubelet[2396]: E0819 08:04:03.276171 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:03.278177 containerd[1591]: time="2025-08-19T08:04:03.278102198Z" level=info msg="CreateContainer within sandbox \"92ef37b9b5828c0ca76519718ce17bdef342946ba4be00ee21ce9983df48491a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 19 08:04:03.291994 containerd[1591]: time="2025-08-19T08:04:03.291947715Z" level=info msg="Container bf3d2f0ba425603b628e0d2ca076246b4fc8ddf6bb23385eeeaff213e8e80002: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:04:03.293236 containerd[1591]: time="2025-08-19T08:04:03.293196732Z" level=info msg="Container bb8d2613682cb183594a183093c22b2d8c0f022f1df3a4a9d3f13f04a681764b: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:04:03.294851 containerd[1591]: time="2025-08-19T08:04:03.294803776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c07876303c3a101ab65ba10bf487c96d79e2791bc3bd25fc6c4c7ae9bc92156\"" Aug 19 08:04:03.295684 kubelet[2396]: E0819 08:04:03.295658 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:03.297453 containerd[1591]: time="2025-08-19T08:04:03.297420259Z" level=info msg="CreateContainer within sandbox \"2c07876303c3a101ab65ba10bf487c96d79e2791bc3bd25fc6c4c7ae9bc92156\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 19 08:04:03.304474 containerd[1591]: time="2025-08-19T08:04:03.304411088Z" level=info msg="CreateContainer within sandbox \"d2addede2676021d8b6f44d0328ba7af192bc91420af98e2d12a7c118c5ec23b\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bf3d2f0ba425603b628e0d2ca076246b4fc8ddf6bb23385eeeaff213e8e80002\"" Aug 19 08:04:03.305258 containerd[1591]: time="2025-08-19T08:04:03.305231276Z" level=info msg="StartContainer for \"bf3d2f0ba425603b628e0d2ca076246b4fc8ddf6bb23385eeeaff213e8e80002\"" Aug 19 08:04:03.306473 containerd[1591]: time="2025-08-19T08:04:03.306431601Z" level=info msg="connecting to shim bf3d2f0ba425603b628e0d2ca076246b4fc8ddf6bb23385eeeaff213e8e80002" address="unix:///run/containerd/s/13387c989bc77d638b2682987246758edcf0c6d59493b167c11f2803f913c677" protocol=ttrpc version=3 Aug 19 08:04:03.307358 containerd[1591]: time="2025-08-19T08:04:03.307330951Z" level=info msg="CreateContainer within sandbox \"92ef37b9b5828c0ca76519718ce17bdef342946ba4be00ee21ce9983df48491a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bb8d2613682cb183594a183093c22b2d8c0f022f1df3a4a9d3f13f04a681764b\"" Aug 19 08:04:03.307714 containerd[1591]: time="2025-08-19T08:04:03.307668997Z" level=info msg="StartContainer for \"bb8d2613682cb183594a183093c22b2d8c0f022f1df3a4a9d3f13f04a681764b\"" Aug 19 08:04:03.308951 containerd[1591]: time="2025-08-19T08:04:03.308925771Z" level=info msg="connecting to shim bb8d2613682cb183594a183093c22b2d8c0f022f1df3a4a9d3f13f04a681764b" address="unix:///run/containerd/s/7b6bdf3dec739d98a27ac581dc0c00421e055acb91c059754ff7dad2a91dc727" protocol=ttrpc version=3 Aug 19 08:04:03.311704 containerd[1591]: time="2025-08-19T08:04:03.311651112Z" level=info msg="Container d9f328ffcfd4961616db9060bd211281ef001f7674f48b8dbbf30862f4a9f459: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:04:03.319558 containerd[1591]: time="2025-08-19T08:04:03.319505762Z" level=info msg="CreateContainer within sandbox \"2c07876303c3a101ab65ba10bf487c96d79e2791bc3bd25fc6c4c7ae9bc92156\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d9f328ffcfd4961616db9060bd211281ef001f7674f48b8dbbf30862f4a9f459\"" Aug 19 08:04:03.320346 containerd[1591]: time="2025-08-19T08:04:03.320289801Z" level=info msg="StartContainer for \"d9f328ffcfd4961616db9060bd211281ef001f7674f48b8dbbf30862f4a9f459\"" Aug 19 08:04:03.324432 containerd[1591]: time="2025-08-19T08:04:03.324396805Z" level=info msg="connecting to shim d9f328ffcfd4961616db9060bd211281ef001f7674f48b8dbbf30862f4a9f459" address="unix:///run/containerd/s/aeeaf17c6ecdaf8e500adc8201d1aca8e28cb4ee2f980d6c2abd73a5485f2684" protocol=ttrpc version=3 Aug 19 08:04:03.329382 kubelet[2396]: E0819 08:04:03.329348 2396 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:04:03.330107 systemd[1]: Started cri-containerd-bb8d2613682cb183594a183093c22b2d8c0f022f1df3a4a9d3f13f04a681764b.scope - libcontainer container bb8d2613682cb183594a183093c22b2d8c0f022f1df3a4a9d3f13f04a681764b. Aug 19 08:04:03.331677 systemd[1]: Started cri-containerd-bf3d2f0ba425603b628e0d2ca076246b4fc8ddf6bb23385eeeaff213e8e80002.scope - libcontainer container bf3d2f0ba425603b628e0d2ca076246b4fc8ddf6bb23385eeeaff213e8e80002. 
Aug 19 08:04:03.355145 systemd[1]: Started cri-containerd-d9f328ffcfd4961616db9060bd211281ef001f7674f48b8dbbf30862f4a9f459.scope - libcontainer container d9f328ffcfd4961616db9060bd211281ef001f7674f48b8dbbf30862f4a9f459. Aug 19 08:04:03.400814 containerd[1591]: time="2025-08-19T08:04:03.400659588Z" level=info msg="StartContainer for \"bf3d2f0ba425603b628e0d2ca076246b4fc8ddf6bb23385eeeaff213e8e80002\" returns successfully" Aug 19 08:04:03.414763 kubelet[2396]: E0819 08:04:03.414621 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:03.423451 containerd[1591]: time="2025-08-19T08:04:03.423361508Z" level=info msg="StartContainer for \"bb8d2613682cb183594a183093c22b2d8c0f022f1df3a4a9d3f13f04a681764b\" returns successfully" Aug 19 08:04:03.451921 containerd[1591]: time="2025-08-19T08:04:03.451846259Z" level=info msg="StartContainer for \"d9f328ffcfd4961616db9060bd211281ef001f7674f48b8dbbf30862f4a9f459\" returns successfully" Aug 19 08:04:04.435621 kubelet[2396]: E0819 08:04:04.435468 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:04.442558 kubelet[2396]: E0819 08:04:04.442510 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:04.443397 kubelet[2396]: E0819 08:04:04.443356 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:04.810652 kubelet[2396]: I0819 08:04:04.810615 2396 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 08:04:04.865296 kubelet[2396]: E0819 08:04:04.865234 2396 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 19 08:04:04.954718 kubelet[2396]: I0819 08:04:04.954662 2396 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 19 08:04:04.954718 kubelet[2396]: E0819 08:04:04.954725 2396 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 19 08:04:04.967022 kubelet[2396]: E0819 08:04:04.966968 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:04:05.067341 kubelet[2396]: E0819 08:04:05.067201 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:04:05.168310 kubelet[2396]: E0819 08:04:05.168269 2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:04:05.352354 kubelet[2396]: I0819 08:04:05.352248 2396 apiserver.go:52] "Watching apiserver" Aug 19 08:04:05.365794 kubelet[2396]: I0819 08:04:05.365749 2396 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 19 08:04:05.444775 kubelet[2396]: E0819 08:04:05.444719 2396 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 19 
08:04:05.444775 kubelet[2396]: E0819 08:04:05.444724 2396 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 19 08:04:05.445276 kubelet[2396]: E0819 08:04:05.444931 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:05.445276 kubelet[2396]: E0819 08:04:05.444938 2396 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:06.981025 systemd[1]: Reload requested from client PID 2677 ('systemctl') (unit session-9.scope)... Aug 19 08:04:06.981047 systemd[1]: Reloading... Aug 19 08:04:07.061935 zram_generator::config[2723]: No configuration found. Aug 19 08:04:07.313954 systemd[1]: Reloading finished in 332 ms. Aug 19 08:04:07.350579 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:04:07.377619 systemd[1]: kubelet.service: Deactivated successfully. Aug 19 08:04:07.378015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:04:07.378083 systemd[1]: kubelet.service: Consumed 866ms CPU time, 130.3M memory peak. Aug 19 08:04:07.380291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:04:07.623164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:04:07.636226 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 08:04:07.691108 kubelet[2765]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:04:07.691108 kubelet[2765]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 19 08:04:07.691108 kubelet[2765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:04:07.691741 kubelet[2765]: I0819 08:04:07.691149 2765 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:04:07.698594 kubelet[2765]: I0819 08:04:07.698513 2765 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 19 08:04:07.698594 kubelet[2765]: I0819 08:04:07.698561 2765 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:04:07.699070 kubelet[2765]: I0819 08:04:07.699039 2765 server.go:934] "Client rotation is on, will bootstrap in background" Aug 19 08:04:07.700514 kubelet[2765]: I0819 08:04:07.700483 2765 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 19 08:04:07.703547 kubelet[2765]: I0819 08:04:07.703509 2765 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:04:07.805800 kubelet[2765]: I0819 08:04:07.805755 2765 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:04:07.811523 kubelet[2765]: I0819 08:04:07.811469 2765 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 19 08:04:07.811757 kubelet[2765]: I0819 08:04:07.811626 2765 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 19 08:04:07.811822 kubelet[2765]: I0819 08:04:07.811787 2765 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:04:07.812009 kubelet[2765]: I0819 08:04:07.811816 2765 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:04:07.812146 kubelet[2765]: I0819 08:04:07.812016 2765 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 08:04:07.812146 kubelet[2765]: I0819 08:04:07.812026 2765 container_manager_linux.go:300] "Creating device plugin manager" Aug 19 08:04:07.812146 kubelet[2765]: I0819 08:04:07.812053 2765 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:04:07.812236 kubelet[2765]: I0819 08:04:07.812190 2765 kubelet.go:408] "Attempting to sync node with API server" Aug 19 08:04:07.812236 kubelet[2765]: I0819 08:04:07.812203 2765 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:04:07.812236 kubelet[2765]: I0819 08:04:07.812235 2765 kubelet.go:314] "Adding apiserver pod source" Aug 19 08:04:07.812327 kubelet[2765]: I0819 08:04:07.812248 2765 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:04:07.813008 kubelet[2765]: I0819 08:04:07.812980 2765 kuberuntime_manager.go:262] "Container runtime initialized" 
containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:04:07.813460 kubelet[2765]: I0819 08:04:07.813423 2765 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 08:04:07.814043 kubelet[2765]: I0819 08:04:07.814019 2765 server.go:1274] "Started kubelet" Aug 19 08:04:07.814999 kubelet[2765]: I0819 08:04:07.814945 2765 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 08:04:07.815358 kubelet[2765]: I0819 08:04:07.815330 2765 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:04:07.815958 kubelet[2765]: I0819 08:04:07.815836 2765 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:04:07.822755 kubelet[2765]: I0819 08:04:07.822697 2765 server.go:449] "Adding debug handlers to kubelet server" Aug 19 08:04:07.824910 kubelet[2765]: I0819 08:04:07.817627 2765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:04:07.825968 kubelet[2765]: I0819 08:04:07.817770 2765 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:04:07.826562 kubelet[2765]: I0819 08:04:07.826526 2765 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 19 08:04:07.827038 kubelet[2765]: I0819 08:04:07.827003 2765 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 19 08:04:07.827391 kubelet[2765]: I0819 08:04:07.827362 2765 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:04:07.828407 kubelet[2765]: E0819 08:04:07.828348 2765 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:04:07.830354 kubelet[2765]: E0819 08:04:07.830321 2765 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:04:07.831332 kubelet[2765]: I0819 08:04:07.831298 2765 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:04:07.835493 kubelet[2765]: I0819 08:04:07.835448 2765 factory.go:221] Registration of the containerd container factory successfully Aug 19 08:04:07.835493 kubelet[2765]: I0819 08:04:07.835471 2765 factory.go:221] Registration of the systemd container factory successfully Aug 19 08:04:07.848414 kubelet[2765]: I0819 08:04:07.848356 2765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 08:04:07.851013 kubelet[2765]: I0819 08:04:07.850960 2765 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 19 08:04:07.851013 kubelet[2765]: I0819 08:04:07.851018 2765 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 19 08:04:07.851223 kubelet[2765]: I0819 08:04:07.851054 2765 kubelet.go:2321] "Starting kubelet main sync loop" Aug 19 08:04:07.851223 kubelet[2765]: E0819 08:04:07.851132 2765 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:04:07.883215 kubelet[2765]: I0819 08:04:07.882845 2765 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 19 08:04:07.883215 kubelet[2765]: I0819 08:04:07.882868 2765 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 19 08:04:07.883215 kubelet[2765]: I0819 08:04:07.882917 2765 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:04:07.883215 kubelet[2765]: I0819 08:04:07.883067 2765 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 19 08:04:07.883215 kubelet[2765]: I0819 08:04:07.883077 2765 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 19 08:04:07.883215 kubelet[2765]: I0819 08:04:07.883095 2765 policy_none.go:49] "None policy: Start" Aug 19 08:04:07.883824 kubelet[2765]: I0819 08:04:07.883791 2765 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 19 08:04:07.883824 kubelet[2765]: I0819 08:04:07.883822 2765 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:04:07.884008 kubelet[2765]: I0819 08:04:07.883977 2765 state_mem.go:75] "Updated machine memory state" Aug 19 08:04:07.889558 kubelet[2765]: I0819 08:04:07.889521 2765 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 08:04:07.889790 kubelet[2765]: I0819 08:04:07.889767 2765 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:04:07.889846 kubelet[2765]: I0819 08:04:07.889789 2765 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:04:07.890080 kubelet[2765]: I0819 08:04:07.890038 2765 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:04:07.978995 sudo[2800]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 19 08:04:07.979392 sudo[2800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 19 08:04:07.998567 kubelet[2765]: I0819 08:04:07.998502 2765 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 08:04:08.007417 kubelet[2765]: I0819 08:04:08.007351 2765 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 19 08:04:08.007609 kubelet[2765]: I0819 08:04:08.007488 2765 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 19 08:04:08.029420 kubelet[2765]: I0819 08:04:08.028477 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:04:08.029420 kubelet[2765]: I0819 08:04:08.028645 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:04:08.029420 kubelet[2765]: I0819 08:04:08.028682 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Aug 19 08:04:08.029420 kubelet[2765]: I0819 08:04:08.028791 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9da9049669923e4556cb3e8ee1c2f8ab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9da9049669923e4556cb3e8ee1c2f8ab\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:04:08.029420 kubelet[2765]: I0819 08:04:08.028925 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9da9049669923e4556cb3e8ee1c2f8ab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9da9049669923e4556cb3e8ee1c2f8ab\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:04:08.029721 kubelet[2765]: I0819 08:04:08.029039 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:04:08.029721 kubelet[2765]: I0819 08:04:08.029182 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:04:08.029721 kubelet[2765]: I0819 08:04:08.029276 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9da9049669923e4556cb3e8ee1c2f8ab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9da9049669923e4556cb3e8ee1c2f8ab\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:04:08.029721 kubelet[2765]: I0819 08:04:08.029303 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:04:08.259977 kubelet[2765]: E0819 08:04:08.259784 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:08.264809 kubelet[2765]: E0819 08:04:08.264761 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:08.264989 kubelet[2765]: E0819 08:04:08.264859 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:08.420409 sudo[2800]: pam_unix(sudo:session): session closed for user root Aug 19 08:04:08.813374 kubelet[2765]: I0819 08:04:08.813295 2765 apiserver.go:52] "Watching apiserver" Aug 19 08:04:08.827500 kubelet[2765]: I0819 08:04:08.827454 2765 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 19 08:04:08.980256 kubelet[2765]: E0819 08:04:08.980185 2765 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 19 08:04:08.982074 kubelet[2765]: E0819 08:04:08.980431 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:08.982074 kubelet[2765]: E0819 08:04:08.981730 2765 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 19 08:04:08.982074 kubelet[2765]: E0819 08:04:08.981944 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:08.982937 kubelet[2765]: E0819 08:04:08.982744 2765 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 19 08:04:08.983181 kubelet[2765]: E0819 08:04:08.983132 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:09.065757 kubelet[2765]: I0819 08:04:09.064507 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.064474472 podStartE2EDuration="2.064474472s" podCreationTimestamp="2025-08-19 08:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:04:08.981356143 +0000 UTC m=+1.340507171" watchObservedRunningTime="2025-08-19 08:04:09.064474472 +0000 UTC m=+1.423625490" Aug 19 08:04:09.065757 kubelet[2765]: I0819 08:04:09.064686 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.064679622 podStartE2EDuration="2.064679622s" podCreationTimestamp="2025-08-19 08:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:04:09.064664524 +0000 UTC m=+1.423815542" watchObservedRunningTime="2025-08-19 08:04:09.064679622 +0000 UTC m=+1.423830640" Aug 19 08:04:09.081944 kubelet[2765]: I0819 08:04:09.081800 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.081777327 podStartE2EDuration="2.081777327s" podCreationTimestamp="2025-08-19 08:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:04:09.081597625 +0000 UTC m=+1.440748643" watchObservedRunningTime="2025-08-19 08:04:09.081777327 +0000 UTC m=+1.440928345" Aug 19 08:04:09.868322 kubelet[2765]: E0819 08:04:09.867963 2765 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:09.868322 kubelet[2765]: E0819 08:04:09.868027 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:09.868759 kubelet[2765]: E0819 08:04:09.868383 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:10.414696 sudo[1821]: pam_unix(sudo:session): session closed for user root Aug 19 08:04:10.416472 sshd[1820]: Connection closed by 10.0.0.1 port 50750 Aug 19 08:04:10.417106 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Aug 19 08:04:10.422320 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:50750.service: Deactivated successfully. Aug 19 08:04:10.424989 systemd[1]: session-9.scope: Deactivated successfully. Aug 19 08:04:10.425289 systemd[1]: session-9.scope: Consumed 5.756s CPU time, 260.1M memory peak. Aug 19 08:04:10.426871 systemd-logind[1568]: Session 9 logged out. Waiting for processes to exit. Aug 19 08:04:10.428778 systemd-logind[1568]: Removed session 9. Aug 19 08:04:11.858532 update_engine[1574]: I20250819 08:04:11.858423 1574 update_attempter.cc:509] Updating boot flags... Aug 19 08:04:12.145694 kubelet[2765]: I0819 08:04:12.145167 2765 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 19 08:04:12.146371 containerd[1591]: time="2025-08-19T08:04:12.145680975Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 19 08:04:12.146693 kubelet[2765]: I0819 08:04:12.146573 2765 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 19 08:04:12.859944 systemd[1]: Created slice kubepods-besteffort-pod141f88c6_2410_4342_a25d_de72ef076447.slice - libcontainer container kubepods-besteffort-pod141f88c6_2410_4342_a25d_de72ef076447.slice. 
Aug 19 08:04:12.864031 kubelet[2765]: I0819 08:04:12.860708 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/141f88c6-2410-4342-a25d-de72ef076447-xtables-lock\") pod \"kube-proxy-k5fpl\" (UID: \"141f88c6-2410-4342-a25d-de72ef076447\") " pod="kube-system/kube-proxy-k5fpl" Aug 19 08:04:12.864031 kubelet[2765]: I0819 08:04:12.860743 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-942nw\" (UniqueName: \"kubernetes.io/projected/141f88c6-2410-4342-a25d-de72ef076447-kube-api-access-942nw\") pod \"kube-proxy-k5fpl\" (UID: \"141f88c6-2410-4342-a25d-de72ef076447\") " pod="kube-system/kube-proxy-k5fpl" Aug 19 08:04:12.864031 kubelet[2765]: I0819 08:04:12.860762 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/141f88c6-2410-4342-a25d-de72ef076447-kube-proxy\") pod \"kube-proxy-k5fpl\" (UID: \"141f88c6-2410-4342-a25d-de72ef076447\") " pod="kube-system/kube-proxy-k5fpl" Aug 19 08:04:12.864031 kubelet[2765]: I0819 08:04:12.860778 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/141f88c6-2410-4342-a25d-de72ef076447-lib-modules\") pod \"kube-proxy-k5fpl\" (UID: \"141f88c6-2410-4342-a25d-de72ef076447\") " pod="kube-system/kube-proxy-k5fpl" Aug 19 08:04:12.881376 systemd[1]: Created slice kubepods-burstable-podae3fb5fb_db74_4f6d_a7e4_7cb428729cab.slice - libcontainer container kubepods-burstable-podae3fb5fb_db74_4f6d_a7e4_7cb428729cab.slice. Aug 19 08:04:12.961548 kubelet[2765]: I0819 08:04:12.961450 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-etc-cni-netd\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.961761 kubelet[2765]: I0819 08:04:12.961531 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-xtables-lock\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.961796 kubelet[2765]: I0819 08:04:12.961731 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px7qx\" (UniqueName: \"kubernetes.io/projected/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-kube-api-access-px7qx\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.961829 kubelet[2765]: I0819 08:04:12.961804 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-host-proc-sys-kernel\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.961878 kubelet[2765]: I0819 08:04:12.961839 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-bpf-maps\") pod \"cilium-5rnwh\" (UID: 
\"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.961878 kubelet[2765]: I0819 08:04:12.961865 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-hostproc\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.961976 kubelet[2765]: I0819 08:04:12.961917 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-lib-modules\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.961976 kubelet[2765]: I0819 08:04:12.961944 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-clustermesh-secrets\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.962030 kubelet[2765]: I0819 08:04:12.961985 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cilium-config-path\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.962030 kubelet[2765]: I0819 08:04:12.962010 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-host-proc-sys-net\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.962091 kubelet[2765]: I0819 08:04:12.962032 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-hubble-tls\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.962185 kubelet[2765]: I0819 08:04:12.962152 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cilium-cgroup\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.962525 kubelet[2765]: I0819 08:04:12.962357 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cni-path\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:12.962525 kubelet[2765]: I0819 08:04:12.962445 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cilium-run\") pod \"cilium-5rnwh\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " pod="kube-system/cilium-5rnwh" Aug 19 08:04:13.171457 kubelet[2765]: E0819 08:04:13.170729 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:13.171880 containerd[1591]: time="2025-08-19T08:04:13.171792244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5fpl,Uid:141f88c6-2410-4342-a25d-de72ef076447,Namespace:kube-system,Attempt:0,}" Aug 19 08:04:13.487683 kubelet[2765]: E0819 08:04:13.487526 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:13.488323 containerd[1591]: time="2025-08-19T08:04:13.488273845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5rnwh,Uid:ae3fb5fb-db74-4f6d-a7e4-7cb428729cab,Namespace:kube-system,Attempt:0,}" Aug 19 08:04:13.636596 systemd[1]: Created slice kubepods-besteffort-pod94ee9dfa_db75_44eb_8a3a_8c734a14a7ee.slice - libcontainer container kubepods-besteffort-pod94ee9dfa_db75_44eb_8a3a_8c734a14a7ee.slice. Aug 19 08:04:13.642216 containerd[1591]: time="2025-08-19T08:04:13.639873266Z" level=info msg="connecting to shim 09876c42f6233102d06dd764ccd495bafc34abf81169467c0a80ea2cf6f2272c" address="unix:///run/containerd/s/8b8772850d021dcf3b773504bec1ed246478e51938516f98570926ba2b97e30f" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:04:13.646760 containerd[1591]: time="2025-08-19T08:04:13.646698689Z" level=info msg="connecting to shim 88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd" address="unix:///run/containerd/s/3ac308c548af414b8035b0174a46333ca0a6cf21165a819ef1b90a310dd73405" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:04:13.667916 kubelet[2765]: I0819 08:04:13.667856 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94ee9dfa-db75-44eb-8a3a-8c734a14a7ee-cilium-config-path\") pod \"cilium-operator-5d85765b45-wwpnv\" (UID: \"94ee9dfa-db75-44eb-8a3a-8c734a14a7ee\") " pod="kube-system/cilium-operator-5d85765b45-wwpnv" Aug 19 08:04:13.668056 kubelet[2765]: I0819 08:04:13.668034 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6r2q\" (UniqueName: \"kubernetes.io/projected/94ee9dfa-db75-44eb-8a3a-8c734a14a7ee-kube-api-access-x6r2q\") pod \"cilium-operator-5d85765b45-wwpnv\" (UID: \"94ee9dfa-db75-44eb-8a3a-8c734a14a7ee\") " pod="kube-system/cilium-operator-5d85765b45-wwpnv" Aug 19 08:04:13.673312 systemd[1]: Started cri-containerd-09876c42f6233102d06dd764ccd495bafc34abf81169467c0a80ea2cf6f2272c.scope - libcontainer container 09876c42f6233102d06dd764ccd495bafc34abf81169467c0a80ea2cf6f2272c. Aug 19 08:04:13.676953 systemd[1]: Started cri-containerd-88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd.scope - libcontainer container 88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd. 
Aug 19 08:04:13.711794 containerd[1591]: time="2025-08-19T08:04:13.711738441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5fpl,Uid:141f88c6-2410-4342-a25d-de72ef076447,Namespace:kube-system,Attempt:0,} returns sandbox id \"09876c42f6233102d06dd764ccd495bafc34abf81169467c0a80ea2cf6f2272c\"" Aug 19 08:04:13.712671 kubelet[2765]: E0819 08:04:13.712644 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:13.715057 containerd[1591]: time="2025-08-19T08:04:13.715026862Z" level=info msg="CreateContainer within sandbox \"09876c42f6233102d06dd764ccd495bafc34abf81169467c0a80ea2cf6f2272c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 19 08:04:13.723829 containerd[1591]: time="2025-08-19T08:04:13.722760787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5rnwh,Uid:ae3fb5fb-db74-4f6d-a7e4-7cb428729cab,Namespace:kube-system,Attempt:0,} returns sandbox id \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\"" Aug 19 08:04:13.724048 kubelet[2765]: E0819 08:04:13.723559 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:13.724584 containerd[1591]: time="2025-08-19T08:04:13.724546801Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 19 08:04:13.757217 containerd[1591]: time="2025-08-19T08:04:13.757078033Z" level=info msg="Container 0b1c8420fc98fe3ab57a39febdd663e4af727f4293db577ab2f4dc34256ad1c8: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:04:13.765479 containerd[1591]: time="2025-08-19T08:04:13.765439097Z" level=info msg="CreateContainer within sandbox \"09876c42f6233102d06dd764ccd495bafc34abf81169467c0a80ea2cf6f2272c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b1c8420fc98fe3ab57a39febdd663e4af727f4293db577ab2f4dc34256ad1c8\"" Aug 19 08:04:13.766148 containerd[1591]: time="2025-08-19T08:04:13.765992485Z" level=info msg="StartContainer for \"0b1c8420fc98fe3ab57a39febdd663e4af727f4293db577ab2f4dc34256ad1c8\"" Aug 19 08:04:13.767716 containerd[1591]: time="2025-08-19T08:04:13.767685944Z" level=info msg="connecting to shim 0b1c8420fc98fe3ab57a39febdd663e4af727f4293db577ab2f4dc34256ad1c8" address="unix:///run/containerd/s/8b8772850d021dcf3b773504bec1ed246478e51938516f98570926ba2b97e30f" protocol=ttrpc version=3 Aug 19 08:04:13.795221 systemd[1]: Started cri-containerd-0b1c8420fc98fe3ab57a39febdd663e4af727f4293db577ab2f4dc34256ad1c8.scope - libcontainer container 0b1c8420fc98fe3ab57a39febdd663e4af727f4293db577ab2f4dc34256ad1c8. 
Aug 19 08:04:13.841942 containerd[1591]: time="2025-08-19T08:04:13.841863952Z" level=info msg="StartContainer for \"0b1c8420fc98fe3ab57a39febdd663e4af727f4293db577ab2f4dc34256ad1c8\" returns successfully" Aug 19 08:04:13.882918 kubelet[2765]: E0819 08:04:13.882818 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:13.893856 kubelet[2765]: I0819 08:04:13.893760 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k5fpl" podStartSLOduration=1.8937366839999998 podStartE2EDuration="1.893736684s" podCreationTimestamp="2025-08-19 08:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:04:13.893711957 +0000 UTC m=+6.252862975" watchObservedRunningTime="2025-08-19 08:04:13.893736684 +0000 UTC m=+6.252887692" Aug 19 08:04:13.948004 kubelet[2765]: E0819 08:04:13.947956 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:13.948735 containerd[1591]: time="2025-08-19T08:04:13.948659345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wwpnv,Uid:94ee9dfa-db75-44eb-8a3a-8c734a14a7ee,Namespace:kube-system,Attempt:0,}" Aug 19 08:04:13.975282 containerd[1591]: time="2025-08-19T08:04:13.975143118Z" level=info msg="connecting to shim 9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0" address="unix:///run/containerd/s/c9ff742d5ef019ad9f3eaadf1b5f4e06aeebf0f5ba5bcdda17ee36ffccaaf463" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:04:14.004417 systemd[1]: Started cri-containerd-9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0.scope - libcontainer container 9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0. 
Aug 19 08:04:14.214708 containerd[1591]: time="2025-08-19T08:04:14.214640510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wwpnv,Uid:94ee9dfa-db75-44eb-8a3a-8c734a14a7ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0\"" Aug 19 08:04:14.215836 kubelet[2765]: E0819 08:04:14.215788 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:14.435266 kubelet[2765]: E0819 08:04:14.435211 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:14.886707 kubelet[2765]: E0819 08:04:14.886670 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:16.975627 kubelet[2765]: E0819 08:04:16.975569 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:17.892628 kubelet[2765]: E0819 08:04:17.892585 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:18.774465 kubelet[2765]: E0819 08:04:18.774416 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:18.798669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2455632100.mount: Deactivated successfully. 
Aug 19 08:04:21.918257 containerd[1591]: time="2025-08-19T08:04:21.918164524Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:04:21.919145 containerd[1591]: time="2025-08-19T08:04:21.919099820Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 19 08:04:21.920440 containerd[1591]: time="2025-08-19T08:04:21.920385055Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:04:21.922181 containerd[1591]: time="2025-08-19T08:04:21.922129226Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.197529945s" Aug 19 08:04:21.922181 containerd[1591]: time="2025-08-19T08:04:21.922174311Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 19 08:04:21.923787 containerd[1591]: time="2025-08-19T08:04:21.923741379Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 19 08:04:21.926062 containerd[1591]: time="2025-08-19T08:04:21.925974582Z" level=info msg="CreateContainer within sandbox \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 19 08:04:21.937361 containerd[1591]: time="2025-08-19T08:04:21.937295287Z" level=info msg="Container bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:04:21.945333 containerd[1591]: time="2025-08-19T08:04:21.945276388Z" level=info msg="CreateContainer within sandbox \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d\"" Aug 19 08:04:21.945998 containerd[1591]: time="2025-08-19T08:04:21.945926946Z" level=info msg="StartContainer for \"bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d\"" Aug 19 08:04:21.947183 containerd[1591]: time="2025-08-19T08:04:21.947105730Z" level=info msg="connecting to shim bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d" address="unix:///run/containerd/s/3ac308c548af414b8035b0174a46333ca0a6cf21165a819ef1b90a310dd73405" protocol=ttrpc version=3 Aug 19 08:04:22.012195 systemd[1]: Started cri-containerd-bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d.scope - libcontainer container bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d. 
Aug 19 08:04:22.052466 containerd[1591]: time="2025-08-19T08:04:22.052414878Z" level=info msg="StartContainer for \"bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d\" returns successfully" Aug 19 08:04:22.064155 systemd[1]: cri-containerd-bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d.scope: Deactivated successfully. Aug 19 08:04:22.067414 containerd[1591]: time="2025-08-19T08:04:22.067371983Z" level=info msg="received exit event container_id:\"bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d\" id:\"bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d\" pid:3200 exited_at:{seconds:1755590662 nanos:66755390}" Aug 19 08:04:22.067518 containerd[1591]: time="2025-08-19T08:04:22.067476510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d\" id:\"bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d\" pid:3200 exited_at:{seconds:1755590662 nanos:66755390}" Aug 19 08:04:22.090656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d-rootfs.mount: Deactivated successfully. Aug 19 08:04:22.907502 kubelet[2765]: E0819 08:04:22.907453 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:22.909821 containerd[1591]: time="2025-08-19T08:04:22.909777151Z" level=info msg="CreateContainer within sandbox \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 19 08:04:22.922958 containerd[1591]: time="2025-08-19T08:04:22.922868487Z" level=info msg="Container fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:04:22.932445 containerd[1591]: time="2025-08-19T08:04:22.932380140Z" level=info msg="CreateContainer within sandbox \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67\"" Aug 19 08:04:22.933025 containerd[1591]: time="2025-08-19T08:04:22.932962859Z" level=info msg="StartContainer for \"fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67\"" Aug 19 08:04:22.934141 containerd[1591]: time="2025-08-19T08:04:22.934112960Z" level=info msg="connecting to shim fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67" address="unix:///run/containerd/s/3ac308c548af414b8035b0174a46333ca0a6cf21165a819ef1b90a310dd73405" protocol=ttrpc version=3 Aug 19 08:04:22.958133 systemd[1]: Started cri-containerd-fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67.scope - libcontainer container fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67. Aug 19 08:04:22.995348 containerd[1591]: time="2025-08-19T08:04:22.995289835Z" level=info msg="StartContainer for \"fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67\" returns successfully" Aug 19 08:04:23.011712 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 08:04:23.012394 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:04:23.012614 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Aug 19 08:04:23.015538 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:04:23.017164 containerd[1591]: time="2025-08-19T08:04:23.017109967Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67\" id:\"fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67\" pid:3244 exited_at:{seconds:1755590663 nanos:16635051}" Aug 19 08:04:23.017411 containerd[1591]: time="2025-08-19T08:04:23.017337014Z" level=info msg="received exit event container_id:\"fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67\" id:\"fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67\" pid:3244 exited_at:{seconds:1755590663 nanos:16635051}" Aug 19 08:04:23.018652 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 19 08:04:23.019257 systemd[1]: cri-containerd-fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67.scope: Deactivated successfully. Aug 19 08:04:23.042640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67-rootfs.mount: Deactivated successfully. Aug 19 08:04:23.050324 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:04:23.912632 kubelet[2765]: E0819 08:04:23.912579 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:23.914941 containerd[1591]: time="2025-08-19T08:04:23.914872351Z" level=info msg="CreateContainer within sandbox \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 19 08:04:23.938822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771217844.mount: Deactivated successfully. Aug 19 08:04:24.167714 containerd[1591]: time="2025-08-19T08:04:24.167574812Z" level=info msg="Container 58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:04:24.174270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2956387838.mount: Deactivated successfully. Aug 19 08:04:24.182497 containerd[1591]: time="2025-08-19T08:04:24.182441769Z" level=info msg="CreateContainer within sandbox \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f\"" Aug 19 08:04:24.183316 containerd[1591]: time="2025-08-19T08:04:24.183278927Z" level=info msg="StartContainer for \"58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f\"" Aug 19 08:04:24.184875 containerd[1591]: time="2025-08-19T08:04:24.184833247Z" level=info msg="connecting to shim 58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f" address="unix:///run/containerd/s/3ac308c548af414b8035b0174a46333ca0a6cf21165a819ef1b90a310dd73405" protocol=ttrpc version=3 Aug 19 08:04:24.213237 systemd[1]: Started cri-containerd-58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f.scope - libcontainer container 58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f. Aug 19 08:04:24.266688 systemd[1]: cri-containerd-58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f.scope: Deactivated successfully. 
Aug 19 08:04:24.270069 containerd[1591]: time="2025-08-19T08:04:24.269953019Z" level=info msg="StartContainer for \"58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f\" returns successfully" Aug 19 08:04:24.271432 containerd[1591]: time="2025-08-19T08:04:24.271394496Z" level=info msg="received exit event container_id:\"58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f\" id:\"58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f\" pid:3306 exited_at:{seconds:1755590664 nanos:270553822}" Aug 19 08:04:24.271606 containerd[1591]: time="2025-08-19T08:04:24.271403724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f\" id:\"58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f\" pid:3306 exited_at:{seconds:1755590664 nanos:270553822}" Aug 19 08:04:24.303020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f-rootfs.mount: Deactivated successfully. Aug 19 08:04:24.536875 containerd[1591]: time="2025-08-19T08:04:24.536649337Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:04:24.537877 containerd[1591]: time="2025-08-19T08:04:24.537835533Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 19 08:04:24.539480 containerd[1591]: time="2025-08-19T08:04:24.539416443Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:04:24.541108 containerd[1591]: time="2025-08-19T08:04:24.541066304Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.617274571s" Aug 19 08:04:24.541169 containerd[1591]: time="2025-08-19T08:04:24.541108483Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 19 08:04:24.543777 containerd[1591]: time="2025-08-19T08:04:24.543728783Z" level=info msg="CreateContainer within sandbox \"9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 19 08:04:24.555816 containerd[1591]: time="2025-08-19T08:04:24.555742811Z" level=info msg="Container ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:04:24.593037 containerd[1591]: time="2025-08-19T08:04:24.592949570Z" level=info msg="CreateContainer within sandbox \"9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\"" Aug 19 08:04:24.593677 containerd[1591]: 
time="2025-08-19T08:04:24.593630334Z" level=info msg="StartContainer for \"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\"" Aug 19 08:04:24.595164 containerd[1591]: time="2025-08-19T08:04:24.595116616Z" level=info msg="connecting to shim ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca" address="unix:///run/containerd/s/c9ff742d5ef019ad9f3eaadf1b5f4e06aeebf0f5ba5bcdda17ee36ffccaaf463" protocol=ttrpc version=3 Aug 19 08:04:24.620156 systemd[1]: Started cri-containerd-ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca.scope - libcontainer container ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca. Aug 19 08:04:24.658299 containerd[1591]: time="2025-08-19T08:04:24.658240602Z" level=info msg="StartContainer for \"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\" returns successfully" Aug 19 08:04:24.924641 kubelet[2765]: E0819 08:04:24.924582 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:24.932904 kubelet[2765]: E0819 08:04:24.932825 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:24.942786 containerd[1591]: time="2025-08-19T08:04:24.942254777Z" level=info msg="CreateContainer within sandbox \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 19 08:04:24.971209 containerd[1591]: time="2025-08-19T08:04:24.969341830Z" level=info msg="Container 43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:04:24.990925 containerd[1591]: time="2025-08-19T08:04:24.989980627Z" level=info msg="CreateContainer within sandbox \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be\"" Aug 19 08:04:24.991402 containerd[1591]: time="2025-08-19T08:04:24.991343485Z" level=info msg="StartContainer for \"43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be\"" Aug 19 08:04:24.993319 containerd[1591]: time="2025-08-19T08:04:24.993272182Z" level=info msg="connecting to shim 43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be" address="unix:///run/containerd/s/3ac308c548af414b8035b0174a46333ca0a6cf21165a819ef1b90a310dd73405" protocol=ttrpc version=3 Aug 19 08:04:24.999672 kubelet[2765]: I0819 08:04:24.999182 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-wwpnv" podStartSLOduration=1.6736534650000001 podStartE2EDuration="11.999152336s" podCreationTimestamp="2025-08-19 08:04:13 +0000 UTC" firstStartedPulling="2025-08-19 08:04:14.216470385 +0000 UTC m=+6.575621403" lastFinishedPulling="2025-08-19 08:04:24.541969256 +0000 UTC m=+16.901120274" observedRunningTime="2025-08-19 08:04:24.964253337 +0000 UTC m=+17.323404355" watchObservedRunningTime="2025-08-19 08:04:24.999152336 +0000 UTC m=+17.358303354" Aug 19 08:04:25.022071 systemd[1]: Started cri-containerd-43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be.scope - libcontainer container 43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be. 
Aug 19 08:04:25.056261 systemd[1]: cri-containerd-43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be.scope: Deactivated successfully. Aug 19 08:04:25.057347 containerd[1591]: time="2025-08-19T08:04:25.057289109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be\" id:\"43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be\" pid:3381 exited_at:{seconds:1755590665 nanos:56716309}" Aug 19 08:04:25.113496 containerd[1591]: time="2025-08-19T08:04:25.113359937Z" level=info msg="received exit event container_id:\"43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be\" id:\"43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be\" pid:3381 exited_at:{seconds:1755590665 nanos:56716309}" Aug 19 08:04:25.115714 containerd[1591]: time="2025-08-19T08:04:25.115643921Z" level=info msg="StartContainer for \"43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be\" returns successfully" Aug 19 08:04:25.137806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be-rootfs.mount: Deactivated successfully. Aug 19 08:04:25.940131 kubelet[2765]: E0819 08:04:25.940069 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:25.941067 kubelet[2765]: E0819 08:04:25.940412 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:25.943841 containerd[1591]: time="2025-08-19T08:04:25.943762417Z" level=info msg="CreateContainer within sandbox \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 19 08:04:25.966003 containerd[1591]: time="2025-08-19T08:04:25.965938158Z" level=info msg="Container 65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:04:25.977433 containerd[1591]: time="2025-08-19T08:04:25.977352299Z" level=info msg="CreateContainer within sandbox \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\"" Aug 19 08:04:25.978267 containerd[1591]: time="2025-08-19T08:04:25.978019246Z" level=info msg="StartContainer for \"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\"" Aug 19 08:04:25.979060 containerd[1591]: time="2025-08-19T08:04:25.979007869Z" level=info msg="connecting to shim 65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a" address="unix:///run/containerd/s/3ac308c548af414b8035b0174a46333ca0a6cf21165a819ef1b90a310dd73405" protocol=ttrpc version=3 Aug 19 08:04:26.015565 systemd[1]: Started cri-containerd-65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a.scope - libcontainer container 65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a. 
Aug 19 08:04:26.064295 containerd[1591]: time="2025-08-19T08:04:26.064243243Z" level=info msg="StartContainer for \"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\" returns successfully" Aug 19 08:04:26.163785 containerd[1591]: time="2025-08-19T08:04:26.163721479Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\" id:\"53d8df85243cff985d983c294d7c26c7adde9ca1931267053d2b0e0a30e02497\" pid:3453 exited_at:{seconds:1755590666 nanos:163115658}" Aug 19 08:04:26.262619 kubelet[2765]: I0819 08:04:26.262475 2765 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 19 08:04:26.355593 systemd[1]: Created slice kubepods-burstable-pod7bdcc2e0_fc11_4f8b_ad81_b401af6f5f9f.slice - libcontainer container kubepods-burstable-pod7bdcc2e0_fc11_4f8b_ad81_b401af6f5f9f.slice. Aug 19 08:04:26.358989 kubelet[2765]: I0819 08:04:26.358503 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsvzl\" (UniqueName: \"kubernetes.io/projected/7bdcc2e0-fc11-4f8b-ad81-b401af6f5f9f-kube-api-access-gsvzl\") pod \"coredns-7c65d6cfc9-mjdth\" (UID: \"7bdcc2e0-fc11-4f8b-ad81-b401af6f5f9f\") " pod="kube-system/coredns-7c65d6cfc9-mjdth" Aug 19 08:04:26.358989 kubelet[2765]: I0819 08:04:26.358549 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeee4932-f624-4718-93e5-6abf06c6d52d-config-volume\") pod \"coredns-7c65d6cfc9-jdpt7\" (UID: \"eeee4932-f624-4718-93e5-6abf06c6d52d\") " pod="kube-system/coredns-7c65d6cfc9-jdpt7" Aug 19 08:04:26.358989 kubelet[2765]: I0819 08:04:26.358580 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7bdcc2e0-fc11-4f8b-ad81-b401af6f5f9f-config-volume\") pod \"coredns-7c65d6cfc9-mjdth\" (UID: \"7bdcc2e0-fc11-4f8b-ad81-b401af6f5f9f\") " pod="kube-system/coredns-7c65d6cfc9-mjdth" Aug 19 08:04:26.358989 kubelet[2765]: I0819 08:04:26.358601 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlrfp\" (UniqueName: \"kubernetes.io/projected/eeee4932-f624-4718-93e5-6abf06c6d52d-kube-api-access-dlrfp\") pod \"coredns-7c65d6cfc9-jdpt7\" (UID: \"eeee4932-f624-4718-93e5-6abf06c6d52d\") " pod="kube-system/coredns-7c65d6cfc9-jdpt7" Aug 19 08:04:26.369570 systemd[1]: Created slice kubepods-burstable-podeeee4932_f624_4718_93e5_6abf06c6d52d.slice - libcontainer container kubepods-burstable-podeeee4932_f624_4718_93e5_6abf06c6d52d.slice. 
Aug 19 08:04:26.663271 kubelet[2765]: E0819 08:04:26.663200 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:26.672379 containerd[1591]: time="2025-08-19T08:04:26.672312090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mjdth,Uid:7bdcc2e0-fc11-4f8b-ad81-b401af6f5f9f,Namespace:kube-system,Attempt:0,}" Aug 19 08:04:26.674257 kubelet[2765]: E0819 08:04:26.674195 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:26.674911 containerd[1591]: time="2025-08-19T08:04:26.674766935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jdpt7,Uid:eeee4932-f624-4718-93e5-6abf06c6d52d,Namespace:kube-system,Attempt:0,}" Aug 19 08:04:26.957988 kubelet[2765]: E0819 08:04:26.957816 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:27.976047 kubelet[2765]: E0819 08:04:27.975995 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:28.370025 systemd-networkd[1496]: cilium_host: Link UP Aug 19 08:04:28.370243 systemd-networkd[1496]: cilium_net: Link UP Aug 19 08:04:28.370451 systemd-networkd[1496]: cilium_net: Gained carrier Aug 19 08:04:28.370659 systemd-networkd[1496]: cilium_host: Gained carrier Aug 19 08:04:28.500319 systemd-networkd[1496]: cilium_vxlan: Link UP Aug 19 08:04:28.500330 systemd-networkd[1496]: cilium_vxlan: Gained carrier Aug 19 08:04:28.728940 kernel: NET: Registered PF_ALG protocol family Aug 19 08:04:28.918124 systemd-networkd[1496]: cilium_host: Gained IPv6LL Aug 19 08:04:28.977873 kubelet[2765]: E0819 08:04:28.977822 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:29.365067 systemd-networkd[1496]: cilium_net: Gained IPv6LL Aug 19 08:04:29.420844 systemd-networkd[1496]: lxc_health: Link UP Aug 19 08:04:29.421172 systemd-networkd[1496]: lxc_health: Gained carrier Aug 19 08:04:29.508067 kubelet[2765]: I0819 08:04:29.507994 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5rnwh" podStartSLOduration=9.308696057 podStartE2EDuration="17.507969897s" podCreationTimestamp="2025-08-19 08:04:12 +0000 UTC" firstStartedPulling="2025-08-19 08:04:13.724178243 +0000 UTC m=+6.083329261" lastFinishedPulling="2025-08-19 08:04:21.923452083 +0000 UTC m=+14.282603101" observedRunningTime="2025-08-19 08:04:27.038255004 +0000 UTC m=+19.397406022" watchObservedRunningTime="2025-08-19 08:04:29.507969897 +0000 UTC m=+21.867120905" Aug 19 08:04:29.524221 systemd-networkd[1496]: lxc68dd2cebb259: Link UP Aug 19 08:04:29.528391 kernel: eth0: renamed from tmpdaa43 Aug 19 08:04:29.530981 systemd-networkd[1496]: lxc68dd2cebb259: Gained carrier Aug 19 08:04:29.979675 kubelet[2765]: E0819 08:04:29.979621 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:29.997954 systemd-networkd[1496]: 
lxcb9212dde55b6: Link UP Aug 19 08:04:30.007008 kernel: eth0: renamed from tmp03449 Aug 19 08:04:30.008752 systemd-networkd[1496]: lxcb9212dde55b6: Gained carrier Aug 19 08:04:30.136029 systemd-networkd[1496]: cilium_vxlan: Gained IPv6LL Aug 19 08:04:31.157197 systemd-networkd[1496]: lxc_health: Gained IPv6LL Aug 19 08:04:31.157757 systemd-networkd[1496]: lxc68dd2cebb259: Gained IPv6LL Aug 19 08:04:31.925188 systemd-networkd[1496]: lxcb9212dde55b6: Gained IPv6LL Aug 19 08:04:33.290964 containerd[1591]: time="2025-08-19T08:04:33.290824109Z" level=info msg="connecting to shim daa43886dae5f25581e6b740865ad643a099e57e132a1874e70fcc462bc1826a" address="unix:///run/containerd/s/52c226c3b86e75a5a230396cee5ee01ae98644642c7aa3fbc555cdccf6c44bce" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:04:33.291444 containerd[1591]: time="2025-08-19T08:04:33.291105670Z" level=info msg="connecting to shim 0344940757c0072f549741f45dd314c2de7103b82b119b7273b049bdc6040bf8" address="unix:///run/containerd/s/94c1c0275b69333ba8b08a74815b056e6ca5395d854109a1a3cae5b7211725ea" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:04:33.334069 systemd[1]: Started cri-containerd-0344940757c0072f549741f45dd314c2de7103b82b119b7273b049bdc6040bf8.scope - libcontainer container 0344940757c0072f549741f45dd314c2de7103b82b119b7273b049bdc6040bf8. Aug 19 08:04:33.337420 systemd[1]: Started cri-containerd-daa43886dae5f25581e6b740865ad643a099e57e132a1874e70fcc462bc1826a.scope - libcontainer container daa43886dae5f25581e6b740865ad643a099e57e132a1874e70fcc462bc1826a. Aug 19 08:04:33.349852 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:04:33.353701 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:04:33.468048 containerd[1591]: time="2025-08-19T08:04:33.468004323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mjdth,Uid:7bdcc2e0-fc11-4f8b-ad81-b401af6f5f9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0344940757c0072f549741f45dd314c2de7103b82b119b7273b049bdc6040bf8\"" Aug 19 08:04:33.471772 kubelet[2765]: E0819 08:04:33.471731 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:33.473596 containerd[1591]: time="2025-08-19T08:04:33.473542210Z" level=info msg="CreateContainer within sandbox \"0344940757c0072f549741f45dd314c2de7103b82b119b7273b049bdc6040bf8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:04:33.523774 containerd[1591]: time="2025-08-19T08:04:33.523708462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jdpt7,Uid:eeee4932-f624-4718-93e5-6abf06c6d52d,Namespace:kube-system,Attempt:0,} returns sandbox id \"daa43886dae5f25581e6b740865ad643a099e57e132a1874e70fcc462bc1826a\"" Aug 19 08:04:33.524441 kubelet[2765]: E0819 08:04:33.524400 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:33.526185 containerd[1591]: time="2025-08-19T08:04:33.526146497Z" level=info msg="CreateContainer within sandbox \"daa43886dae5f25581e6b740865ad643a099e57e132a1874e70fcc462bc1826a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:04:33.589259 containerd[1591]: time="2025-08-19T08:04:33.589198597Z" 
level=info msg="Container 0aa5c38518ecd3cc4b79558f5dc39d078b5d3c825a76676d5def8f4c14f26c01: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:04:33.592417 containerd[1591]: time="2025-08-19T08:04:33.592364001Z" level=info msg="Container ee559e64a8295f64f5411d7920993f0b0be860af771901f7bc0e73851e8a4f80: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:04:33.598397 containerd[1591]: time="2025-08-19T08:04:33.598340182Z" level=info msg="CreateContainer within sandbox \"0344940757c0072f549741f45dd314c2de7103b82b119b7273b049bdc6040bf8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0aa5c38518ecd3cc4b79558f5dc39d078b5d3c825a76676d5def8f4c14f26c01\"" Aug 19 08:04:33.598989 containerd[1591]: time="2025-08-19T08:04:33.598952915Z" level=info msg="StartContainer for \"0aa5c38518ecd3cc4b79558f5dc39d078b5d3c825a76676d5def8f4c14f26c01\"" Aug 19 08:04:33.600062 containerd[1591]: time="2025-08-19T08:04:33.600017527Z" level=info msg="connecting to shim 0aa5c38518ecd3cc4b79558f5dc39d078b5d3c825a76676d5def8f4c14f26c01" address="unix:///run/containerd/s/94c1c0275b69333ba8b08a74815b056e6ca5395d854109a1a3cae5b7211725ea" protocol=ttrpc version=3 Aug 19 08:04:33.601704 containerd[1591]: time="2025-08-19T08:04:33.601676087Z" level=info msg="CreateContainer within sandbox \"daa43886dae5f25581e6b740865ad643a099e57e132a1874e70fcc462bc1826a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee559e64a8295f64f5411d7920993f0b0be860af771901f7bc0e73851e8a4f80\"" Aug 19 08:04:33.602294 containerd[1591]: time="2025-08-19T08:04:33.602264814Z" level=info msg="StartContainer for \"ee559e64a8295f64f5411d7920993f0b0be860af771901f7bc0e73851e8a4f80\"" Aug 19 08:04:33.603219 containerd[1591]: time="2025-08-19T08:04:33.603184454Z" level=info msg="connecting to shim ee559e64a8295f64f5411d7920993f0b0be860af771901f7bc0e73851e8a4f80" address="unix:///run/containerd/s/52c226c3b86e75a5a230396cee5ee01ae98644642c7aa3fbc555cdccf6c44bce" protocol=ttrpc version=3 Aug 19 08:04:33.626193 systemd[1]: Started cri-containerd-0aa5c38518ecd3cc4b79558f5dc39d078b5d3c825a76676d5def8f4c14f26c01.scope - libcontainer container 0aa5c38518ecd3cc4b79558f5dc39d078b5d3c825a76676d5def8f4c14f26c01. Aug 19 08:04:33.630818 systemd[1]: Started cri-containerd-ee559e64a8295f64f5411d7920993f0b0be860af771901f7bc0e73851e8a4f80.scope - libcontainer container ee559e64a8295f64f5411d7920993f0b0be860af771901f7bc0e73851e8a4f80. 
Aug 19 08:04:33.675016 containerd[1591]: time="2025-08-19T08:04:33.674932952Z" level=info msg="StartContainer for \"0aa5c38518ecd3cc4b79558f5dc39d078b5d3c825a76676d5def8f4c14f26c01\" returns successfully" Aug 19 08:04:33.685022 containerd[1591]: time="2025-08-19T08:04:33.684975632Z" level=info msg="StartContainer for \"ee559e64a8295f64f5411d7920993f0b0be860af771901f7bc0e73851e8a4f80\" returns successfully" Aug 19 08:04:34.005686 kubelet[2765]: E0819 08:04:34.004699 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:34.006749 kubelet[2765]: E0819 08:04:34.006706 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:34.046542 kubelet[2765]: I0819 08:04:34.046438 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mjdth" podStartSLOduration=21.046411191 podStartE2EDuration="21.046411191s" podCreationTimestamp="2025-08-19 08:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:04:34.023698135 +0000 UTC m=+26.382849164" watchObservedRunningTime="2025-08-19 08:04:34.046411191 +0000 UTC m=+26.405562209" Aug 19 08:04:34.066654 kubelet[2765]: I0819 08:04:34.066526 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jdpt7" podStartSLOduration=21.066499901 podStartE2EDuration="21.066499901s" podCreationTimestamp="2025-08-19 08:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:04:34.065970897 +0000 UTC m=+26.425121925" watchObservedRunningTime="2025-08-19 08:04:34.066499901 +0000 UTC m=+26.425650919" Aug 19 08:04:35.008834 kubelet[2765]: E0819 08:04:35.008777 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:35.009394 kubelet[2765]: E0819 08:04:35.008934 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:36.010554 kubelet[2765]: E0819 08:04:36.010502 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:36.010554 kubelet[2765]: E0819 08:04:36.010566 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:39.343550 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:50104.service - OpenSSH per-connection server daemon (10.0.0.1:50104). Aug 19 08:04:39.410869 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 50104 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:04:39.413355 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:04:39.419875 systemd-logind[1568]: New session 10 of user core. Aug 19 08:04:39.427181 systemd[1]: Started session-10.scope - Session 10 of User core. 
Aug 19 08:04:39.580127 sshd[4110]: Connection closed by 10.0.0.1 port 50104 Aug 19 08:04:39.580502 sshd-session[4107]: pam_unix(sshd:session): session closed for user core Aug 19 08:04:39.586012 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:50104.service: Deactivated successfully. Aug 19 08:04:39.588671 systemd[1]: session-10.scope: Deactivated successfully. Aug 19 08:04:39.589639 systemd-logind[1568]: Session 10 logged out. Waiting for processes to exit. Aug 19 08:04:39.591261 systemd-logind[1568]: Removed session 10. Aug 19 08:04:42.828328 kubelet[2765]: I0819 08:04:42.828260 2765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 08:04:42.828919 kubelet[2765]: E0819 08:04:42.828747 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:43.025183 kubelet[2765]: E0819 08:04:43.025124 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:04:44.597391 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:50114.service - OpenSSH per-connection server daemon (10.0.0.1:50114). Aug 19 08:04:44.653222 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 50114 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:04:44.654689 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:04:44.659444 systemd-logind[1568]: New session 11 of user core. Aug 19 08:04:44.672077 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 19 08:04:44.799130 sshd[4133]: Connection closed by 10.0.0.1 port 50114 Aug 19 08:04:44.799557 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Aug 19 08:04:44.804386 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:50114.service: Deactivated successfully. Aug 19 08:04:44.806849 systemd[1]: session-11.scope: Deactivated successfully. Aug 19 08:04:44.808176 systemd-logind[1568]: Session 11 logged out. Waiting for processes to exit. Aug 19 08:04:44.809567 systemd-logind[1568]: Removed session 11. Aug 19 08:04:49.821973 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:40864.service - OpenSSH per-connection server daemon (10.0.0.1:40864). Aug 19 08:04:49.889987 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 40864 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:04:49.892221 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:04:49.897421 systemd-logind[1568]: New session 12 of user core. Aug 19 08:04:49.905095 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 19 08:04:50.037714 sshd[4150]: Connection closed by 10.0.0.1 port 40864 Aug 19 08:04:50.038139 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Aug 19 08:04:50.044920 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:40864.service: Deactivated successfully. Aug 19 08:04:50.047675 systemd[1]: session-12.scope: Deactivated successfully. Aug 19 08:04:50.049500 systemd-logind[1568]: Session 12 logged out. Waiting for processes to exit. Aug 19 08:04:50.051670 systemd-logind[1568]: Removed session 12. Aug 19 08:04:55.052223 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:40868.service - OpenSSH per-connection server daemon (10.0.0.1:40868). 
Aug 19 08:04:55.109843 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 40868 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:04:55.112005 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:04:55.116770 systemd-logind[1568]: New session 13 of user core. Aug 19 08:04:55.126056 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 19 08:04:55.276084 sshd[4167]: Connection closed by 10.0.0.1 port 40868 Aug 19 08:04:55.276613 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Aug 19 08:04:55.283227 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:40868.service: Deactivated successfully. Aug 19 08:04:55.285772 systemd[1]: session-13.scope: Deactivated successfully. Aug 19 08:04:55.286833 systemd-logind[1568]: Session 13 logged out. Waiting for processes to exit. Aug 19 08:04:55.288707 systemd-logind[1568]: Removed session 13. Aug 19 08:05:00.295936 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:38800.service - OpenSSH per-connection server daemon (10.0.0.1:38800). Aug 19 08:05:00.354030 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 38800 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:00.356416 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:00.362258 systemd-logind[1568]: New session 14 of user core. Aug 19 08:05:00.372312 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 19 08:05:00.489525 sshd[4185]: Connection closed by 10.0.0.1 port 38800 Aug 19 08:05:00.490041 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:00.499624 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:38800.service: Deactivated successfully. Aug 19 08:05:00.501629 systemd[1]: session-14.scope: Deactivated successfully. Aug 19 08:05:00.502437 systemd-logind[1568]: Session 14 logged out. Waiting for processes to exit. Aug 19 08:05:00.506002 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:38812.service - OpenSSH per-connection server daemon (10.0.0.1:38812). Aug 19 08:05:00.506796 systemd-logind[1568]: Removed session 14. Aug 19 08:05:00.566264 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 38812 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:00.568669 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:00.574765 systemd-logind[1568]: New session 15 of user core. Aug 19 08:05:00.585252 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 19 08:05:00.748978 sshd[4203]: Connection closed by 10.0.0.1 port 38812 Aug 19 08:05:00.749686 sshd-session[4200]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:00.766342 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:38812.service: Deactivated successfully. Aug 19 08:05:00.770782 systemd[1]: session-15.scope: Deactivated successfully. Aug 19 08:05:00.773570 systemd-logind[1568]: Session 15 logged out. Waiting for processes to exit. Aug 19 08:05:00.776272 systemd-logind[1568]: Removed session 15. Aug 19 08:05:00.780570 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:38828.service - OpenSSH per-connection server daemon (10.0.0.1:38828). 
Aug 19 08:05:00.830714 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 38828 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:00.832956 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:00.840109 systemd-logind[1568]: New session 16 of user core. Aug 19 08:05:00.857306 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 19 08:05:00.983807 sshd[4217]: Connection closed by 10.0.0.1 port 38828 Aug 19 08:05:00.984305 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:00.989148 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:38828.service: Deactivated successfully. Aug 19 08:05:00.991695 systemd[1]: session-16.scope: Deactivated successfully. Aug 19 08:05:00.992724 systemd-logind[1568]: Session 16 logged out. Waiting for processes to exit. Aug 19 08:05:00.994281 systemd-logind[1568]: Removed session 16. Aug 19 08:05:06.007267 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:38834.service - OpenSSH per-connection server daemon (10.0.0.1:38834). Aug 19 08:05:06.072019 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 38834 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:06.074561 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:06.079741 systemd-logind[1568]: New session 17 of user core. Aug 19 08:05:06.089107 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 19 08:05:06.205856 sshd[4234]: Connection closed by 10.0.0.1 port 38834 Aug 19 08:05:06.206328 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:06.211952 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:38834.service: Deactivated successfully. Aug 19 08:05:06.214147 systemd[1]: session-17.scope: Deactivated successfully. Aug 19 08:05:06.215031 systemd-logind[1568]: Session 17 logged out. Waiting for processes to exit. Aug 19 08:05:06.216577 systemd-logind[1568]: Removed session 17. Aug 19 08:05:11.226363 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:45876.service - OpenSSH per-connection server daemon (10.0.0.1:45876). Aug 19 08:05:11.293520 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 45876 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:11.294822 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:11.299256 systemd-logind[1568]: New session 18 of user core. Aug 19 08:05:11.308042 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 19 08:05:11.436074 sshd[4253]: Connection closed by 10.0.0.1 port 45876 Aug 19 08:05:11.436497 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:11.441972 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:45876.service: Deactivated successfully. Aug 19 08:05:11.444524 systemd[1]: session-18.scope: Deactivated successfully. Aug 19 08:05:11.445380 systemd-logind[1568]: Session 18 logged out. Waiting for processes to exit. Aug 19 08:05:11.446475 systemd-logind[1568]: Removed session 18. Aug 19 08:05:14.853112 kubelet[2765]: E0819 08:05:14.853028 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:16.449912 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:45884.service - OpenSSH per-connection server daemon (10.0.0.1:45884). 
Aug 19 08:05:16.509089 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 45884 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:16.511279 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:16.518252 systemd-logind[1568]: New session 19 of user core. Aug 19 08:05:16.526155 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 19 08:05:16.653252 sshd[4271]: Connection closed by 10.0.0.1 port 45884 Aug 19 08:05:16.653542 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:16.668053 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:45884.service: Deactivated successfully. Aug 19 08:05:16.671171 systemd[1]: session-19.scope: Deactivated successfully. Aug 19 08:05:16.672181 systemd-logind[1568]: Session 19 logged out. Waiting for processes to exit. Aug 19 08:05:16.676039 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:45888.service - OpenSSH per-connection server daemon (10.0.0.1:45888). Aug 19 08:05:16.676756 systemd-logind[1568]: Removed session 19. Aug 19 08:05:16.738999 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 45888 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:16.740828 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:16.748168 systemd-logind[1568]: New session 20 of user core. Aug 19 08:05:16.759293 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 19 08:05:17.252191 sshd[4287]: Connection closed by 10.0.0.1 port 45888 Aug 19 08:05:17.252711 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:17.266714 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:45888.service: Deactivated successfully. Aug 19 08:05:17.269126 systemd[1]: session-20.scope: Deactivated successfully. Aug 19 08:05:17.270156 systemd-logind[1568]: Session 20 logged out. Waiting for processes to exit. Aug 19 08:05:17.273713 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:45890.service - OpenSSH per-connection server daemon (10.0.0.1:45890). Aug 19 08:05:17.274705 systemd-logind[1568]: Removed session 20. Aug 19 08:05:17.344265 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 45890 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:17.345803 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:17.351468 systemd-logind[1568]: New session 21 of user core. Aug 19 08:05:17.366088 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 19 08:05:18.681479 sshd[4301]: Connection closed by 10.0.0.1 port 45890 Aug 19 08:05:18.682058 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:18.692166 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:45890.service: Deactivated successfully. Aug 19 08:05:18.695070 systemd[1]: session-21.scope: Deactivated successfully. Aug 19 08:05:18.696373 systemd-logind[1568]: Session 21 logged out. Waiting for processes to exit. Aug 19 08:05:18.701316 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:45906.service - OpenSSH per-connection server daemon (10.0.0.1:45906). Aug 19 08:05:18.702200 systemd-logind[1568]: Removed session 21. 
Aug 19 08:05:18.755546 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 45906 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:18.757421 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:18.762620 systemd-logind[1568]: New session 22 of user core. Aug 19 08:05:18.777050 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 19 08:05:19.171845 sshd[4323]: Connection closed by 10.0.0.1 port 45906 Aug 19 08:05:19.172283 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:19.184908 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:45906.service: Deactivated successfully. Aug 19 08:05:19.186879 systemd[1]: session-22.scope: Deactivated successfully. Aug 19 08:05:19.187741 systemd-logind[1568]: Session 22 logged out. Waiting for processes to exit. Aug 19 08:05:19.190722 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:47310.service - OpenSSH per-connection server daemon (10.0.0.1:47310). Aug 19 08:05:19.191637 systemd-logind[1568]: Removed session 22. Aug 19 08:05:19.242173 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 47310 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:19.243590 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:19.249008 systemd-logind[1568]: New session 23 of user core. Aug 19 08:05:19.254042 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 19 08:05:19.447026 sshd[4338]: Connection closed by 10.0.0.1 port 47310 Aug 19 08:05:19.447288 sshd-session[4335]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:19.451835 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:47310.service: Deactivated successfully. Aug 19 08:05:19.453936 systemd[1]: session-23.scope: Deactivated successfully. Aug 19 08:05:19.454774 systemd-logind[1568]: Session 23 logged out. Waiting for processes to exit. Aug 19 08:05:19.455908 systemd-logind[1568]: Removed session 23. Aug 19 08:05:24.468162 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:47312.service - OpenSSH per-connection server daemon (10.0.0.1:47312). Aug 19 08:05:24.526880 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 47312 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:24.528335 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:24.533672 systemd-logind[1568]: New session 24 of user core. Aug 19 08:05:24.542067 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 19 08:05:24.654954 sshd[4354]: Connection closed by 10.0.0.1 port 47312 Aug 19 08:05:24.655329 sshd-session[4351]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:24.659602 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:47312.service: Deactivated successfully. Aug 19 08:05:24.661547 systemd[1]: session-24.scope: Deactivated successfully. Aug 19 08:05:24.662339 systemd-logind[1568]: Session 24 logged out. Waiting for processes to exit. Aug 19 08:05:24.663494 systemd-logind[1568]: Removed session 24. Aug 19 08:05:29.682344 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:59732.service - OpenSSH per-connection server daemon (10.0.0.1:59732). 
Aug 19 08:05:29.738613 sshd[4370]: Accepted publickey for core from 10.0.0.1 port 59732 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:29.740249 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:29.745244 systemd-logind[1568]: New session 25 of user core. Aug 19 08:05:29.755154 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 19 08:05:29.852840 kubelet[2765]: E0819 08:05:29.852783 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:29.852840 kubelet[2765]: E0819 08:05:29.852783 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:29.942664 sshd[4373]: Connection closed by 10.0.0.1 port 59732 Aug 19 08:05:29.942935 sshd-session[4370]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:29.948189 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:59732.service: Deactivated successfully. Aug 19 08:05:29.950471 systemd[1]: session-25.scope: Deactivated successfully. Aug 19 08:05:29.951286 systemd-logind[1568]: Session 25 logged out. Waiting for processes to exit. Aug 19 08:05:29.953024 systemd-logind[1568]: Removed session 25. Aug 19 08:05:34.959519 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:59774.service - OpenSSH per-connection server daemon (10.0.0.1:59774). Aug 19 08:05:35.018869 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 59774 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:35.021157 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:35.027507 systemd-logind[1568]: New session 26 of user core. Aug 19 08:05:35.037065 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 19 08:05:35.154013 sshd[4391]: Connection closed by 10.0.0.1 port 59774 Aug 19 08:05:35.154401 sshd-session[4388]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:35.159133 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:59774.service: Deactivated successfully. Aug 19 08:05:35.161845 systemd[1]: session-26.scope: Deactivated successfully. Aug 19 08:05:35.164500 systemd-logind[1568]: Session 26 logged out. Waiting for processes to exit. Aug 19 08:05:35.165862 systemd-logind[1568]: Removed session 26. Aug 19 08:05:40.172528 systemd[1]: Started sshd@26-10.0.0.16:22-10.0.0.1:52400.service - OpenSSH per-connection server daemon (10.0.0.1:52400). Aug 19 08:05:40.227792 sshd[4405]: Accepted publickey for core from 10.0.0.1 port 52400 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:40.229290 sshd-session[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:40.234362 systemd-logind[1568]: New session 27 of user core. Aug 19 08:05:40.244028 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 19 08:05:40.370747 sshd[4408]: Connection closed by 10.0.0.1 port 52400 Aug 19 08:05:40.371339 sshd-session[4405]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:40.381226 systemd[1]: sshd@26-10.0.0.16:22-10.0.0.1:52400.service: Deactivated successfully. Aug 19 08:05:40.383479 systemd[1]: session-27.scope: Deactivated successfully. Aug 19 08:05:40.384619 systemd-logind[1568]: Session 27 logged out. Waiting for processes to exit. 
Aug 19 08:05:40.388095 systemd[1]: Started sshd@27-10.0.0.16:22-10.0.0.1:52404.service - OpenSSH per-connection server daemon (10.0.0.1:52404). Aug 19 08:05:40.389064 systemd-logind[1568]: Removed session 27. Aug 19 08:05:40.455011 sshd[4422]: Accepted publickey for core from 10.0.0.1 port 52404 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:40.456759 sshd-session[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:40.462377 systemd-logind[1568]: New session 28 of user core. Aug 19 08:05:40.470114 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 19 08:05:40.852334 kubelet[2765]: E0819 08:05:40.852274 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:41.852211 kubelet[2765]: E0819 08:05:41.851906 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:41.892118 containerd[1591]: time="2025-08-19T08:05:41.892027527Z" level=info msg="StopContainer for \"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\" with timeout 30 (s)" Aug 19 08:05:41.899915 containerd[1591]: time="2025-08-19T08:05:41.899848923Z" level=info msg="Stop container \"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\" with signal terminated" Aug 19 08:05:41.903291 containerd[1591]: time="2025-08-19T08:05:41.903207894Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 08:05:41.912194 containerd[1591]: time="2025-08-19T08:05:41.912123945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\" id:\"d6ac41e64af3c65ca8eeb06e5b8f57b700b24f86db086ee14e8b005b6bd395f2\" pid:4448 exited_at:{seconds:1755590741 nanos:911663491}" Aug 19 08:05:41.916701 systemd[1]: cri-containerd-ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca.scope: Deactivated successfully. 
Aug 19 08:05:41.918376 containerd[1591]: time="2025-08-19T08:05:41.918337510Z" level=info msg="StopContainer for \"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\" with timeout 2 (s)" Aug 19 08:05:41.919366 containerd[1591]: time="2025-08-19T08:05:41.919324552Z" level=info msg="Stop container \"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\" with signal terminated" Aug 19 08:05:41.920249 containerd[1591]: time="2025-08-19T08:05:41.920220281Z" level=info msg="received exit event container_id:\"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\" id:\"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\" pid:3349 exited_at:{seconds:1755590741 nanos:918496932}" Aug 19 08:05:41.920345 containerd[1591]: time="2025-08-19T08:05:41.920281998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\" id:\"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\" pid:3349 exited_at:{seconds:1755590741 nanos:918496932}" Aug 19 08:05:41.929835 systemd-networkd[1496]: lxc_health: Link DOWN Aug 19 08:05:41.929848 systemd-networkd[1496]: lxc_health: Lost carrier Aug 19 08:05:41.949074 systemd[1]: cri-containerd-65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a.scope: Deactivated successfully. Aug 19 08:05:41.949526 systemd[1]: cri-containerd-65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a.scope: Consumed 7.205s CPU time, 126.1M memory peak, 548K read from disk, 13.3M written to disk. Aug 19 08:05:41.950835 containerd[1591]: time="2025-08-19T08:05:41.950774604Z" level=info msg="received exit event container_id:\"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\" id:\"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\" pid:3418 exited_at:{seconds:1755590741 nanos:950467821}" Aug 19 08:05:41.951212 containerd[1591]: time="2025-08-19T08:05:41.951165595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\" id:\"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\" pid:3418 exited_at:{seconds:1755590741 nanos:950467821}" Aug 19 08:05:41.954650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca-rootfs.mount: Deactivated successfully. Aug 19 08:05:41.974728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a-rootfs.mount: Deactivated successfully. 
Aug 19 08:05:42.195860 containerd[1591]: time="2025-08-19T08:05:42.195721574Z" level=info msg="StopContainer for \"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\" returns successfully" Aug 19 08:05:42.196875 containerd[1591]: time="2025-08-19T08:05:42.196826048Z" level=info msg="StopContainer for \"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\" returns successfully" Aug 19 08:05:42.199725 containerd[1591]: time="2025-08-19T08:05:42.199692454Z" level=info msg="StopPodSandbox for \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\"" Aug 19 08:05:42.205745 containerd[1591]: time="2025-08-19T08:05:42.205700796Z" level=info msg="StopPodSandbox for \"9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0\"" Aug 19 08:05:42.207379 containerd[1591]: time="2025-08-19T08:05:42.207343942Z" level=info msg="Container to stop \"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:05:42.210570 containerd[1591]: time="2025-08-19T08:05:42.210515106Z" level=info msg="Container to stop \"58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:05:42.210570 containerd[1591]: time="2025-08-19T08:05:42.210562145Z" level=info msg="Container to stop \"43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:05:42.210570 containerd[1591]: time="2025-08-19T08:05:42.210575650Z" level=info msg="Container to stop \"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:05:42.210730 containerd[1591]: time="2025-08-19T08:05:42.210587874Z" level=info msg="Container to stop \"fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:05:42.210730 containerd[1591]: time="2025-08-19T08:05:42.210609785Z" level=info msg="Container to stop \"bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:05:42.216321 systemd[1]: cri-containerd-9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0.scope: Deactivated successfully. Aug 19 08:05:42.217781 containerd[1591]: time="2025-08-19T08:05:42.217343134Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0\" id:\"9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0\" pid:3056 exit_status:137 exited_at:{seconds:1755590742 nanos:216937694}" Aug 19 08:05:42.218023 systemd[1]: cri-containerd-88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd.scope: Deactivated successfully. Aug 19 08:05:42.243040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd-rootfs.mount: Deactivated successfully. 
Aug 19 08:05:42.251657 containerd[1591]: time="2025-08-19T08:05:42.251538236Z" level=info msg="shim disconnected" id=88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd namespace=k8s.io Aug 19 08:05:42.251657 containerd[1591]: time="2025-08-19T08:05:42.251617175Z" level=warning msg="cleaning up after shim disconnected" id=88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd namespace=k8s.io Aug 19 08:05:42.254053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0-rootfs.mount: Deactivated successfully. Aug 19 08:05:42.259637 containerd[1591]: time="2025-08-19T08:05:42.251630280Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 08:05:42.259757 containerd[1591]: time="2025-08-19T08:05:42.251880725Z" level=info msg="shim disconnected" id=9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0 namespace=k8s.io Aug 19 08:05:42.259801 containerd[1591]: time="2025-08-19T08:05:42.259766568Z" level=warning msg="cleaning up after shim disconnected" id=9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0 namespace=k8s.io Aug 19 08:05:42.259840 containerd[1591]: time="2025-08-19T08:05:42.259780885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 08:05:42.287072 containerd[1591]: time="2025-08-19T08:05:42.286984079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" id:\"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" pid:2932 exit_status:137 exited_at:{seconds:1755590742 nanos:218950542}" Aug 19 08:05:42.290103 containerd[1591]: time="2025-08-19T08:05:42.289144536Z" level=info msg="TearDown network for sandbox \"9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0\" successfully" Aug 19 08:05:42.290103 containerd[1591]: time="2025-08-19T08:05:42.289179482Z" level=info msg="StopPodSandbox for \"9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0\" returns successfully" Aug 19 08:05:42.291199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0-shm.mount: Deactivated successfully. Aug 19 08:05:42.291344 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd-shm.mount: Deactivated successfully. 
Aug 19 08:05:42.303414 containerd[1591]: time="2025-08-19T08:05:42.303362496Z" level=info msg="received exit event sandbox_id:\"9cdcb3be2ade36a22160b73397796dbbaa61df78e832aadd930ef360f35a4ad0\" exit_status:137 exited_at:{seconds:1755590742 nanos:216937694}" Aug 19 08:05:42.303621 containerd[1591]: time="2025-08-19T08:05:42.303582243Z" level=info msg="received exit event sandbox_id:\"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" exit_status:137 exited_at:{seconds:1755590742 nanos:218950542}" Aug 19 08:05:42.311716 containerd[1591]: time="2025-08-19T08:05:42.311653458Z" level=info msg="TearDown network for sandbox \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" successfully" Aug 19 08:05:42.311716 containerd[1591]: time="2025-08-19T08:05:42.311702982Z" level=info msg="StopPodSandbox for \"88ca21eb37fe527a7831b71c95e8b4eb282800691d719ec0e78cd6511f7de5dd\" returns successfully" Aug 19 08:05:42.383942 kubelet[2765]: I0819 08:05:42.382244 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-lib-modules\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.383942 kubelet[2765]: I0819 08:05:42.382332 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-xtables-lock\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.383942 kubelet[2765]: I0819 08:05:42.382390 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-clustermesh-secrets\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.383942 kubelet[2765]: I0819 08:05:42.382421 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-host-proc-sys-net\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.383942 kubelet[2765]: I0819 08:05:42.382468 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94ee9dfa-db75-44eb-8a3a-8c734a14a7ee-cilium-config-path\") pod \"94ee9dfa-db75-44eb-8a3a-8c734a14a7ee\" (UID: \"94ee9dfa-db75-44eb-8a3a-8c734a14a7ee\") " Aug 19 08:05:42.383942 kubelet[2765]: I0819 08:05:42.382406 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:05:42.385492 kubelet[2765]: I0819 08:05:42.382508 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-etc-cni-netd\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.385492 kubelet[2765]: I0819 08:05:42.382413 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:05:42.385492 kubelet[2765]: I0819 08:05:42.382471 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:05:42.385492 kubelet[2765]: I0819 08:05:42.382544 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cilium-run\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.385492 kubelet[2765]: I0819 08:05:42.382586 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:05:42.385808 kubelet[2765]: I0819 08:05:42.382635 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cilium-cgroup\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.385808 kubelet[2765]: I0819 08:05:42.382699 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-bpf-maps\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.385808 kubelet[2765]: I0819 08:05:42.382762 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px7qx\" (UniqueName: \"kubernetes.io/projected/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-kube-api-access-px7qx\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.385808 kubelet[2765]: I0819 08:05:42.382809 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cilium-config-path\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.385808 kubelet[2765]: I0819 08:05:42.382851 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-hubble-tls\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.385808 kubelet[2765]: I0819 08:05:42.382927 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-host-proc-sys-kernel\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.386239 kubelet[2765]: I0819 08:05:42.382989 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-hostproc\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.386239 kubelet[2765]: I0819 08:05:42.383039 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cni-path\") pod \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\" (UID: \"ae3fb5fb-db74-4f6d-a7e4-7cb428729cab\") " Aug 19 08:05:42.386239 kubelet[2765]: I0819 08:05:42.383090 2765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6r2q\" (UniqueName: \"kubernetes.io/projected/94ee9dfa-db75-44eb-8a3a-8c734a14a7ee-kube-api-access-x6r2q\") pod \"94ee9dfa-db75-44eb-8a3a-8c734a14a7ee\" (UID: \"94ee9dfa-db75-44eb-8a3a-8c734a14a7ee\") " Aug 19 08:05:42.386239 kubelet[2765]: I0819 08:05:42.383160 2765 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.386239 kubelet[2765]: I0819 08:05:42.383203 2765 
reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.386239 kubelet[2765]: I0819 08:05:42.383219 2765 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.386239 kubelet[2765]: I0819 08:05:42.383251 2765 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.387538 kubelet[2765]: I0819 08:05:42.387071 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:05:42.387538 kubelet[2765]: I0819 08:05:42.387108 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-hostproc" (OuterVolumeSpecName: "hostproc") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:05:42.387538 kubelet[2765]: I0819 08:05:42.387137 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cni-path" (OuterVolumeSpecName: "cni-path") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:05:42.387538 kubelet[2765]: I0819 08:05:42.387153 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:05:42.387538 kubelet[2765]: I0819 08:05:42.387168 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:05:42.388349 kubelet[2765]: I0819 08:05:42.387205 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:05:42.390640 kubelet[2765]: I0819 08:05:42.390543 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ee9dfa-db75-44eb-8a3a-8c734a14a7ee-kube-api-access-x6r2q" (OuterVolumeSpecName: "kube-api-access-x6r2q") pod "94ee9dfa-db75-44eb-8a3a-8c734a14a7ee" (UID: "94ee9dfa-db75-44eb-8a3a-8c734a14a7ee"). InnerVolumeSpecName "kube-api-access-x6r2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 19 08:05:42.391411 kubelet[2765]: I0819 08:05:42.391371 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 19 08:05:42.393448 kubelet[2765]: I0819 08:05:42.393412 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 19 08:05:42.393592 kubelet[2765]: I0819 08:05:42.393552 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94ee9dfa-db75-44eb-8a3a-8c734a14a7ee-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "94ee9dfa-db75-44eb-8a3a-8c734a14a7ee" (UID: "94ee9dfa-db75-44eb-8a3a-8c734a14a7ee"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 19 08:05:42.394087 kubelet[2765]: I0819 08:05:42.394049 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 19 08:05:42.394350 kubelet[2765]: I0819 08:05:42.394250 2765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-kube-api-access-px7qx" (OuterVolumeSpecName: "kube-api-access-px7qx") pod "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" (UID: "ae3fb5fb-db74-4f6d-a7e4-7cb428729cab"). InnerVolumeSpecName "kube-api-access-px7qx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 19 08:05:42.483674 kubelet[2765]: I0819 08:05:42.483465 2765 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.483674 kubelet[2765]: I0819 08:05:42.483511 2765 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.483674 kubelet[2765]: I0819 08:05:42.483531 2765 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.483674 kubelet[2765]: I0819 08:05:42.483543 2765 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.483674 kubelet[2765]: I0819 08:05:42.483554 2765 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.483674 kubelet[2765]: I0819 08:05:42.483563 2765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6r2q\" (UniqueName: \"kubernetes.io/projected/94ee9dfa-db75-44eb-8a3a-8c734a14a7ee-kube-api-access-x6r2q\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.483674 kubelet[2765]: I0819 08:05:42.483574 2765 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.483674 kubelet[2765]: I0819 08:05:42.483583 2765 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.484273 kubelet[2765]: I0819 08:05:42.483591 2765 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94ee9dfa-db75-44eb-8a3a-8c734a14a7ee-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.484273 kubelet[2765]: I0819 08:05:42.483608 2765 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.484273 kubelet[2765]: I0819 08:05:42.483622 2765 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.484273 kubelet[2765]: I0819 08:05:42.483632 2765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px7qx\" (UniqueName: \"kubernetes.io/projected/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab-kube-api-access-px7qx\") on node \"localhost\" DevicePath \"\"" Aug 19 08:05:42.944021 kubelet[2765]: E0819 08:05:42.943836 2765 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" Aug 19 08:05:42.956656 systemd[1]: var-lib-kubelet-pods-94ee9dfa\x2ddb75\x2d44eb\x2d8a3a\x2d8c734a14a7ee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx6r2q.mount: Deactivated successfully. Aug 19 08:05:42.956992 systemd[1]: var-lib-kubelet-pods-ae3fb5fb\x2ddb74\x2d4f6d\x2da7e4\x2d7cb428729cab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpx7qx.mount: Deactivated successfully. Aug 19 08:05:42.957240 systemd[1]: var-lib-kubelet-pods-ae3fb5fb\x2ddb74\x2d4f6d\x2da7e4\x2d7cb428729cab-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 19 08:05:42.957395 systemd[1]: var-lib-kubelet-pods-ae3fb5fb\x2ddb74\x2d4f6d\x2da7e4\x2d7cb428729cab-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 19 08:05:43.204404 kubelet[2765]: I0819 08:05:43.204153 2765 scope.go:117] "RemoveContainer" containerID="65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a" Aug 19 08:05:43.208934 containerd[1591]: time="2025-08-19T08:05:43.208607581Z" level=info msg="RemoveContainer for \"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\"" Aug 19 08:05:43.217781 containerd[1591]: time="2025-08-19T08:05:43.217734103Z" level=info msg="RemoveContainer for \"65a63f19040d114496214f0bfd4e359a590072ced9cfe01cfb2fbe090bd1484a\" returns successfully" Aug 19 08:05:43.218232 kubelet[2765]: I0819 08:05:43.218178 2765 scope.go:117] "RemoveContainer" containerID="43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be" Aug 19 08:05:43.219149 systemd[1]: Removed slice kubepods-besteffort-pod94ee9dfa_db75_44eb_8a3a_8c734a14a7ee.slice - libcontainer container kubepods-besteffort-pod94ee9dfa_db75_44eb_8a3a_8c734a14a7ee.slice. Aug 19 08:05:43.220654 containerd[1591]: time="2025-08-19T08:05:43.220331908Z" level=info msg="RemoveContainer for \"43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be\"" Aug 19 08:05:43.223583 systemd[1]: Removed slice kubepods-burstable-podae3fb5fb_db74_4f6d_a7e4_7cb428729cab.slice - libcontainer container kubepods-burstable-podae3fb5fb_db74_4f6d_a7e4_7cb428729cab.slice. Aug 19 08:05:43.224028 systemd[1]: kubepods-burstable-podae3fb5fb_db74_4f6d_a7e4_7cb428729cab.slice: Consumed 7.333s CPU time, 126.5M memory peak, 552K read from disk, 13.3M written to disk. 
Aug 19 08:05:43.228972 containerd[1591]: time="2025-08-19T08:05:43.228839247Z" level=info msg="RemoveContainer for \"43e8abf0d2a1f7760c6111c973f2428b667ef891b5569f492f04f3e1774fa5be\" returns successfully" Aug 19 08:05:43.230062 kubelet[2765]: I0819 08:05:43.229921 2765 scope.go:117] "RemoveContainer" containerID="58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f" Aug 19 08:05:43.234654 containerd[1591]: time="2025-08-19T08:05:43.234592342Z" level=info msg="RemoveContainer for \"58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f\"" Aug 19 08:05:43.251810 containerd[1591]: time="2025-08-19T08:05:43.251696759Z" level=info msg="RemoveContainer for \"58a7fb8f08fc232c8953d0ffd54f999848ffef06fb8ded7260302e9e07677c1f\" returns successfully" Aug 19 08:05:43.252121 kubelet[2765]: I0819 08:05:43.252071 2765 scope.go:117] "RemoveContainer" containerID="fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67" Aug 19 08:05:43.254912 containerd[1591]: time="2025-08-19T08:05:43.254831932Z" level=info msg="RemoveContainer for \"fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67\"" Aug 19 08:05:43.261558 containerd[1591]: time="2025-08-19T08:05:43.261483352Z" level=info msg="RemoveContainer for \"fcb928d69599228aadce012a5a70ba8dfcd42d1bc25e9655d94e65e965b66f67\" returns successfully" Aug 19 08:05:43.262014 kubelet[2765]: I0819 08:05:43.261968 2765 scope.go:117] "RemoveContainer" containerID="bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d" Aug 19 08:05:43.264808 containerd[1591]: time="2025-08-19T08:05:43.264737592Z" level=info msg="RemoveContainer for \"bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d\"" Aug 19 08:05:43.270490 containerd[1591]: time="2025-08-19T08:05:43.270402412Z" level=info msg="RemoveContainer for \"bd8312d4ac1580daad2aeda7835973db1224a6dad72e752fb643912b9f34079d\" returns successfully" Aug 19 08:05:43.270958 kubelet[2765]: I0819 08:05:43.270871 2765 scope.go:117] "RemoveContainer" containerID="ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca" Aug 19 08:05:43.273406 containerd[1591]: time="2025-08-19T08:05:43.273275097Z" level=info msg="RemoveContainer for \"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\"" Aug 19 08:05:43.278153 containerd[1591]: time="2025-08-19T08:05:43.278081439Z" level=info msg="RemoveContainer for \"ecc276045cbc0a6d6816a922edc0e36d34357f5e88917eeb839a1e5614ea21ca\" returns successfully" Aug 19 08:05:43.814460 sshd[4426]: Connection closed by 10.0.0.1 port 52404 Aug 19 08:05:43.815232 sshd-session[4422]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:43.827814 systemd[1]: sshd@27-10.0.0.16:22-10.0.0.1:52404.service: Deactivated successfully. Aug 19 08:05:43.831152 systemd[1]: session-28.scope: Deactivated successfully. Aug 19 08:05:43.832533 systemd-logind[1568]: Session 28 logged out. Waiting for processes to exit. Aug 19 08:05:43.838336 systemd[1]: Started sshd@28-10.0.0.16:22-10.0.0.1:52408.service - OpenSSH per-connection server daemon (10.0.0.1:52408). Aug 19 08:05:43.839137 systemd-logind[1568]: Removed session 28. 
Aug 19 08:05:43.852392 kubelet[2765]: E0819 08:05:43.852315 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:43.856020 kubelet[2765]: I0819 08:05:43.855943 2765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ee9dfa-db75-44eb-8a3a-8c734a14a7ee" path="/var/lib/kubelet/pods/94ee9dfa-db75-44eb-8a3a-8c734a14a7ee/volumes" Aug 19 08:05:43.856749 kubelet[2765]: I0819 08:05:43.856704 2765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" path="/var/lib/kubelet/pods/ae3fb5fb-db74-4f6d-a7e4-7cb428729cab/volumes" Aug 19 08:05:43.904145 sshd[4584]: Accepted publickey for core from 10.0.0.1 port 52408 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:43.906346 sshd-session[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:43.914168 systemd-logind[1568]: New session 29 of user core. Aug 19 08:05:43.922203 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 19 08:05:44.340119 sshd[4588]: Connection closed by 10.0.0.1 port 52408 Aug 19 08:05:44.340617 sshd-session[4584]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:44.359808 systemd[1]: sshd@28-10.0.0.16:22-10.0.0.1:52408.service: Deactivated successfully. Aug 19 08:05:44.364707 systemd[1]: session-29.scope: Deactivated successfully. Aug 19 08:05:44.367364 kubelet[2765]: E0819 08:05:44.367322 2765 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94ee9dfa-db75-44eb-8a3a-8c734a14a7ee" containerName="cilium-operator" Aug 19 08:05:44.367364 kubelet[2765]: E0819 08:05:44.367357 2765 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" containerName="clean-cilium-state" Aug 19 08:05:44.367364 kubelet[2765]: E0819 08:05:44.367366 2765 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" containerName="mount-cgroup" Aug 19 08:05:44.367522 kubelet[2765]: E0819 08:05:44.367372 2765 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" containerName="apply-sysctl-overwrites" Aug 19 08:05:44.367522 kubelet[2765]: E0819 08:05:44.367380 2765 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" containerName="mount-bpf-fs" Aug 19 08:05:44.367522 kubelet[2765]: E0819 08:05:44.367386 2765 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" containerName="cilium-agent" Aug 19 08:05:44.367522 kubelet[2765]: I0819 08:05:44.367429 2765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3fb5fb-db74-4f6d-a7e4-7cb428729cab" containerName="cilium-agent" Aug 19 08:05:44.367522 kubelet[2765]: I0819 08:05:44.367437 2765 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ee9dfa-db75-44eb-8a3a-8c734a14a7ee" containerName="cilium-operator" Aug 19 08:05:44.367717 systemd-logind[1568]: Session 29 logged out. Waiting for processes to exit. Aug 19 08:05:44.371532 systemd[1]: Started sshd@29-10.0.0.16:22-10.0.0.1:52420.service - OpenSSH per-connection server daemon (10.0.0.1:52420). Aug 19 08:05:44.374873 systemd-logind[1568]: Removed session 29. 
Aug 19 08:05:44.387235 systemd[1]: Created slice kubepods-burstable-podfd608aa4_d2ea_4c05_b696_65934dc07a19.slice - libcontainer container kubepods-burstable-podfd608aa4_d2ea_4c05_b696_65934dc07a19.slice. Aug 19 08:05:44.397724 kubelet[2765]: I0819 08:05:44.397168 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd608aa4-d2ea-4c05-b696-65934dc07a19-etc-cni-netd\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.397724 kubelet[2765]: I0819 08:05:44.397235 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd608aa4-d2ea-4c05-b696-65934dc07a19-lib-modules\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.397724 kubelet[2765]: I0819 08:05:44.397270 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd608aa4-d2ea-4c05-b696-65934dc07a19-bpf-maps\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.397724 kubelet[2765]: I0819 08:05:44.397291 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd608aa4-d2ea-4c05-b696-65934dc07a19-clustermesh-secrets\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.397724 kubelet[2765]: I0819 08:05:44.397313 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd608aa4-d2ea-4c05-b696-65934dc07a19-hubble-tls\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.397724 kubelet[2765]: I0819 08:05:44.397340 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd608aa4-d2ea-4c05-b696-65934dc07a19-xtables-lock\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.398127 kubelet[2765]: I0819 08:05:44.397367 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd608aa4-d2ea-4c05-b696-65934dc07a19-cilium-config-path\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.398127 kubelet[2765]: I0819 08:05:44.397391 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd608aa4-d2ea-4c05-b696-65934dc07a19-hostproc\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.398127 kubelet[2765]: I0819 08:05:44.397411 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fd608aa4-d2ea-4c05-b696-65934dc07a19-cilium-ipsec-secrets\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " 
pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.398127 kubelet[2765]: I0819 08:05:44.397433 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd608aa4-d2ea-4c05-b696-65934dc07a19-cilium-cgroup\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.398127 kubelet[2765]: I0819 08:05:44.397457 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd608aa4-d2ea-4c05-b696-65934dc07a19-cni-path\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.398127 kubelet[2765]: I0819 08:05:44.397484 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd608aa4-d2ea-4c05-b696-65934dc07a19-host-proc-sys-kernel\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.398312 kubelet[2765]: I0819 08:05:44.397507 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9p7r\" (UniqueName: \"kubernetes.io/projected/fd608aa4-d2ea-4c05-b696-65934dc07a19-kube-api-access-m9p7r\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.398312 kubelet[2765]: I0819 08:05:44.397532 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd608aa4-d2ea-4c05-b696-65934dc07a19-cilium-run\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.398312 kubelet[2765]: I0819 08:05:44.397556 2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd608aa4-d2ea-4c05-b696-65934dc07a19-host-proc-sys-net\") pod \"cilium-8l8pn\" (UID: \"fd608aa4-d2ea-4c05-b696-65934dc07a19\") " pod="kube-system/cilium-8l8pn" Aug 19 08:05:44.440274 sshd[4603]: Accepted publickey for core from 10.0.0.1 port 52420 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:44.442250 sshd-session[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:44.447231 systemd-logind[1568]: New session 30 of user core. Aug 19 08:05:44.454041 systemd[1]: Started session-30.scope - Session 30 of User core. Aug 19 08:05:44.509566 sshd[4606]: Connection closed by 10.0.0.1 port 52420 Aug 19 08:05:44.511646 sshd-session[4603]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:44.518773 systemd[1]: sshd@29-10.0.0.16:22-10.0.0.1:52420.service: Deactivated successfully. Aug 19 08:05:44.521183 systemd[1]: session-30.scope: Deactivated successfully. Aug 19 08:05:44.539412 systemd-logind[1568]: Session 30 logged out. Waiting for processes to exit. Aug 19 08:05:44.541767 systemd[1]: Started sshd@30-10.0.0.16:22-10.0.0.1:52426.service - OpenSSH per-connection server daemon (10.0.0.1:52426). Aug 19 08:05:44.542899 systemd-logind[1568]: Removed session 30. 
Aug 19 08:05:44.600318 sshd[4617]: Accepted publickey for core from 10.0.0.1 port 52426 ssh2: RSA SHA256:neQ5eQUE5/WKaU1NfEShYExgQq7e24sTKK6uf7QwMLQ Aug 19 08:05:44.602551 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:05:44.608274 systemd-logind[1568]: New session 31 of user core. Aug 19 08:05:44.618234 systemd[1]: Started session-31.scope - Session 31 of User core. Aug 19 08:05:44.693985 kubelet[2765]: E0819 08:05:44.693866 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:44.695314 containerd[1591]: time="2025-08-19T08:05:44.695157191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8l8pn,Uid:fd608aa4-d2ea-4c05-b696-65934dc07a19,Namespace:kube-system,Attempt:0,}" Aug 19 08:05:44.723936 containerd[1591]: time="2025-08-19T08:05:44.723798838Z" level=info msg="connecting to shim 96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179" address="unix:///run/containerd/s/4e4b97731eeaf199bba1fcf97d64f8eda936aa8b737a412a6790cd8ecf59cb9f" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:05:44.769238 systemd[1]: Started cri-containerd-96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179.scope - libcontainer container 96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179. Aug 19 08:05:44.809357 containerd[1591]: time="2025-08-19T08:05:44.809285655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8l8pn,Uid:fd608aa4-d2ea-4c05-b696-65934dc07a19,Namespace:kube-system,Attempt:0,} returns sandbox id \"96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179\"" Aug 19 08:05:44.810343 kubelet[2765]: E0819 08:05:44.810294 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:44.813400 containerd[1591]: time="2025-08-19T08:05:44.813351922Z" level=info msg="CreateContainer within sandbox \"96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 19 08:05:44.823133 containerd[1591]: time="2025-08-19T08:05:44.823063299Z" level=info msg="Container 6dce9e65ec3285e959151ddd01b29b9c213e3119fdc724b7f6e1efe2ea346877: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:05:44.832448 containerd[1591]: time="2025-08-19T08:05:44.832388394Z" level=info msg="CreateContainer within sandbox \"96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6dce9e65ec3285e959151ddd01b29b9c213e3119fdc724b7f6e1efe2ea346877\"" Aug 19 08:05:44.833262 containerd[1591]: time="2025-08-19T08:05:44.833193869Z" level=info msg="StartContainer for \"6dce9e65ec3285e959151ddd01b29b9c213e3119fdc724b7f6e1efe2ea346877\"" Aug 19 08:05:44.835002 containerd[1591]: time="2025-08-19T08:05:44.834906145Z" level=info msg="connecting to shim 6dce9e65ec3285e959151ddd01b29b9c213e3119fdc724b7f6e1efe2ea346877" address="unix:///run/containerd/s/4e4b97731eeaf199bba1fcf97d64f8eda936aa8b737a412a6790cd8ecf59cb9f" protocol=ttrpc version=3 Aug 19 08:05:44.859333 systemd[1]: Started cri-containerd-6dce9e65ec3285e959151ddd01b29b9c213e3119fdc724b7f6e1efe2ea346877.scope - libcontainer container 6dce9e65ec3285e959151ddd01b29b9c213e3119fdc724b7f6e1efe2ea346877. 
Aug 19 08:05:44.902365 containerd[1591]: time="2025-08-19T08:05:44.902307614Z" level=info msg="StartContainer for \"6dce9e65ec3285e959151ddd01b29b9c213e3119fdc724b7f6e1efe2ea346877\" returns successfully" Aug 19 08:05:44.915295 systemd[1]: cri-containerd-6dce9e65ec3285e959151ddd01b29b9c213e3119fdc724b7f6e1efe2ea346877.scope: Deactivated successfully. Aug 19 08:05:44.918321 containerd[1591]: time="2025-08-19T08:05:44.918244701Z" level=info msg="received exit event container_id:\"6dce9e65ec3285e959151ddd01b29b9c213e3119fdc724b7f6e1efe2ea346877\" id:\"6dce9e65ec3285e959151ddd01b29b9c213e3119fdc724b7f6e1efe2ea346877\" pid:4686 exited_at:{seconds:1755590744 nanos:917812151}" Aug 19 08:05:44.918491 containerd[1591]: time="2025-08-19T08:05:44.918411187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6dce9e65ec3285e959151ddd01b29b9c213e3119fdc724b7f6e1efe2ea346877\" id:\"6dce9e65ec3285e959151ddd01b29b9c213e3119fdc724b7f6e1efe2ea346877\" pid:4686 exited_at:{seconds:1755590744 nanos:917812151}" Aug 19 08:05:45.218220 kubelet[2765]: E0819 08:05:45.218039 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:45.220356 containerd[1591]: time="2025-08-19T08:05:45.220310561Z" level=info msg="CreateContainer within sandbox \"96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 19 08:05:45.230052 containerd[1591]: time="2025-08-19T08:05:45.229973301Z" level=info msg="Container e0dad48c593069e070e694ffd66cf56292bf6fca27f2df66ee8140574860ec22: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:05:45.240080 containerd[1591]: time="2025-08-19T08:05:45.240018136Z" level=info msg="CreateContainer within sandbox \"96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e0dad48c593069e070e694ffd66cf56292bf6fca27f2df66ee8140574860ec22\"" Aug 19 08:05:45.241051 containerd[1591]: time="2025-08-19T08:05:45.240984336Z" level=info msg="StartContainer for \"e0dad48c593069e070e694ffd66cf56292bf6fca27f2df66ee8140574860ec22\"" Aug 19 08:05:45.242333 containerd[1591]: time="2025-08-19T08:05:45.242302695Z" level=info msg="connecting to shim e0dad48c593069e070e694ffd66cf56292bf6fca27f2df66ee8140574860ec22" address="unix:///run/containerd/s/4e4b97731eeaf199bba1fcf97d64f8eda936aa8b737a412a6790cd8ecf59cb9f" protocol=ttrpc version=3 Aug 19 08:05:45.271243 systemd[1]: Started cri-containerd-e0dad48c593069e070e694ffd66cf56292bf6fca27f2df66ee8140574860ec22.scope - libcontainer container e0dad48c593069e070e694ffd66cf56292bf6fca27f2df66ee8140574860ec22. Aug 19 08:05:45.316593 systemd[1]: cri-containerd-e0dad48c593069e070e694ffd66cf56292bf6fca27f2df66ee8140574860ec22.scope: Deactivated successfully. 
Aug 19 08:05:45.317408 containerd[1591]: time="2025-08-19T08:05:45.317349553Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0dad48c593069e070e694ffd66cf56292bf6fca27f2df66ee8140574860ec22\" id:\"e0dad48c593069e070e694ffd66cf56292bf6fca27f2df66ee8140574860ec22\" pid:4732 exited_at:{seconds:1755590745 nanos:316791997}" Aug 19 08:05:45.365257 containerd[1591]: time="2025-08-19T08:05:45.365168486Z" level=info msg="received exit event container_id:\"e0dad48c593069e070e694ffd66cf56292bf6fca27f2df66ee8140574860ec22\" id:\"e0dad48c593069e070e694ffd66cf56292bf6fca27f2df66ee8140574860ec22\" pid:4732 exited_at:{seconds:1755590745 nanos:316791997}" Aug 19 08:05:45.378255 containerd[1591]: time="2025-08-19T08:05:45.378203888Z" level=info msg="StartContainer for \"e0dad48c593069e070e694ffd66cf56292bf6fca27f2df66ee8140574860ec22\" returns successfully" Aug 19 08:05:46.222368 kubelet[2765]: E0819 08:05:46.222322 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:46.225189 containerd[1591]: time="2025-08-19T08:05:46.225127540Z" level=info msg="CreateContainer within sandbox \"96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 19 08:05:46.250135 containerd[1591]: time="2025-08-19T08:05:46.250059264Z" level=info msg="Container 78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:05:46.260023 containerd[1591]: time="2025-08-19T08:05:46.259949221Z" level=info msg="CreateContainer within sandbox \"96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b\"" Aug 19 08:05:46.260752 containerd[1591]: time="2025-08-19T08:05:46.260673042Z" level=info msg="StartContainer for \"78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b\"" Aug 19 08:05:46.262641 containerd[1591]: time="2025-08-19T08:05:46.262596596Z" level=info msg="connecting to shim 78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b" address="unix:///run/containerd/s/4e4b97731eeaf199bba1fcf97d64f8eda936aa8b737a412a6790cd8ecf59cb9f" protocol=ttrpc version=3 Aug 19 08:05:46.291055 systemd[1]: Started cri-containerd-78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b.scope - libcontainer container 78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b. Aug 19 08:05:46.343231 containerd[1591]: time="2025-08-19T08:05:46.343177018Z" level=info msg="StartContainer for \"78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b\" returns successfully" Aug 19 08:05:46.353354 systemd[1]: cri-containerd-78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b.scope: Deactivated successfully. 
Aug 19 08:05:46.354720 containerd[1591]: time="2025-08-19T08:05:46.354683317Z" level=info msg="received exit event container_id:\"78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b\" id:\"78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b\" pid:4777 exited_at:{seconds:1755590746 nanos:354354835}" Aug 19 08:05:46.355010 containerd[1591]: time="2025-08-19T08:05:46.354987033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b\" id:\"78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b\" pid:4777 exited_at:{seconds:1755590746 nanos:354354835}" Aug 19 08:05:46.379084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78195918391729a2e52a6094d4dcc4e0839d8e5baecf5562f7e5eabbbc56290b-rootfs.mount: Deactivated successfully. Aug 19 08:05:46.852150 kubelet[2765]: E0819 08:05:46.852057 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-jdpt7" podUID="eeee4932-f624-4718-93e5-6abf06c6d52d" Aug 19 08:05:47.227237 kubelet[2765]: E0819 08:05:47.226982 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:47.229658 containerd[1591]: time="2025-08-19T08:05:47.229065247Z" level=info msg="CreateContainer within sandbox \"96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 19 08:05:47.239621 containerd[1591]: time="2025-08-19T08:05:47.239556579Z" level=info msg="Container 729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:05:47.249989 containerd[1591]: time="2025-08-19T08:05:47.249918436Z" level=info msg="CreateContainer within sandbox \"96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66\"" Aug 19 08:05:47.250601 containerd[1591]: time="2025-08-19T08:05:47.250549089Z" level=info msg="StartContainer for \"729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66\"" Aug 19 08:05:47.251531 containerd[1591]: time="2025-08-19T08:05:47.251494120Z" level=info msg="connecting to shim 729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66" address="unix:///run/containerd/s/4e4b97731eeaf199bba1fcf97d64f8eda936aa8b737a412a6790cd8ecf59cb9f" protocol=ttrpc version=3 Aug 19 08:05:47.284032 systemd[1]: Started cri-containerd-729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66.scope - libcontainer container 729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66. Aug 19 08:05:47.321110 systemd[1]: cri-containerd-729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66.scope: Deactivated successfully. 
Aug 19 08:05:47.322323 containerd[1591]: time="2025-08-19T08:05:47.321558521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66\" id:\"729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66\" pid:4818 exited_at:{seconds:1755590747 nanos:321248494}" Aug 19 08:05:47.321619 systemd[1]: cri-containerd-729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66.scope: Consumed 18ms CPU time, 4.2M memory peak, 1.3M read from disk. Aug 19 08:05:47.326996 containerd[1591]: time="2025-08-19T08:05:47.326943972Z" level=info msg="received exit event container_id:\"729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66\" id:\"729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66\" pid:4818 exited_at:{seconds:1755590747 nanos:321248494}" Aug 19 08:05:47.338715 containerd[1591]: time="2025-08-19T08:05:47.338630758Z" level=info msg="StartContainer for \"729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66\" returns successfully" Aug 19 08:05:47.354034 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-729d475cd6b7f87ebdc635545b1727b77942928a0b71613dd4876a1b82d2eb66-rootfs.mount: Deactivated successfully. Aug 19 08:05:47.945181 kubelet[2765]: E0819 08:05:47.945130 2765 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 19 08:05:48.233337 kubelet[2765]: E0819 08:05:48.233176 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:48.236355 containerd[1591]: time="2025-08-19T08:05:48.236310762Z" level=info msg="CreateContainer within sandbox \"96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 19 08:05:48.254493 containerd[1591]: time="2025-08-19T08:05:48.254422521Z" level=info msg="Container 6824e5fc8feb19fd0390683ab5dcba0d25e81847f063918d9e02592c95c2629c: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:05:48.264187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1243200724.mount: Deactivated successfully. Aug 19 08:05:48.286043 containerd[1591]: time="2025-08-19T08:05:48.285853673Z" level=info msg="CreateContainer within sandbox \"96c87d05a6cf22bf85ed8ddd957d18f8998361b8350a55a4302d70da96736179\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6824e5fc8feb19fd0390683ab5dcba0d25e81847f063918d9e02592c95c2629c\"" Aug 19 08:05:48.287071 containerd[1591]: time="2025-08-19T08:05:48.287013199Z" level=info msg="StartContainer for \"6824e5fc8feb19fd0390683ab5dcba0d25e81847f063918d9e02592c95c2629c\"" Aug 19 08:05:48.289210 containerd[1591]: time="2025-08-19T08:05:48.289150777Z" level=info msg="connecting to shim 6824e5fc8feb19fd0390683ab5dcba0d25e81847f063918d9e02592c95c2629c" address="unix:///run/containerd/s/4e4b97731eeaf199bba1fcf97d64f8eda936aa8b737a412a6790cd8ecf59cb9f" protocol=ttrpc version=3 Aug 19 08:05:48.317038 systemd[1]: Started cri-containerd-6824e5fc8feb19fd0390683ab5dcba0d25e81847f063918d9e02592c95c2629c.scope - libcontainer container 6824e5fc8feb19fd0390683ab5dcba0d25e81847f063918d9e02592c95c2629c. 
Aug 19 08:05:48.362511 containerd[1591]: time="2025-08-19T08:05:48.362422104Z" level=info msg="StartContainer for \"6824e5fc8feb19fd0390683ab5dcba0d25e81847f063918d9e02592c95c2629c\" returns successfully" Aug 19 08:05:48.447547 containerd[1591]: time="2025-08-19T08:05:48.447483129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6824e5fc8feb19fd0390683ab5dcba0d25e81847f063918d9e02592c95c2629c\" id:\"f7788c84606ed545c3267e89a32a51588f1d8a56a309c44fa3bb5cc5bb4ebfeb\" pid:4885 exited_at:{seconds:1755590748 nanos:447059867}" Aug 19 08:05:48.842939 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Aug 19 08:05:48.851454 kubelet[2765]: E0819 08:05:48.851385 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-jdpt7" podUID="eeee4932-f624-4718-93e5-6abf06c6d52d" Aug 19 08:05:49.240516 kubelet[2765]: E0819 08:05:49.240129 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:49.256076 kubelet[2765]: I0819 08:05:49.255997 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8l8pn" podStartSLOduration=5.2559765370000004 podStartE2EDuration="5.255976537s" podCreationTimestamp="2025-08-19 08:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:05:49.255475229 +0000 UTC m=+101.614626257" watchObservedRunningTime="2025-08-19 08:05:49.255976537 +0000 UTC m=+101.615127555" Aug 19 08:05:50.570669 kubelet[2765]: I0819 08:05:50.570270 2765 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-19T08:05:50Z","lastTransitionTime":"2025-08-19T08:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 19 08:05:50.694855 kubelet[2765]: E0819 08:05:50.694742 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:50.852314 kubelet[2765]: E0819 08:05:50.852135 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-jdpt7" podUID="eeee4932-f624-4718-93e5-6abf06c6d52d" Aug 19 08:05:50.984226 containerd[1591]: time="2025-08-19T08:05:50.984140453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6824e5fc8feb19fd0390683ab5dcba0d25e81847f063918d9e02592c95c2629c\" id:\"6b961650e4e75b3f293ccf87c603bdc9ad7856cbe1dcac616d1825abc34a404f\" pid:5115 exit_status:1 exited_at:{seconds:1755590750 nanos:983473290}" Aug 19 08:05:52.083921 systemd-networkd[1496]: lxc_health: Link UP Aug 19 08:05:52.086067 systemd-networkd[1496]: lxc_health: Gained carrier Aug 19 08:05:52.696274 kubelet[2765]: E0819 08:05:52.696222 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:52.851674 kubelet[2765]: E0819 08:05:52.851598 2765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-jdpt7" podUID="eeee4932-f624-4718-93e5-6abf06c6d52d" Aug 19 08:05:53.102387 containerd[1591]: time="2025-08-19T08:05:53.102329601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6824e5fc8feb19fd0390683ab5dcba0d25e81847f063918d9e02592c95c2629c\" id:\"82cbb9f965492b0ca7996650e41f04d5f56ad8a34eb4b271cb4e4564d63d1caa\" pid:5421 exited_at:{seconds:1755590753 nanos:101797254}" Aug 19 08:05:53.252236 kubelet[2765]: E0819 08:05:53.252165 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:53.717660 systemd-networkd[1496]: lxc_health: Gained IPv6LL Aug 19 08:05:54.254680 kubelet[2765]: E0819 08:05:54.254632 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:54.852033 kubelet[2765]: E0819 08:05:54.851979 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:05:55.204901 containerd[1591]: time="2025-08-19T08:05:55.204737413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6824e5fc8feb19fd0390683ab5dcba0d25e81847f063918d9e02592c95c2629c\" id:\"a69ca6661d9166062493774e800da24423a6640fc590b9b2029602db75e5843f\" pid:5453 exited_at:{seconds:1755590755 nanos:204405875}" Aug 19 08:05:57.297039 containerd[1591]: time="2025-08-19T08:05:57.296827805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6824e5fc8feb19fd0390683ab5dcba0d25e81847f063918d9e02592c95c2629c\" id:\"788831dd8a20fc685c490e3358cd2f7e5f0bd610250fb42d4dcb8309943204fe\" pid:5484 exited_at:{seconds:1755590757 nanos:296328019}" Aug 19 08:05:57.315792 sshd[4620]: Connection closed by 10.0.0.1 port 52426 Aug 19 08:05:57.316248 sshd-session[4617]: pam_unix(sshd:session): session closed for user core Aug 19 08:05:57.321472 systemd[1]: sshd@30-10.0.0.16:22-10.0.0.1:52426.service: Deactivated successfully. Aug 19 08:05:57.324057 systemd[1]: session-31.scope: Deactivated successfully. Aug 19 08:05:57.325005 systemd-logind[1568]: Session 31 logged out. Waiting for processes to exit. Aug 19 08:05:57.326659 systemd-logind[1568]: Removed session 31.