Aug 12 23:58:19.138563 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025
Aug 12 23:58:19.138603 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 12 23:58:19.138617 kernel: BIOS-provided physical RAM map:
Aug 12 23:58:19.138625 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 12 23:58:19.138660 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 12 23:58:19.138669 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 12 23:58:19.138679 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Aug 12 23:58:19.138687 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Aug 12 23:58:19.138695 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 12 23:58:19.138707 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 12 23:58:19.138715 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 12 23:58:19.138723 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 12 23:58:19.138735 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 12 23:58:19.138743 kernel: NX (Execute Disable) protection: active
Aug 12 23:58:19.138753 kernel: APIC: Static calls initialized
Aug 12 23:58:19.138769 kernel: SMBIOS 2.8 present.
Aug 12 23:58:19.138778 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Aug 12 23:58:19.138787 kernel: Hypervisor detected: KVM
Aug 12 23:58:19.138796 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 12 23:58:19.138804 kernel: kvm-clock: using sched offset of 3210135529 cycles
Aug 12 23:58:19.138813 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 12 23:58:19.138823 kernel: tsc: Detected 2794.750 MHz processor
Aug 12 23:58:19.138832 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 12 23:58:19.138842 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 12 23:58:19.138851 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Aug 12 23:58:19.138863 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 12 23:58:19.138872 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 12 23:58:19.138881 kernel: Using GB pages for direct mapping
Aug 12 23:58:19.138891 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:58:19.138900 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Aug 12 23:58:19.138910 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:19.138919 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:19.138928 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:19.138937 kernel: ACPI: FACS 0x000000009CFE0000 000040
Aug 12 23:58:19.138949 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:19.138959 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:19.138968 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:19.138977 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:58:19.138994 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Aug 12 23:58:19.139005 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Aug 12 23:58:19.139032 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Aug 12 23:58:19.139044 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Aug 12 23:58:19.139055 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Aug 12 23:58:19.139065 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Aug 12 23:58:19.139083 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Aug 12 23:58:19.139094 kernel: No NUMA configuration found
Aug 12 23:58:19.139104 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Aug 12 23:58:19.139119 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Aug 12 23:58:19.139138 kernel: Zone ranges:
Aug 12 23:58:19.139148 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 12 23:58:19.139158 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Aug 12 23:58:19.139168 kernel: Normal empty
Aug 12 23:58:19.139178 kernel: Movable zone start for each node
Aug 12 23:58:19.139188 kernel: Early memory node ranges
Aug 12 23:58:19.139205 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 12 23:58:19.139216 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Aug 12 23:58:19.139227 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Aug 12 23:58:19.139249 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 12 23:58:19.139273 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 12 23:58:19.139284 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Aug 12 23:58:19.139294 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 12 23:58:19.139311 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 12 23:58:19.139323 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 12 23:58:19.139334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 12 23:58:19.139348 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 12 23:58:19.139366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 12 23:58:19.139385 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 12 23:58:19.139408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 12 23:58:19.139423 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 12 23:58:19.139434 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 12 23:58:19.139444 kernel: TSC deadline timer available
Aug 12 23:58:19.139483 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 12 23:58:19.139498 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 12 23:58:19.139507 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 12 23:58:19.139521 kernel: kvm-guest: setup PV sched yield
Aug 12 23:58:19.139530 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 12 23:58:19.139545 kernel: Booting paravirtualized kernel on KVM
Aug 12 23:58:19.139555 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 12 23:58:19.139564 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 12 23:58:19.139574 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Aug 12 23:58:19.139583 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Aug 12 23:58:19.139592 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 12 23:58:19.139601 kernel: kvm-guest: PV spinlocks enabled
Aug 12 23:58:19.139611 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 12 23:58:19.139622 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 12 23:58:19.139650 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:58:19.139659 kernel: random: crng init done
Aug 12 23:58:19.139668 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 12 23:58:19.139678 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:58:19.139687 kernel: Fallback order for Node 0: 0
Aug 12 23:58:19.139697 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Aug 12 23:58:19.139706 kernel: Policy zone: DMA32
Aug 12 23:58:19.139715 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:58:19.139728 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 138948K reserved, 0K cma-reserved)
Aug 12 23:58:19.139738 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 12 23:58:19.139747 kernel: ftrace: allocating 37942 entries in 149 pages
Aug 12 23:58:19.139757 kernel: ftrace: allocated 149 pages with 4 groups
Aug 12 23:58:19.139766 kernel: Dynamic Preempt: voluntary
Aug 12 23:58:19.139776 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:58:19.139786 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:58:19.139796 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 12 23:58:19.139805 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:58:19.139818 kernel: Rude variant of Tasks RCU enabled.
Aug 12 23:58:19.139827 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:58:19.139836 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:58:19.139849 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 12 23:58:19.139858 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 12 23:58:19.139868 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 12 23:58:19.139877 kernel: Console: colour VGA+ 80x25
Aug 12 23:58:19.139886 kernel: printk: console [ttyS0] enabled
Aug 12 23:58:19.139895 kernel: ACPI: Core revision 20230628
Aug 12 23:58:19.139908 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 12 23:58:19.139917 kernel: APIC: Switch to symmetric I/O mode setup
Aug 12 23:58:19.139927 kernel: x2apic enabled
Aug 12 23:58:19.139936 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 12 23:58:19.139945 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 12 23:58:19.139955 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 12 23:58:19.139965 kernel: kvm-guest: setup PV IPIs
Aug 12 23:58:19.139986 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 12 23:58:19.139996 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 12 23:58:19.140015 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 12 23:58:19.140026 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 12 23:58:19.140038 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 12 23:58:19.140069 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 12 23:58:19.140080 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 12 23:58:19.140091 kernel: Spectre V2 : Mitigation: Retpolines
Aug 12 23:58:19.140103 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 12 23:58:19.140113 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 12 23:58:19.140127 kernel: RETBleed: Mitigation: untrained return thunk
Aug 12 23:58:19.140142 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 12 23:58:19.140153 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 12 23:58:19.140164 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 12 23:58:19.140176 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 12 23:58:19.140187 kernel: x86/bugs: return thunk changed
Aug 12 23:58:19.140198 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 12 23:58:19.140209 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 12 23:58:19.140224 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 12 23:58:19.140238 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 12 23:58:19.140254 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 12 23:58:19.140267 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 12 23:58:19.140285 kernel: Freeing SMP alternatives memory: 32K
Aug 12 23:58:19.140303 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:58:19.140317 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 12 23:58:19.140334 kernel: landlock: Up and running.
Aug 12 23:58:19.140351 kernel: SELinux: Initializing.
Aug 12 23:58:19.140377 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:58:19.140396 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:58:19.140410 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 12 23:58:19.140421 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:58:19.140439 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:58:19.140451 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:58:19.140462 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 12 23:58:19.140478 kernel: ... version: 0
Aug 12 23:58:19.140493 kernel: ... bit width: 48
Aug 12 23:58:19.140512 kernel: ... generic registers: 6
Aug 12 23:58:19.140524 kernel: ... value mask: 0000ffffffffffff
Aug 12 23:58:19.140534 kernel: ... max period: 00007fffffffffff
Aug 12 23:58:19.140544 kernel: ... fixed-purpose events: 0
Aug 12 23:58:19.140555 kernel: ... event mask: 000000000000003f
Aug 12 23:58:19.140565 kernel: signal: max sigframe size: 1776
Aug 12 23:58:19.140576 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:58:19.140587 kernel: rcu: Max phase no-delay instances is 400.
Aug 12 23:58:19.140598 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:58:19.140614 kernel: smpboot: x86: Booting SMP configuration:
Aug 12 23:58:19.140624 kernel: .... node #0, CPUs: #1 #2 #3
Aug 12 23:58:19.140672 kernel: smp: Brought up 1 node, 4 CPUs
Aug 12 23:58:19.140684 kernel: smpboot: Max logical packages: 1
Aug 12 23:58:19.140695 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 12 23:58:19.140706 kernel: devtmpfs: initialized
Aug 12 23:58:19.140717 kernel: x86/mm: Memory block size: 128MB
Aug 12 23:58:19.140728 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:58:19.140738 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 12 23:58:19.140754 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:58:19.140765 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:58:19.140776 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:58:19.140787 kernel: audit: type=2000 audit(1755043097.739:1): state=initialized audit_enabled=0 res=1
Aug 12 23:58:19.140798 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:58:19.140809 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 12 23:58:19.140820 kernel: cpuidle: using governor menu
Aug 12 23:58:19.140831 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:58:19.140841 kernel: dca service started, version 1.12.1
Aug 12 23:58:19.140857 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 12 23:58:19.140868 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 12 23:58:19.140879 kernel: PCI: Using configuration type 1 for base access
Aug 12 23:58:19.140890 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 12 23:58:19.140904 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 12 23:58:19.140919 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 12 23:58:19.140930 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:58:19.140941 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 12 23:58:19.140952 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:58:19.140974 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:58:19.140988 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:58:19.140999 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:58:19.142050 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 12 23:58:19.142065 kernel: ACPI: Interpreter enabled
Aug 12 23:58:19.142075 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 12 23:58:19.142086 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 12 23:58:19.142096 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 12 23:58:19.142108 kernel: PCI: Using E820 reservations for host bridge windows
Aug 12 23:58:19.142126 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 12 23:58:19.142137 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 12 23:58:19.142521 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:58:19.142770 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 12 23:58:19.142999 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 12 23:58:19.143030 kernel: PCI host bridge to bus 0000:00
Aug 12 23:58:19.143241 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 12 23:58:19.143408 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 12 23:58:19.143567 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 12 23:58:19.143757 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 12 23:58:19.143927 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 12 23:58:19.144097 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Aug 12 23:58:19.144235 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 12 23:58:19.144428 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 12 23:58:19.144619 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 12 23:58:19.144811 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug 12 23:58:19.144969 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug 12 23:58:19.145140 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug 12 23:58:19.145299 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 12 23:58:19.145489 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 12 23:58:19.145713 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Aug 12 23:58:19.145891 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug 12 23:58:19.146082 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 12 23:58:19.146279 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 12 23:58:19.146455 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Aug 12 23:58:19.146683 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug 12 23:58:19.146852 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 12 23:58:19.147059 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 12 23:58:19.147240 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Aug 12 23:58:19.147428 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Aug 12 23:58:19.147596 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Aug 12 23:58:19.147809 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug 12 23:58:19.148002 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 12 23:58:19.148184 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 12 23:58:19.148439 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 12 23:58:19.148628 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Aug 12 23:58:19.148912 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Aug 12 23:58:19.149112 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 12 23:58:19.149296 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Aug 12 23:58:19.149318 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 12 23:58:19.149333 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 12 23:58:19.149352 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 12 23:58:19.149362 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 12 23:58:19.149373 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 12 23:58:19.149389 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 12 23:58:19.149401 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 12 23:58:19.149416 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 12 23:58:19.149428 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 12 23:58:19.149445 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 12 23:58:19.149456 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 12 23:58:19.149473 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 12 23:58:19.149484 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 12 23:58:19.149495 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 12 23:58:19.149505 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 12 23:58:19.149522 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 12 23:58:19.149535 kernel: iommu: Default domain type: Translated
Aug 12 23:58:19.149546 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 12 23:58:19.149557 kernel: PCI: Using ACPI for IRQ routing
Aug 12 23:58:19.149573 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 12 23:58:19.149597 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 12 23:58:19.149612 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Aug 12 23:58:19.149851 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 12 23:58:19.150067 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 12 23:58:19.150287 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 12 23:58:19.150316 kernel: vgaarb: loaded
Aug 12 23:58:19.150340 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 12 23:58:19.150352 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 12 23:58:19.150379 kernel: clocksource: Switched to clocksource kvm-clock
Aug 12 23:58:19.150389 kernel: VFS: Disk quotas dquot_6.6.0
Aug 12 23:58:19.150400 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 12 23:58:19.150410 kernel: pnp: PnP ACPI init
Aug 12 23:58:19.150605 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 12 23:58:19.150625 kernel: pnp: PnP ACPI: found 6 devices
Aug 12 23:58:19.150653 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 12 23:58:19.150665 kernel: NET: Registered PF_INET protocol family
Aug 12 23:58:19.150687 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 12 23:58:19.150702 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 12 23:58:19.150713 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 12 23:58:19.150723 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 12 23:58:19.150734 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 12 23:58:19.150744 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 12 23:58:19.150754 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:58:19.150764 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:58:19.150775 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 12 23:58:19.150791 kernel: NET: Registered PF_XDP protocol family
Aug 12 23:58:19.150959 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 12 23:58:19.151136 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 12 23:58:19.151312 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 12 23:58:19.151545 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 12 23:58:19.151783 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 12 23:58:19.151984 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Aug 12 23:58:19.152002 kernel: PCI: CLS 0 bytes, default 64
Aug 12 23:58:19.152033 kernel: Initialise system trusted keyrings
Aug 12 23:58:19.152044 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 12 23:58:19.152055 kernel: Key type asymmetric registered
Aug 12 23:58:19.152065 kernel: Asymmetric key parser 'x509' registered
Aug 12 23:58:19.152075 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 12 23:58:19.152085 kernel: io scheduler mq-deadline registered
Aug 12 23:58:19.152095 kernel: io scheduler kyber registered
Aug 12 23:58:19.152105 kernel: io scheduler bfq registered
Aug 12 23:58:19.152114 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 12 23:58:19.152125 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 12 23:58:19.152139 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 12 23:58:19.152149 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 12 23:58:19.152160 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 12 23:58:19.152185 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 12 23:58:19.152205 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 12 23:58:19.152225 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 12 23:58:19.152237 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 12 23:58:19.152435 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 12 23:58:19.152462 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 12 23:58:19.152630 kernel: rtc_cmos 00:04: registered as rtc0
Aug 12 23:58:19.152901 kernel: rtc_cmos 00:04: setting system clock to 2025-08-12T23:58:18 UTC (1755043098)
Aug 12 23:58:19.153102 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 12 23:58:19.153121 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 12 23:58:19.153132 kernel: NET: Registered PF_INET6 protocol family
Aug 12 23:58:19.153143 kernel: Segment Routing with IPv6
Aug 12 23:58:19.153154 kernel: In-situ OAM (IOAM) with IPv6
Aug 12 23:58:19.153171 kernel: NET: Registered PF_PACKET protocol family
Aug 12 23:58:19.153181 kernel: Key type dns_resolver registered
Aug 12 23:58:19.153192 kernel: IPI shorthand broadcast: enabled
Aug 12 23:58:19.153203 kernel: sched_clock: Marking stable (884002763, 103407871)->(1004856667, -17446033)
Aug 12 23:58:19.153213 kernel: registered taskstats version 1
Aug 12 23:58:19.153225 kernel: Loading compiled-in X.509 certificates
Aug 12 23:58:19.153235 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb'
Aug 12 23:58:19.153246 kernel: Key type .fscrypt registered
Aug 12 23:58:19.153256 kernel: Key type fscrypt-provisioning registered
Aug 12 23:58:19.153270 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 12 23:58:19.153281 kernel: ima: Allocated hash algorithm: sha1
Aug 12 23:58:19.153292 kernel: ima: No architecture policies found
Aug 12 23:58:19.153303 kernel: clk: Disabling unused clocks
Aug 12 23:58:19.153313 kernel: Freeing unused kernel image (initmem) memory: 43504K
Aug 12 23:58:19.153324 kernel: Write protecting the kernel read-only data: 38912k
Aug 12 23:58:19.153336 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Aug 12 23:58:19.153347 kernel: Run /init as init process
Aug 12 23:58:19.153357 kernel: with arguments:
Aug 12 23:58:19.153374 kernel: /init
Aug 12 23:58:19.153386 kernel: with environment:
Aug 12 23:58:19.153397 kernel: HOME=/
Aug 12 23:58:19.153407 kernel: TERM=linux
Aug 12 23:58:19.153417 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 12 23:58:19.153434 systemd[1]: Successfully made /usr/ read-only.
Aug 12 23:58:19.153451 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:58:19.153462 systemd[1]: Detected virtualization kvm.
Aug 12 23:58:19.153477 systemd[1]: Detected architecture x86-64.
Aug 12 23:58:19.153488 systemd[1]: Running in initrd.
Aug 12 23:58:19.153498 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:58:19.153510 systemd[1]: Hostname set to .
Aug 12 23:58:19.153520 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:58:19.153531 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:58:19.153542 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:58:19.153569 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:58:19.153587 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 12 23:58:19.153652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:58:19.153692 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 12 23:58:19.153708 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 12 23:58:19.153722 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 12 23:58:19.153750 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 12 23:58:19.153783 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:58:19.153798 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:58:19.153831 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:58:19.153855 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:58:19.153869 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:58:19.153893 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:58:19.153915 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:58:19.153934 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:58:19.153947 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 12 23:58:19.153959 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 12 23:58:19.153971 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:58:19.153982 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:58:19.153994 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:58:19.154016 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:58:19.154029 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 12 23:58:19.154040 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:58:19.154056 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 12 23:58:19.154069 systemd[1]: Starting systemd-fsck-usr.service...
Aug 12 23:58:19.154080 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:58:19.154093 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:58:19.154105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:58:19.154117 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 12 23:58:19.154182 systemd-journald[194]: Collecting audit messages is disabled.
Aug 12 23:58:19.154218 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:58:19.154231 systemd[1]: Finished systemd-fsck-usr.service.
Aug 12 23:58:19.154246 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:58:19.154259 systemd-journald[194]: Journal started
Aug 12 23:58:19.154292 systemd-journald[194]: Runtime Journal (/run/log/journal/202dd1a2aeb944b2970a798e21d73f94) is 6M, max 48.4M, 42.3M free.
Aug 12 23:58:19.158745 systemd-modules-load[195]: Inserted module 'overlay'
Aug 12 23:58:19.197399 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:58:19.199186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:58:19.201490 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:58:19.229714 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 12 23:58:19.234728 kernel: Bridge firewalling registered
Aug 12 23:58:19.236403 systemd-modules-load[195]: Inserted module 'br_netfilter'
Aug 12 23:58:19.237844 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:58:19.258949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:58:19.272054 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:58:19.280454 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:58:19.285232 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:58:19.287247 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:58:19.307031 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 12 23:58:19.313542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:58:19.333561 dracut-cmdline[222]: dracut-dracut-053
Aug 12 23:58:19.340290 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:58:19.343839 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:58:19.345702 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 12 23:58:19.369975 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:58:19.428375 systemd-resolved[248]: Positive Trust Anchors:
Aug 12 23:58:19.428411 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:58:19.428450 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:58:19.433914 systemd-resolved[248]: Defaulting to hostname 'linux'.
Aug 12 23:58:19.435789 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:58:19.447332 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:58:19.541067 kernel: SCSI subsystem initialized
Aug 12 23:58:19.558977 kernel: Loading iSCSI transport class v2.0-870.
Aug 12 23:58:19.582443 kernel: iscsi: registered transport (tcp)
Aug 12 23:58:19.622293 kernel: iscsi: registered transport (qla4xxx)
Aug 12 23:58:19.622374 kernel: QLogic iSCSI HBA Driver
Aug 12 23:58:19.695866 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:58:19.710921 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 12 23:58:19.757031 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 12 23:58:19.757125 kernel: device-mapper: uevent: version 1.0.3
Aug 12 23:58:19.757144 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 12 23:58:19.844705 kernel: raid6: avx2x4 gen() 18944 MB/s
Aug 12 23:58:19.860720 kernel: raid6: avx2x2 gen() 17898 MB/s
Aug 12 23:58:19.880721 kernel: raid6: avx2x1 gen() 15083 MB/s
Aug 12 23:58:19.880835 kernel: raid6: using algorithm avx2x4 gen() 18944 MB/s
Aug 12 23:58:19.900040 kernel: raid6: .... xor() 5428 MB/s, rmw enabled
Aug 12 23:58:19.900131 kernel: raid6: using avx2x2 recovery algorithm
Aug 12 23:58:19.928724 kernel: xor: automatically using best checksumming function avx
Aug 12 23:58:20.215756 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 12 23:58:20.236734 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:58:20.249992 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:58:20.277165 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Aug 12 23:58:20.286738 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:58:20.294016 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 12 23:58:20.320306 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Aug 12 23:58:20.393034 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:58:20.406934 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:58:20.513495 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:58:20.528993 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 12 23:58:20.548773 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:58:20.553766 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:58:20.556920 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:58:20.557407 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:58:20.569317 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 12 23:58:20.582019 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:58:20.591680 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 12 23:58:20.591955 kernel: cryptd: max_cpu_qlen set to 1000
Aug 12 23:58:20.605085 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 12 23:58:20.609718 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 12 23:58:20.614629 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 12 23:58:20.614691 kernel: GPT:9289727 != 19775487
Aug 12 23:58:20.614707 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 12 23:58:20.614733 kernel: GPT:9289727 != 19775487
Aug 12 23:58:20.614746 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 12 23:58:20.614760 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:58:20.616665 kernel: AES CTR mode by8 optimization enabled
Aug 12 23:58:20.631714 kernel: libata version 3.00 loaded.
Aug 12 23:58:20.644672 kernel: ahci 0000:00:1f.2: version 3.0
Aug 12 23:58:20.648674 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 12 23:58:20.651975 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 12 23:58:20.652267 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 12 23:58:20.654395 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:58:20.658798 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:58:20.662921 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:58:20.677135 kernel: scsi host0: ahci
Aug 12 23:58:20.677502 kernel: scsi host1: ahci
Aug 12 23:58:20.663458 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:58:20.663550 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:58:20.671459 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:58:20.688016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:58:20.688753 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 12 23:58:20.700671 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (467)
Aug 12 23:58:20.700733 kernel: scsi host2: ahci
Aug 12 23:58:20.705727 kernel: scsi host3: ahci
Aug 12 23:58:20.713663 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/vda3 scanned by (udev-worker) (470)
Aug 12 23:58:20.732062 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 12 23:58:20.736713 kernel: scsi host4: ahci
Aug 12 23:58:20.737041 kernel: scsi host5: ahci
Aug 12 23:58:20.737447 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Aug 12 23:58:20.739098 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Aug 12 23:58:20.739131 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Aug 12 23:58:20.742015 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Aug 12 23:58:20.742051 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Aug 12 23:58:20.742067 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Aug 12 23:58:20.757280 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 12 23:58:20.778671 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:58:20.791818 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 12 23:58:20.794897 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 12 23:58:21.061752 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 12 23:58:21.061964 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 12 23:58:21.062005 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 12 23:58:21.062038 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 12 23:58:21.062057 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 12 23:58:21.062102 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 12 23:58:21.351751 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 12 23:58:21.351859 kernel: ata3.00: applying bridge limits
Aug 12 23:58:21.376704 kernel: ata3.00: configured for UDMA/100
Aug 12 23:58:21.376702 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 12 23:58:21.442808 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 12 23:58:21.427978 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:58:21.453425 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 12 23:58:21.453815 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 12 23:58:21.459096 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:58:21.481668 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 12 23:58:21.507589 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:58:21.604101 disk-uuid[567]: Primary Header is updated.
Aug 12 23:58:21.604101 disk-uuid[567]: Secondary Entries is updated.
Aug 12 23:58:21.604101 disk-uuid[567]: Secondary Header is updated.
Aug 12 23:58:21.620552 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:58:21.627659 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:58:22.687827 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:58:22.687955 kernel: block device autoloading is deprecated and will be removed.
Aug 12 23:58:22.691532 disk-uuid[580]: The operation has completed successfully.
Aug 12 23:58:22.822951 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 12 23:58:22.823185 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 12 23:58:22.889209 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 12 23:58:22.897990 sh[596]: Success
Aug 12 23:58:22.930693 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 12 23:58:23.024361 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 12 23:58:23.029368 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 12 23:58:23.063040 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 12 23:58:23.094937 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7
Aug 12 23:58:23.095039 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 12 23:58:23.095054 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 12 23:58:23.097263 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 12 23:58:23.097299 kernel: BTRFS info (device dm-0): using free space tree
Aug 12 23:58:23.119245 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 12 23:58:23.119950 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 12 23:58:23.143088 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 12 23:58:23.150830 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 12 23:58:23.213252 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:58:23.213350 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 12 23:58:23.213368 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:58:23.227281 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:58:23.238673 kernel: BTRFS info (device vda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:58:23.254242 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 12 23:58:23.271626 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 12 23:58:23.811963 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 12 23:58:23.850325 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:58:23.881759 ignition[679]: Ignition 2.20.0
Aug 12 23:58:23.881779 ignition[679]: Stage: fetch-offline
Aug 12 23:58:23.881885 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:58:23.881906 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:58:23.882068 ignition[679]: parsed url from cmdline: ""
Aug 12 23:58:23.882074 ignition[679]: no config URL provided
Aug 12 23:58:23.882083 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Aug 12 23:58:23.882099 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Aug 12 23:58:23.882151 ignition[679]: op(1): [started] loading QEMU firmware config module
Aug 12 23:58:23.882162 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 12 23:58:23.915990 ignition[679]: op(1): [finished] loading QEMU firmware config module
Aug 12 23:58:23.935834 systemd-networkd[780]: lo: Link UP
Aug 12 23:58:23.936112 systemd-networkd[780]: lo: Gained carrier
Aug 12 23:58:23.951294 systemd-networkd[780]: Enumeration completed
Aug 12 23:58:23.951775 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:58:23.953947 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:58:23.953953 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:58:23.958904 systemd-networkd[780]: eth0: Link UP
Aug 12 23:58:23.958910 systemd-networkd[780]: eth0: Gained carrier
Aug 12 23:58:23.958924 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:58:23.962660 systemd[1]: Reached target network.target - Network.
Aug 12 23:58:23.995359 ignition[679]: parsing config with SHA512: 55a4e0b36d5d2781144b035ff03cc8a7523d0f3c56ca0c3fa9974f981d49c5137cbe369888836364b178af70afbb9004539e07948cd27989bb5ded7099748e2a
Aug 12 23:58:23.995767 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:58:24.001777 unknown[679]: fetched base config from "system"
Aug 12 23:58:24.001799 unknown[679]: fetched user config from "qemu"
Aug 12 23:58:24.002357 ignition[679]: fetch-offline: fetch-offline passed
Aug 12 23:58:24.002468 ignition[679]: Ignition finished successfully
Aug 12 23:58:24.011074 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 12 23:58:24.012699 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 12 23:58:24.025164 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 12 23:58:24.055188 ignition[789]: Ignition 2.20.0
Aug 12 23:58:24.055208 ignition[789]: Stage: kargs
Aug 12 23:58:24.055460 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:58:24.055475 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:58:24.056654 ignition[789]: kargs: kargs passed
Aug 12 23:58:24.056736 ignition[789]: Ignition finished successfully
Aug 12 23:58:24.067884 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 12 23:58:24.081689 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 12 23:58:24.118466 ignition[798]: Ignition 2.20.0
Aug 12 23:58:24.118484 ignition[798]: Stage: disks
Aug 12 23:58:24.123255 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 12 23:58:24.118789 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:58:24.118806 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:58:24.120033 ignition[798]: disks: disks passed
Aug 12 23:58:24.120102 ignition[798]: Ignition finished successfully
Aug 12 23:58:24.162030 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 12 23:58:24.198572 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 12 23:58:24.203659 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:58:24.203997 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:58:24.210974 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:58:24.247281 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 12 23:58:24.329931 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 12 23:58:24.525051 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 12 23:58:24.550906 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 12 23:58:24.792602 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 12 23:58:24.801986 kernel: EXT4-fs (vda9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none.
Aug 12 23:58:24.797750 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:58:24.817819 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:58:24.833995 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 12 23:58:24.847789 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (816)
Aug 12 23:58:24.847850 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:58:24.847867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 12 23:58:24.847882 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:58:24.839018 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 12 23:58:24.857093 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:58:24.839102 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 12 23:58:24.839142 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:58:24.858819 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:58:24.873760 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 12 23:58:24.887911 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 12 23:58:24.978993 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Aug 12 23:58:24.999345 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Aug 12 23:58:25.008557 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Aug 12 23:58:25.021809 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 12 23:58:25.215667 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 12 23:58:25.235873 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 12 23:58:25.244147 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 12 23:58:25.257992 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 12 23:58:25.259926 kernel: BTRFS info (device vda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:58:25.324521 systemd-networkd[780]: eth0: Gained IPv6LL
Aug 12 23:58:25.351597 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 12 23:58:25.419705 ignition[932]: INFO : Ignition 2.20.0
Aug 12 23:58:25.419705 ignition[932]: INFO : Stage: mount
Aug 12 23:58:25.419705 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:58:25.419705 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:58:25.440054 ignition[932]: INFO : mount: mount passed
Aug 12 23:58:25.440054 ignition[932]: INFO : Ignition finished successfully
Aug 12 23:58:25.431951 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 12 23:58:25.466842 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 12 23:58:25.821254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:58:25.837883 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (942)
Aug 12 23:58:25.842159 kernel: BTRFS info (device vda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 12 23:58:25.842242 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 12 23:58:25.846903 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:58:25.873813 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:58:25.889842 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:58:26.159289 ignition[959]: INFO : Ignition 2.20.0
Aug 12 23:58:26.159289 ignition[959]: INFO : Stage: files
Aug 12 23:58:26.166744 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:58:26.166744 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:58:26.166744 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Aug 12 23:58:26.166744 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 12 23:58:26.166744 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 12 23:58:26.183844 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 12 23:58:26.183844 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 12 23:58:26.190191 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 12 23:58:26.184238 unknown[959]: wrote ssh authorized keys file for user: core
Aug 12 23:58:26.195544 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 12 23:58:26.195544 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 12 23:58:26.394251 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 12 23:58:27.078848 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 12 23:58:27.078848 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 12 23:58:27.086006 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 12 23:58:27.299978 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 12 23:58:27.703795 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 12 23:58:27.703795 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 12 23:58:27.711074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 12 23:58:28.056717 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 12 23:58:29.955313 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 12 23:58:29.955313 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 12 23:58:29.963153 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:58:29.963153 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:58:29.963153 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 12 23:58:29.963153 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 12 23:58:29.963153 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 12 23:58:29.963153 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 12 23:58:29.963153 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 12 23:58:29.963153 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Aug 12 23:58:30.008265 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 12 23:58:30.019872 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 12 23:58:30.022046 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 12 23:58:30.022046 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Aug 12 23:58:30.022046 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Aug 12 23:58:30.022046 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:58:30.022046 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:58:30.022046 ignition[959]: INFO : files: files passed
Aug 12 23:58:30.022046 ignition[959]: INFO : Ignition finished successfully
Aug 12 23:58:30.029390 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 12 23:58:30.048944 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 12 23:58:30.054597 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 12 23:58:30.064162 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 12 23:58:30.064348 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 12 23:58:30.091594 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Aug 12 23:58:30.095516 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:58:30.095516 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:58:30.101359 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:58:30.105778 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:58:30.108129 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 12 23:58:30.123255 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 12 23:58:30.170964 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 12 23:58:30.171163 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 12 23:58:30.172268 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 12 23:58:30.176118 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 12 23:58:30.181384 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 12 23:58:30.192015 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 12 23:58:30.219399 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:58:30.227116 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 12 23:58:30.245827 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:58:30.246736 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:58:30.250237 systemd[1]: Stopped target timers.target - Timer Units. 
Aug 12 23:58:30.253073 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 12 23:58:30.253289 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:58:30.258924 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 12 23:58:30.259558 systemd[1]: Stopped target basic.target - Basic System. Aug 12 23:58:30.260433 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 12 23:58:30.261042 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:58:30.261425 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 12 23:58:30.262029 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 12 23:58:30.262416 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:58:30.263043 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 12 23:58:30.278205 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 12 23:58:30.280387 systemd[1]: Stopped target swap.target - Swaps. Aug 12 23:58:30.280812 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 12 23:58:30.281037 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 12 23:58:30.285978 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 12 23:58:30.289404 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:58:30.290168 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 12 23:58:30.290907 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:58:30.294498 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 12 23:58:30.294757 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 12 23:58:30.301662 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Aug 12 23:58:30.301966 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 12 23:58:30.303180 systemd[1]: Stopped target paths.target - Path Units. Aug 12 23:58:30.309336 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 12 23:58:30.313882 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:58:30.319444 systemd[1]: Stopped target slices.target - Slice Units. Aug 12 23:58:30.323973 systemd[1]: Stopped target sockets.target - Socket Units. Aug 12 23:58:30.324495 systemd[1]: iscsid.socket: Deactivated successfully. Aug 12 23:58:30.324703 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 12 23:58:30.327789 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 12 23:58:30.327989 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 12 23:58:30.342702 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 12 23:58:30.343111 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:58:30.345364 systemd[1]: ignition-files.service: Deactivated successfully. Aug 12 23:58:30.345561 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 12 23:58:30.374107 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 12 23:58:30.374544 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 12 23:58:30.374775 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:58:30.380992 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 12 23:58:30.381385 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 12 23:58:30.381585 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:58:30.386618 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Aug 12 23:58:30.386845 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 12 23:58:30.418048 ignition[1013]: INFO : Ignition 2.20.0 Aug 12 23:58:30.418048 ignition[1013]: INFO : Stage: umount Aug 12 23:58:30.418048 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:58:30.418048 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:58:30.418048 ignition[1013]: INFO : umount: umount passed Aug 12 23:58:30.418048 ignition[1013]: INFO : Ignition finished successfully Aug 12 23:58:30.405340 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 12 23:58:30.405528 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 12 23:58:30.416816 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 12 23:58:30.417017 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 12 23:58:30.422850 systemd[1]: Stopped target network.target - Network. Aug 12 23:58:30.430190 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 12 23:58:30.430330 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 12 23:58:30.433345 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 12 23:58:30.433445 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 12 23:58:30.437259 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 12 23:58:30.437367 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 12 23:58:30.438584 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 12 23:58:30.438687 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 12 23:58:30.439394 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 12 23:58:30.446368 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 12 23:58:30.455981 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Aug 12 23:58:30.481662 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 12 23:58:30.481912 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 12 23:58:30.495533 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 12 23:58:30.497857 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 12 23:58:30.498073 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 12 23:58:30.505100 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 12 23:58:30.505261 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 12 23:58:30.507097 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 12 23:58:30.507172 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:58:30.527607 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 12 23:58:30.528049 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 12 23:58:30.528233 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 12 23:58:30.533033 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 12 23:58:30.534199 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 12 23:58:30.534348 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:58:30.551920 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 12 23:58:30.552215 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 12 23:58:30.552329 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:58:30.556420 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:58:30.556540 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Aug 12 23:58:30.561422 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 12 23:58:30.562574 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 12 23:58:30.565421 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 12 23:58:30.569074 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 12 23:58:30.583230 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 12 23:58:30.583483 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 12 23:58:30.585064 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 12 23:58:30.585358 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 12 23:58:30.590624 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 12 23:58:30.590859 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 12 23:58:30.591543 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 12 23:58:30.591606 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 12 23:58:30.597487 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 12 23:58:30.597618 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 12 23:58:30.601265 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 12 23:58:30.601350 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 12 23:58:30.602336 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 12 23:58:30.602401 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:58:30.621029 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 12 23:58:30.624723 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Aug 12 23:58:30.626089 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 12 23:58:30.629460 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 12 23:58:30.631701 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:58:30.636579 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 12 23:58:30.636778 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 12 23:58:30.641594 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 12 23:58:30.654093 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 12 23:58:30.668621 systemd[1]: Switching root. Aug 12 23:58:30.710780 systemd-journald[194]: Journal stopped Aug 12 23:58:33.576517 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Aug 12 23:58:33.576650 kernel: SELinux: policy capability network_peer_controls=1 Aug 12 23:58:33.576691 kernel: SELinux: policy capability open_perms=1 Aug 12 23:58:33.576708 kernel: SELinux: policy capability extended_socket_class=1 Aug 12 23:58:33.576724 kernel: SELinux: policy capability always_check_network=0 Aug 12 23:58:33.576741 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 12 23:58:33.576769 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 12 23:58:33.576785 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 12 23:58:33.576802 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 12 23:58:33.576819 kernel: audit: type=1403 audit(1755043111.856:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 12 23:58:33.576837 systemd[1]: Successfully loaded SELinux policy in 56.705ms. Aug 12 23:58:33.576869 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 22.862ms. 
Aug 12 23:58:33.576889 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 12 23:58:33.576907 systemd[1]: Detected virtualization kvm. Aug 12 23:58:33.576924 systemd[1]: Detected architecture x86-64. Aug 12 23:58:33.576949 systemd[1]: Detected first boot. Aug 12 23:58:33.576967 systemd[1]: Initializing machine ID from VM UUID. Aug 12 23:58:33.576991 zram_generator::config[1060]: No configuration found. Aug 12 23:58:33.577010 kernel: Guest personality initialized and is inactive Aug 12 23:58:33.577027 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 12 23:58:33.577043 kernel: Initialized host personality Aug 12 23:58:33.577059 kernel: NET: Registered PF_VSOCK protocol family Aug 12 23:58:33.577076 systemd[1]: Populated /etc with preset unit settings. Aug 12 23:58:33.577098 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 12 23:58:33.577115 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 12 23:58:33.577133 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 12 23:58:33.577151 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 12 23:58:33.577168 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 12 23:58:33.577189 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 12 23:58:33.577207 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 12 23:58:33.577225 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 12 23:58:33.577242 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Aug 12 23:58:33.577264 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 12 23:58:33.577281 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 12 23:58:33.577299 systemd[1]: Created slice user.slice - User and Session Slice. Aug 12 23:58:33.577316 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:58:33.577334 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:58:33.577352 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 12 23:58:33.577374 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 12 23:58:33.577392 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 12 23:58:33.577414 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 12 23:58:33.577432 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 12 23:58:33.577450 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:58:33.577467 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 12 23:58:33.577485 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 12 23:58:33.577503 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 12 23:58:33.577520 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 12 23:58:33.577538 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:58:33.577559 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 12 23:58:33.577593 systemd[1]: Reached target slices.target - Slice Units. Aug 12 23:58:33.577612 systemd[1]: Reached target swap.target - Swaps. 
Aug 12 23:58:33.577630 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 12 23:58:33.577662 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 12 23:58:33.577680 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 12 23:58:33.577698 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:58:33.577715 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 12 23:58:33.577733 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 12 23:58:33.577754 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 12 23:58:33.577772 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 12 23:58:33.577792 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 12 23:58:33.577809 systemd[1]: Mounting media.mount - External Media Directory... Aug 12 23:58:33.577827 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:58:33.577844 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 12 23:58:33.577862 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 12 23:58:33.577887 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 12 23:58:33.577905 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 12 23:58:33.577926 systemd[1]: Reached target machines.target - Containers. Aug 12 23:58:33.577944 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 12 23:58:33.577961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Aug 12 23:58:33.580488 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 12 23:58:33.580519 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 12 23:58:33.580537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 12 23:58:33.580555 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 12 23:58:33.580586 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 12 23:58:33.580611 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 12 23:58:33.580644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 12 23:58:33.580665 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 12 23:58:33.580683 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 12 23:58:33.580884 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 12 23:58:33.580905 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 12 23:58:33.580922 systemd[1]: Stopped systemd-fsck-usr.service. Aug 12 23:58:33.580942 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 12 23:58:33.581956 kernel: fuse: init (API version 7.39) Aug 12 23:58:33.581988 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 12 23:58:33.582003 kernel: loop: module loaded Aug 12 23:58:33.582018 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 12 23:58:33.582033 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Aug 12 23:58:33.582049 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 12 23:58:33.582064 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 12 23:58:33.582121 systemd-journald[1145]: Collecting audit messages is disabled. Aug 12 23:58:33.582166 systemd-journald[1145]: Journal started Aug 12 23:58:33.582194 systemd-journald[1145]: Runtime Journal (/run/log/journal/202dd1a2aeb944b2970a798e21d73f94) is 6M, max 48.4M, 42.3M free. Aug 12 23:58:33.122566 systemd[1]: Queued start job for default target multi-user.target. Aug 12 23:58:33.142024 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 12 23:58:33.143250 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 12 23:58:33.590112 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 12 23:58:33.593672 systemd[1]: verity-setup.service: Deactivated successfully. Aug 12 23:58:33.593731 systemd[1]: Stopped verity-setup.service. Aug 12 23:58:33.602684 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 12 23:58:33.608146 systemd[1]: Started systemd-journald.service - Journal Service. Aug 12 23:58:33.617865 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 12 23:58:33.620928 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 12 23:58:33.623011 systemd[1]: Mounted media.mount - External Media Directory. Aug 12 23:58:33.628497 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 12 23:58:33.630025 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 12 23:58:33.631502 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 12 23:58:33.636942 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Aug 12 23:58:33.638953 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:58:33.642551 kernel: ACPI: bus type drm_connector registered Aug 12 23:58:33.641985 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 12 23:58:33.642274 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 12 23:58:33.644138 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:58:33.644421 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 12 23:58:33.646325 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 12 23:58:33.647201 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 12 23:58:33.651330 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:58:33.651721 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 12 23:58:33.653690 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 12 23:58:33.653948 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 12 23:58:33.655794 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:58:33.656133 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 12 23:58:33.658166 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 12 23:58:33.660649 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 12 23:58:33.663087 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 12 23:58:33.668287 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 12 23:58:33.691989 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 12 23:58:33.703243 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Aug 12 23:58:33.708672 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 12 23:58:33.710136 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 12 23:58:33.710185 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 12 23:58:33.713277 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 12 23:58:33.722582 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 12 23:58:33.748756 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 12 23:58:33.750440 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 12 23:58:33.763737 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 12 23:58:33.768589 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 12 23:58:33.771823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:58:33.773734 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 12 23:58:33.777811 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 12 23:58:33.785835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:58:33.793860 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 12 23:58:33.800455 systemd-journald[1145]: Time spent on flushing to /var/log/journal/202dd1a2aeb944b2970a798e21d73f94 is 19.844ms for 966 entries. Aug 12 23:58:33.800455 systemd-journald[1145]: System Journal (/var/log/journal/202dd1a2aeb944b2970a798e21d73f94) is 8M, max 195.6M, 187.6M free. 
Aug 12 23:58:33.876058 systemd-journald[1145]: Received client request to flush runtime journal. Aug 12 23:58:33.876146 kernel: loop0: detected capacity change from 0 to 224512 Aug 12 23:58:33.803419 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 12 23:58:33.813608 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:58:33.825858 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 12 23:58:33.836367 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 12 23:58:33.850946 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 12 23:58:33.854248 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 12 23:58:33.874558 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 12 23:58:33.892421 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 12 23:58:33.898943 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 12 23:58:33.904722 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 12 23:58:33.908486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:58:33.918822 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 12 23:58:33.938488 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 12 23:58:33.940080 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 12 23:58:33.957149 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 12 23:58:33.966723 kernel: loop1: detected capacity change from 0 to 147912 Aug 12 23:58:33.968817 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Aug 12 23:58:34.033431 kernel: loop2: detected capacity change from 0 to 138176 Aug 12 23:58:34.032936 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Aug 12 23:58:34.032992 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Aug 12 23:58:34.044863 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 12 23:58:34.092680 kernel: loop3: detected capacity change from 0 to 224512 Aug 12 23:58:34.148760 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 12 23:58:34.153682 kernel: loop4: detected capacity change from 0 to 147912 Aug 12 23:58:34.217630 kernel: loop5: detected capacity change from 0 to 138176 Aug 12 23:58:34.279544 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 12 23:58:34.280492 (sd-merge)[1204]: Merged extensions into '/usr'. Aug 12 23:58:34.299623 systemd[1]: Reload requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)... Aug 12 23:58:34.300273 systemd[1]: Reloading... Aug 12 23:58:34.423181 zram_generator::config[1235]: No configuration found. Aug 12 23:58:34.681461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:58:34.742023 ldconfig[1175]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 12 23:58:34.790690 systemd[1]: Reloading finished in 485 ms. Aug 12 23:58:34.830038 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 12 23:58:34.835773 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 12 23:58:34.860315 systemd[1]: Starting ensure-sysext.service... Aug 12 23:58:34.866023 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Aug 12 23:58:34.895566 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)... Aug 12 23:58:34.897613 systemd[1]: Reloading... Aug 12 23:58:34.960306 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 12 23:58:34.960855 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 12 23:58:34.962377 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 12 23:58:34.962868 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Aug 12 23:58:34.963001 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Aug 12 23:58:35.012548 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. Aug 12 23:58:35.012574 systemd-tmpfiles[1270]: Skipping /boot Aug 12 23:58:35.049706 zram_generator::config[1299]: No configuration found. Aug 12 23:58:35.056571 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. Aug 12 23:58:35.056600 systemd-tmpfiles[1270]: Skipping /boot Aug 12 23:58:35.243688 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:58:35.354779 systemd[1]: Reloading finished in 454 ms. Aug 12 23:58:35.379983 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 12 23:58:35.413287 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:58:35.449282 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 12 23:58:35.457017 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 12 23:58:35.461332 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Aug 12 23:58:35.469427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:58:35.488092 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:58:35.517168 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 12 23:58:35.524373 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 12 23:58:35.535287 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:58:35.535730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:58:35.549197 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:58:35.560810 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:58:35.568473 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:58:35.573902 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:58:35.574123 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:58:35.576608 systemd-udevd[1343]: Using default interface naming scheme 'v255'.
Aug 12 23:58:35.576919 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 12 23:58:35.583912 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 12 23:58:35.587900 augenrules[1368]: No rules
Aug 12 23:58:35.584323 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:58:35.590397 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:58:35.591162 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:58:35.594281 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 12 23:58:35.597254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:58:35.597668 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:58:35.600844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:58:35.601210 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:58:35.606482 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:58:35.606848 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:58:35.617301 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 12 23:58:35.635502 systemd[1]: Finished ensure-sysext.service.
Aug 12 23:58:35.642305 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:58:35.654875 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:58:35.660971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:58:35.668997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:58:35.694168 augenrules[1381]: /sbin/augenrules: No change
Aug 12 23:58:35.698312 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:58:35.702876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:58:35.706818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:58:35.711782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:58:35.711845 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:58:35.716070 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 12 23:58:35.719979 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 12 23:58:35.720715 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:58:35.728417 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 12 23:58:35.733797 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 12 23:58:35.737315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:58:35.737689 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:58:35.744222 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:58:35.746911 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:58:35.748974 augenrules[1422]: No rules
Aug 12 23:58:35.749545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:58:35.749961 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:58:35.753044 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:58:35.753382 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:58:35.756369 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:58:35.757340 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:58:35.795966 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:58:35.801501 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:58:35.801666 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:58:35.801710 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:58:35.801942 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 12 23:58:35.922972 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (1390)
Aug 12 23:58:36.031963 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 12 23:58:36.070129 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 12 23:58:36.072381 systemd-resolved[1342]: Positive Trust Anchors:
Aug 12 23:58:36.072411 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:58:36.072451 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:58:36.080866 systemd[1]: Reached target time-set.target - System Time Set.
Aug 12 23:58:36.086441 systemd-resolved[1342]: Defaulting to hostname 'linux'.
Aug 12 23:58:36.091338 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:58:36.093047 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:58:36.100718 kernel: ACPI: button: Power Button [PWRF]
Aug 12 23:58:36.110311 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 12 23:58:36.112729 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Aug 12 23:58:36.112986 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 12 23:58:36.113192 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Aug 12 23:58:36.160769 systemd-networkd[1438]: lo: Link UP
Aug 12 23:58:36.160782 systemd-networkd[1438]: lo: Gained carrier
Aug 12 23:58:36.165362 systemd-networkd[1438]: Enumeration completed
Aug 12 23:58:36.165526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:58:36.166084 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:58:36.166092 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:58:36.166954 systemd-networkd[1438]: eth0: Link UP
Aug 12 23:58:36.166961 systemd-networkd[1438]: eth0: Gained carrier
Aug 12 23:58:36.166978 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:58:36.175889 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:58:36.185825 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:58:36.210566 systemd[1]: Reached target network.target - Network.
Aug 12 23:58:36.229666 kernel: mousedev: PS/2 mouse device common for all mice
Aug 12 23:58:36.238804 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 12 23:58:36.241917 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:58:36.244385 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection.
Aug 12 23:58:36.246075 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 12 23:58:36.247798 systemd-timesyncd[1416]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 12 23:58:36.247879 systemd-timesyncd[1416]: Initial clock synchronization to Tue 2025-08-12 23:58:36.360955 UTC.
Aug 12 23:58:36.256625 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 12 23:58:36.333467 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 12 23:58:36.366397 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 12 23:58:36.510421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:58:36.589921 kernel: kvm_amd: TSC scaling supported
Aug 12 23:58:36.590060 kernel: kvm_amd: Nested Virtualization enabled
Aug 12 23:58:36.590118 kernel: kvm_amd: Nested Paging enabled
Aug 12 23:58:36.592628 kernel: kvm_amd: LBR virtualization supported
Aug 12 23:58:36.592708 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Aug 12 23:58:36.592748 kernel: kvm_amd: Virtual GIF supported
Aug 12 23:58:36.673955 kernel: EDAC MC: Ver: 3.0.0
Aug 12 23:58:36.708882 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 12 23:58:36.731373 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 12 23:58:36.749219 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:58:36.799827 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 12 23:58:36.804070 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:58:36.805490 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:58:36.807533 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 12 23:58:36.810165 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 12 23:58:36.812184 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 12 23:58:36.813651 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 12 23:58:36.815232 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 12 23:58:36.816730 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 12 23:58:36.816780 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:58:36.818178 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:58:36.821430 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 12 23:58:36.827236 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 12 23:58:36.832577 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 12 23:58:36.835059 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 12 23:58:36.836692 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 12 23:58:36.843206 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 12 23:58:36.848490 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 12 23:58:36.853131 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 12 23:58:36.855766 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 12 23:58:36.857493 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:58:36.858855 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:58:36.861076 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:58:36.861131 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 12 23:58:36.863119 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 12 23:58:36.868067 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 12 23:58:36.868277 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 12 23:58:36.874856 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 12 23:58:36.880451 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 12 23:58:36.883728 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 12 23:58:36.887236 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 12 23:58:36.890827 jq[1473]: false
Aug 12 23:58:36.892032 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 12 23:58:36.898990 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 12 23:58:36.906911 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 12 23:58:36.916952 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 12 23:58:36.921054 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 12 23:58:36.921997 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 12 23:58:36.931546 extend-filesystems[1474]: Found loop3
Aug 12 23:58:36.931546 extend-filesystems[1474]: Found loop4
Aug 12 23:58:36.931546 extend-filesystems[1474]: Found loop5
Aug 12 23:58:36.931546 extend-filesystems[1474]: Found sr0
Aug 12 23:58:36.931546 extend-filesystems[1474]: Found vda
Aug 12 23:58:36.931546 extend-filesystems[1474]: Found vda1
Aug 12 23:58:36.931546 extend-filesystems[1474]: Found vda2
Aug 12 23:58:36.931546 extend-filesystems[1474]: Found vda3
Aug 12 23:58:36.931546 extend-filesystems[1474]: Found usr
Aug 12 23:58:36.931546 extend-filesystems[1474]: Found vda4
Aug 12 23:58:36.931546 extend-filesystems[1474]: Found vda6
Aug 12 23:58:36.931546 extend-filesystems[1474]: Found vda7
Aug 12 23:58:36.930714 systemd[1]: Starting update-engine.service - Update Engine...
Aug 12 23:58:36.940797 dbus-daemon[1472]: [system] SELinux support is enabled
Aug 12 23:58:36.967437 extend-filesystems[1474]: Found vda9
Aug 12 23:58:36.967437 extend-filesystems[1474]: Checking size of /dev/vda9
Aug 12 23:58:36.935845 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 12 23:58:36.940524 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 12 23:58:36.978878 jq[1490]: true
Aug 12 23:58:36.941116 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 12 23:58:36.947304 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 12 23:58:36.947684 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 12 23:58:36.987923 jq[1495]: true
Aug 12 23:58:36.948173 systemd[1]: motdgen.service: Deactivated successfully.
Aug 12 23:58:36.948529 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 12 23:58:36.959347 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 12 23:58:36.991848 extend-filesystems[1474]: Resized partition /dev/vda9
Aug 12 23:58:36.959891 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 12 23:58:37.000856 update_engine[1483]: I20250812 23:58:36.993799  1483 main.cc:92] Flatcar Update Engine starting
Aug 12 23:58:36.977847 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 12 23:58:37.014101 update_engine[1483]: I20250812 23:58:37.013403  1483 update_check_scheduler.cc:74] Next update check in 5m16s
Aug 12 23:58:37.014126 extend-filesystems[1509]: resize2fs 1.47.1 (20-May-2024)
Aug 12 23:58:37.018584 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (1407)
Aug 12 23:58:36.977899 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 12 23:58:37.018946 tar[1494]: linux-amd64/LICENSE
Aug 12 23:58:37.018946 tar[1494]: linux-amd64/helm
Aug 12 23:58:36.980548 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 12 23:58:36.980971 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 12 23:58:36.981001 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 12 23:58:37.014458 systemd[1]: Started update-engine.service - Update Engine.
Aug 12 23:58:37.023884 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 12 23:58:37.110938 systemd-logind[1480]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 12 23:58:37.110991 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 12 23:58:37.111874 systemd-logind[1480]: New seat seat0.
Aug 12 23:58:37.125370 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 12 23:58:37.152745 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 12 23:58:37.200115 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 12 23:58:37.244702 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 12 23:58:37.357754 extend-filesystems[1509]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 12 23:58:37.357754 extend-filesystems[1509]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 12 23:58:37.357754 extend-filesystems[1509]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 12 23:58:37.378487 bash[1526]: Updated "/home/core/.ssh/authorized_keys"
Aug 12 23:58:37.371697 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 12 23:58:37.383703 extend-filesystems[1474]: Resized filesystem in /dev/vda9
Aug 12 23:58:37.382438 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 12 23:58:37.382884 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 12 23:58:37.391797 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 12 23:58:37.480730 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 12 23:58:37.573434 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 12 23:58:37.599728 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 12 23:58:37.613562 systemd[1]: issuegen.service: Deactivated successfully.
Aug 12 23:58:37.614227 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 12 23:58:37.641324 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 12 23:58:37.692345 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 12 23:58:37.705945 containerd[1497]: time="2025-08-12T23:58:37.705530424Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Aug 12 23:58:37.709778 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 12 23:58:37.715595 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 12 23:58:37.717916 systemd[1]: Reached target getty.target - Login Prompts.
Aug 12 23:58:37.749455 containerd[1497]: time="2025-08-12T23:58:37.748666520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:58:37.753147 containerd[1497]: time="2025-08-12T23:58:37.753081186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:58:37.753147 containerd[1497]: time="2025-08-12T23:58:37.753129859Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 12 23:58:37.753147 containerd[1497]: time="2025-08-12T23:58:37.753154075Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 12 23:58:37.753977 containerd[1497]: time="2025-08-12T23:58:37.753438009Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 12 23:58:37.753977 containerd[1497]: time="2025-08-12T23:58:37.753473586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 12 23:58:37.753977 containerd[1497]: time="2025-08-12T23:58:37.753580537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:58:37.753977 containerd[1497]: time="2025-08-12T23:58:37.753600766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:58:37.754273 containerd[1497]: time="2025-08-12T23:58:37.754010167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:58:37.754273 containerd[1497]: time="2025-08-12T23:58:37.754034362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 12 23:58:37.754273 containerd[1497]: time="2025-08-12T23:58:37.754054310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:58:37.754273 containerd[1497]: time="2025-08-12T23:58:37.754068657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 12 23:58:37.754273 containerd[1497]: time="2025-08-12T23:58:37.754219155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:58:37.754763 containerd[1497]: time="2025-08-12T23:58:37.754585896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 12 23:58:37.754928 containerd[1497]: time="2025-08-12T23:58:37.754892543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 12 23:58:37.754928 containerd[1497]: time="2025-08-12T23:58:37.754920966Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 12 23:58:37.755400 containerd[1497]: time="2025-08-12T23:58:37.755069880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 12 23:58:37.755400 containerd[1497]: time="2025-08-12T23:58:37.755177094Z" level=info msg="metadata content store policy set" policy=shared
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.768945637Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769045040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769065866Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769085501Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769103500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769351405Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769622375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769781207Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769797331Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769811102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769825551Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769851350Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769890630Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 12 23:58:37.771676 containerd[1497]: time="2025-08-12T23:58:37.769907086Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.769953327Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.769967766Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.769981790Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.769993394Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.770014128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.770027285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.770040896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.770053589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.770065696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.770077996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.770092838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.770123642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.770147787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772047 containerd[1497]: time="2025-08-12T23:58:37.770167008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770185007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770217537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770234851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770252397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770276400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770291938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770307114Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770364605Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770387407Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770401028Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770417384Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770430319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770445787Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 12 23:58:37.772354 containerd[1497]: time="2025-08-12T23:58:37.770459357Z" level=info msg="NRI interface is disabled by configuration."
Aug 12 23:58:37.772630 containerd[1497]: time="2025-08-12T23:58:37.770474663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 12 23:58:37.772676 containerd[1497]: time="2025-08-12T23:58:37.770816301Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 12 23:58:37.772676 containerd[1497]: time="2025-08-12T23:58:37.771008753Z" level=info msg="Connect containerd service" Aug 12 23:58:37.772676 containerd[1497]: time="2025-08-12T23:58:37.771051160Z" level=info msg="using legacy CRI server" Aug 12 23:58:37.772676 containerd[1497]: time="2025-08-12T23:58:37.771059090Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 12 23:58:37.772676 containerd[1497]: time="2025-08-12T23:58:37.771255870Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 12 23:58:37.773739 containerd[1497]: time="2025-08-12T23:58:37.773714916Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:58:37.774109 containerd[1497]: time="2025-08-12T23:58:37.774010484Z" level=info msg="Start subscribing containerd event" Aug 12 23:58:37.774290 containerd[1497]: time="2025-08-12T23:58:37.774146373Z" level=info msg="Start recovering state" Aug 12 23:58:37.774441 containerd[1497]: time="2025-08-12T23:58:37.774398727Z" level=info msg="Start event monitor" Aug 12 23:58:37.774496 containerd[1497]: time="2025-08-12T23:58:37.774450971Z" level=info msg="Start 
snapshots syncer" Aug 12 23:58:37.774496 containerd[1497]: time="2025-08-12T23:58:37.774470062Z" level=info msg="Start cni network conf syncer for default" Aug 12 23:58:37.774496 containerd[1497]: time="2025-08-12T23:58:37.774482018Z" level=info msg="Start streaming server" Aug 12 23:58:37.774923 containerd[1497]: time="2025-08-12T23:58:37.774902274Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 12 23:58:37.775051 containerd[1497]: time="2025-08-12T23:58:37.775036821Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 12 23:58:37.775290 systemd[1]: Started containerd.service - containerd container runtime. Aug 12 23:58:37.776377 containerd[1497]: time="2025-08-12T23:58:37.776355830Z" level=info msg="containerd successfully booted in 0.072343s" Aug 12 23:58:37.928170 systemd-networkd[1438]: eth0: Gained IPv6LL Aug 12 23:58:37.932160 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 12 23:58:37.934273 systemd[1]: Reached target network-online.target - Network is Online. Aug 12 23:58:37.945977 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 12 23:58:37.949128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:58:37.951962 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 12 23:58:38.102639 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 12 23:58:38.103762 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 12 23:58:38.106620 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 12 23:58:38.111368 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 12 23:58:38.309441 tar[1494]: linux-amd64/README.md Aug 12 23:58:38.334694 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Aug 12 23:58:40.103603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:58:40.105602 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 12 23:58:40.107096 systemd[1]: Startup finished in 1.071s (kernel) + 13.086s (initrd) + 8.304s (userspace) = 22.462s. Aug 12 23:58:40.138170 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:58:40.894257 kubelet[1586]: E0812 23:58:40.893832 1586 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:58:40.900038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:58:40.900326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:58:40.901011 systemd[1]: kubelet.service: Consumed 2.642s CPU time, 268.3M memory peak. Aug 12 23:58:46.489233 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 12 23:58:46.506068 systemd[1]: Started sshd@0-10.0.0.83:22-10.0.0.1:56230.service - OpenSSH per-connection server daemon (10.0.0.1:56230). Aug 12 23:58:46.561873 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 56230 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:58:46.563668 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:58:46.577097 systemd-logind[1480]: New session 1 of user core. Aug 12 23:58:46.578903 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 12 23:58:46.593961 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Aug 12 23:58:46.609148 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 12 23:58:46.612197 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 12 23:58:46.620938 (systemd)[1603]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:58:46.623439 systemd-logind[1480]: New session c1 of user core. Aug 12 23:58:46.769954 systemd[1603]: Queued start job for default target default.target. Aug 12 23:58:46.779161 systemd[1603]: Created slice app.slice - User Application Slice. Aug 12 23:58:46.779191 systemd[1603]: Reached target paths.target - Paths. Aug 12 23:58:46.779240 systemd[1603]: Reached target timers.target - Timers. Aug 12 23:58:46.781132 systemd[1603]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 12 23:58:46.793940 systemd[1603]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 12 23:58:46.794120 systemd[1603]: Reached target sockets.target - Sockets. Aug 12 23:58:46.794172 systemd[1603]: Reached target basic.target - Basic System. Aug 12 23:58:46.794224 systemd[1603]: Reached target default.target - Main User Target. Aug 12 23:58:46.794264 systemd[1603]: Startup finished in 163ms. Aug 12 23:58:46.794699 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 12 23:58:46.796686 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 12 23:58:46.859067 systemd[1]: Started sshd@1-10.0.0.83:22-10.0.0.1:56244.service - OpenSSH per-connection server daemon (10.0.0.1:56244). Aug 12 23:58:46.901041 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 56244 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:58:46.902676 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:58:46.907743 systemd-logind[1480]: New session 2 of user core. Aug 12 23:58:46.914792 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 12 23:58:46.967948 sshd[1616]: Connection closed by 10.0.0.1 port 56244 Aug 12 23:58:46.968267 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Aug 12 23:58:46.976313 systemd[1]: sshd@1-10.0.0.83:22-10.0.0.1:56244.service: Deactivated successfully. Aug 12 23:58:46.978215 systemd[1]: session-2.scope: Deactivated successfully. Aug 12 23:58:46.979755 systemd-logind[1480]: Session 2 logged out. Waiting for processes to exit. Aug 12 23:58:46.993948 systemd[1]: Started sshd@2-10.0.0.83:22-10.0.0.1:56248.service - OpenSSH per-connection server daemon (10.0.0.1:56248). Aug 12 23:58:46.995102 systemd-logind[1480]: Removed session 2. Aug 12 23:58:47.033115 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 56248 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:58:47.034699 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:58:47.039068 systemd-logind[1480]: New session 3 of user core. Aug 12 23:58:47.048784 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 12 23:58:47.098465 sshd[1624]: Connection closed by 10.0.0.1 port 56248 Aug 12 23:58:47.098888 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Aug 12 23:58:47.114028 systemd[1]: sshd@2-10.0.0.83:22-10.0.0.1:56248.service: Deactivated successfully. Aug 12 23:58:47.116190 systemd[1]: session-3.scope: Deactivated successfully. Aug 12 23:58:47.117800 systemd-logind[1480]: Session 3 logged out. Waiting for processes to exit. Aug 12 23:58:47.126901 systemd[1]: Started sshd@3-10.0.0.83:22-10.0.0.1:56258.service - OpenSSH per-connection server daemon (10.0.0.1:56258). Aug 12 23:58:47.128008 systemd-logind[1480]: Removed session 3. 
Aug 12 23:58:47.158593 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 56258 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:58:47.160070 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:58:47.165037 systemd-logind[1480]: New session 4 of user core. Aug 12 23:58:47.174795 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 12 23:58:47.229247 sshd[1632]: Connection closed by 10.0.0.1 port 56258 Aug 12 23:58:47.229696 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Aug 12 23:58:47.241953 systemd[1]: sshd@3-10.0.0.83:22-10.0.0.1:56258.service: Deactivated successfully. Aug 12 23:58:47.244123 systemd[1]: session-4.scope: Deactivated successfully. Aug 12 23:58:47.245949 systemd-logind[1480]: Session 4 logged out. Waiting for processes to exit. Aug 12 23:58:47.247424 systemd[1]: Started sshd@4-10.0.0.83:22-10.0.0.1:56268.service - OpenSSH per-connection server daemon (10.0.0.1:56268). Aug 12 23:58:47.248487 systemd-logind[1480]: Removed session 4. Aug 12 23:58:47.296965 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 56268 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:58:47.298433 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:58:47.303165 systemd-logind[1480]: New session 5 of user core. Aug 12 23:58:47.317778 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 12 23:58:47.379543 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 12 23:58:47.379912 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:58:47.404375 sudo[1641]: pam_unix(sudo:session): session closed for user root Aug 12 23:58:47.406080 sshd[1640]: Connection closed by 10.0.0.1 port 56268 Aug 12 23:58:47.406568 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Aug 12 23:58:47.415509 systemd[1]: sshd@4-10.0.0.83:22-10.0.0.1:56268.service: Deactivated successfully. Aug 12 23:58:47.417279 systemd[1]: session-5.scope: Deactivated successfully. Aug 12 23:58:47.419003 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit. Aug 12 23:58:47.420555 systemd[1]: Started sshd@5-10.0.0.83:22-10.0.0.1:56274.service - OpenSSH per-connection server daemon (10.0.0.1:56274). Aug 12 23:58:47.421289 systemd-logind[1480]: Removed session 5. Aug 12 23:58:47.462849 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 56274 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:58:47.464781 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:58:47.469627 systemd-logind[1480]: New session 6 of user core. Aug 12 23:58:47.479787 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 12 23:58:47.534798 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 12 23:58:47.535154 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:58:47.539249 sudo[1651]: pam_unix(sudo:session): session closed for user root Aug 12 23:58:47.546378 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 12 23:58:47.546752 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:58:47.565943 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 12 23:58:47.599624 augenrules[1673]: No rules Aug 12 23:58:47.601751 systemd[1]: audit-rules.service: Deactivated successfully. Aug 12 23:58:47.602068 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 12 23:58:47.603320 sudo[1650]: pam_unix(sudo:session): session closed for user root Aug 12 23:58:47.604962 sshd[1649]: Connection closed by 10.0.0.1 port 56274 Aug 12 23:58:47.605423 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Aug 12 23:58:47.617724 systemd[1]: sshd@5-10.0.0.83:22-10.0.0.1:56274.service: Deactivated successfully. Aug 12 23:58:47.619728 systemd[1]: session-6.scope: Deactivated successfully. Aug 12 23:58:47.621484 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit. Aug 12 23:58:47.629984 systemd[1]: Started sshd@6-10.0.0.83:22-10.0.0.1:56290.service - OpenSSH per-connection server daemon (10.0.0.1:56290). Aug 12 23:58:47.631008 systemd-logind[1480]: Removed session 6. Aug 12 23:58:47.661447 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 56290 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:58:47.663177 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:58:47.667606 systemd-logind[1480]: New session 7 of user core. 
Aug 12 23:58:47.676809 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 12 23:58:47.732878 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 12 23:58:47.733239 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:58:48.286883 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 12 23:58:48.287078 (dockerd)[1704]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 12 23:58:48.844719 dockerd[1704]: time="2025-08-12T23:58:48.844608662Z" level=info msg="Starting up" Aug 12 23:58:49.119184 dockerd[1704]: time="2025-08-12T23:58:49.119051227Z" level=info msg="Loading containers: start." Aug 12 23:58:49.309666 kernel: Initializing XFRM netlink socket Aug 12 23:58:49.401174 systemd-networkd[1438]: docker0: Link UP Aug 12 23:58:49.435108 dockerd[1704]: time="2025-08-12T23:58:49.435065544Z" level=info msg="Loading containers: done." Aug 12 23:58:49.456177 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1937216953-merged.mount: Deactivated successfully. 
Aug 12 23:58:49.458210 dockerd[1704]: time="2025-08-12T23:58:49.458169014Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 12 23:58:49.458296 dockerd[1704]: time="2025-08-12T23:58:49.458278223Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Aug 12 23:58:49.458440 dockerd[1704]: time="2025-08-12T23:58:49.458418843Z" level=info msg="Daemon has completed initialization" Aug 12 23:58:49.496837 dockerd[1704]: time="2025-08-12T23:58:49.496756860Z" level=info msg="API listen on /run/docker.sock" Aug 12 23:58:49.496976 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 12 23:58:50.428705 containerd[1497]: time="2025-08-12T23:58:50.428658354Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 12 23:58:50.916698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 12 23:58:50.923850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:58:51.046039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4035738436.mount: Deactivated successfully. Aug 12 23:58:51.130056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 12 23:58:51.134727 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:58:51.304172 kubelet[1913]: E0812 23:58:51.304033 1913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:58:51.311280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:58:51.311510 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:58:51.311954 systemd[1]: kubelet.service: Consumed 289ms CPU time, 111.3M memory peak. Aug 12 23:58:52.338848 containerd[1497]: time="2025-08-12T23:58:52.338779082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:52.339573 containerd[1497]: time="2025-08-12T23:58:52.339533406Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 12 23:58:52.340960 containerd[1497]: time="2025-08-12T23:58:52.340920700Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:52.344235 containerd[1497]: time="2025-08-12T23:58:52.344209429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:52.345548 containerd[1497]: time="2025-08-12T23:58:52.345512731Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id 
\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 1.916812665s" Aug 12 23:58:52.345592 containerd[1497]: time="2025-08-12T23:58:52.345550453Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 12 23:58:52.346543 containerd[1497]: time="2025-08-12T23:58:52.346521403Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 12 23:58:53.977981 containerd[1497]: time="2025-08-12T23:58:53.977910932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:53.978742 containerd[1497]: time="2025-08-12T23:58:53.978703157Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 12 23:58:53.980091 containerd[1497]: time="2025-08-12T23:58:53.980026790Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:53.983131 containerd[1497]: time="2025-08-12T23:58:53.983097917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:53.984112 containerd[1497]: time="2025-08-12T23:58:53.984080917Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.637521282s" Aug 12 23:58:53.984112 containerd[1497]: time="2025-08-12T23:58:53.984108197Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 12 23:58:53.984912 containerd[1497]: time="2025-08-12T23:58:53.984886218Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 12 23:58:56.251355 containerd[1497]: time="2025-08-12T23:58:56.251261096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:56.267782 containerd[1497]: time="2025-08-12T23:58:56.267717145Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 12 23:58:56.306516 containerd[1497]: time="2025-08-12T23:58:56.306435263Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:56.407234 containerd[1497]: time="2025-08-12T23:58:56.407165939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:56.409087 containerd[1497]: time="2025-08-12T23:58:56.409019529Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 2.424099972s" Aug 12 23:58:56.409087 
containerd[1497]: time="2025-08-12T23:58:56.409080449Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 12 23:58:56.409713 containerd[1497]: time="2025-08-12T23:58:56.409668850Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 12 23:58:57.907786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3566611967.mount: Deactivated successfully. Aug 12 23:58:58.638173 containerd[1497]: time="2025-08-12T23:58:58.638082131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:58.639111 containerd[1497]: time="2025-08-12T23:58:58.639073182Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 12 23:58:58.640349 containerd[1497]: time="2025-08-12T23:58:58.640317457Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:58.642727 containerd[1497]: time="2025-08-12T23:58:58.642656357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:58.643270 containerd[1497]: time="2025-08-12T23:58:58.643234321Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 2.233523739s" Aug 12 23:58:58.643270 containerd[1497]: time="2025-08-12T23:58:58.643263509Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 12 23:58:58.644121 containerd[1497]: time="2025-08-12T23:58:58.644097008Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 12 23:58:59.190311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582992218.mount: Deactivated successfully. Aug 12 23:58:59.986824 containerd[1497]: time="2025-08-12T23:58:59.986760900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:59.987722 containerd[1497]: time="2025-08-12T23:58:59.987653726Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 12 23:58:59.988781 containerd[1497]: time="2025-08-12T23:58:59.988729849Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:59.991615 containerd[1497]: time="2025-08-12T23:58:59.991585644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:58:59.992844 containerd[1497]: time="2025-08-12T23:58:59.992802500Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.348679166s" Aug 12 23:58:59.992844 containerd[1497]: time="2025-08-12T23:58:59.992837205Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 12 23:58:59.993425 containerd[1497]: time="2025-08-12T23:58:59.993384765Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 12 23:59:00.496743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896282713.mount: Deactivated successfully. Aug 12 23:59:00.507735 containerd[1497]: time="2025-08-12T23:59:00.507658637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:00.508669 containerd[1497]: time="2025-08-12T23:59:00.508601673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 12 23:59:00.509884 containerd[1497]: time="2025-08-12T23:59:00.509829990Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:00.513940 containerd[1497]: time="2025-08-12T23:59:00.513886683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:00.514809 containerd[1497]: time="2025-08-12T23:59:00.514768064Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 521.351606ms" Aug 12 23:59:00.514878 containerd[1497]: time="2025-08-12T23:59:00.514804852Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 12 23:59:00.515459 containerd[1497]: time="2025-08-12T23:59:00.515269406Z" 
level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 12 23:59:01.139557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011299886.mount: Deactivated successfully. Aug 12 23:59:01.416919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 12 23:59:01.427926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:59:01.666899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:59:01.692238 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:59:02.098983 kubelet[2063]: E0812 23:59:02.098811 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:59:02.103775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:59:02.104045 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:59:02.104455 systemd[1]: kubelet.service: Consumed 277ms CPU time, 112.8M memory peak. 
Aug 12 23:59:04.963406 containerd[1497]: time="2025-08-12T23:59:04.961059522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:04.964803 containerd[1497]: time="2025-08-12T23:59:04.964226432Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 12 23:59:04.968178 containerd[1497]: time="2025-08-12T23:59:04.967553350Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:04.973679 containerd[1497]: time="2025-08-12T23:59:04.973569172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:04.976238 containerd[1497]: time="2025-08-12T23:59:04.976161154Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.46086577s" Aug 12 23:59:04.976238 containerd[1497]: time="2025-08-12T23:59:04.976208020Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 12 23:59:07.772308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:59:07.772517 systemd[1]: kubelet.service: Consumed 277ms CPU time, 112.8M memory peak. Aug 12 23:59:07.783893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:59:07.811927 systemd[1]: Reload requested from client PID 2142 ('systemctl') (unit session-7.scope)... 
Aug 12 23:59:07.811948 systemd[1]: Reloading... Aug 12 23:59:07.924705 zram_generator::config[2189]: No configuration found. Aug 12 23:59:08.305715 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:59:08.424328 systemd[1]: Reloading finished in 611 ms. Aug 12 23:59:08.482877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:59:08.488162 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:59:08.491508 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:59:08.497172 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:59:08.497560 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:59:08.497671 systemd[1]: kubelet.service: Consumed 212ms CPU time, 99.4M memory peak. Aug 12 23:59:08.513987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:59:08.714607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:59:08.719106 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:59:08.906620 kubelet[2237]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:59:08.906620 kubelet[2237]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Aug 12 23:59:08.906620 kubelet[2237]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:59:08.907164 kubelet[2237]: I0812 23:59:08.906704 2237 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:59:09.719588 kubelet[2237]: I0812 23:59:09.719526 2237 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 12 23:59:09.719588 kubelet[2237]: I0812 23:59:09.719561 2237 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:59:09.719902 kubelet[2237]: I0812 23:59:09.719877 2237 server.go:954] "Client rotation is on, will bootstrap in background" Aug 12 23:59:09.814363 kubelet[2237]: E0812 23:59:09.814313 2237 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:59:09.814833 kubelet[2237]: I0812 23:59:09.814799 2237 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:59:09.821490 kubelet[2237]: E0812 23:59:09.821461 2237 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:59:09.821490 kubelet[2237]: I0812 23:59:09.821484 2237 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Aug 12 23:59:09.828141 kubelet[2237]: I0812 23:59:09.828111 2237 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 12 23:59:09.829554 kubelet[2237]: I0812 23:59:09.829503 2237 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:59:09.829724 kubelet[2237]: I0812 23:59:09.829543 2237 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVer
sion":2} Aug 12 23:59:09.829724 kubelet[2237]: I0812 23:59:09.829723 2237 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:59:09.829938 kubelet[2237]: I0812 23:59:09.829734 2237 container_manager_linux.go:304] "Creating device plugin manager" Aug 12 23:59:09.829938 kubelet[2237]: I0812 23:59:09.829874 2237 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:59:09.834123 kubelet[2237]: I0812 23:59:09.834089 2237 kubelet.go:446] "Attempting to sync node with API server" Aug 12 23:59:09.834123 kubelet[2237]: I0812 23:59:09.834122 2237 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:59:09.834181 kubelet[2237]: I0812 23:59:09.834146 2237 kubelet.go:352] "Adding apiserver pod source" Aug 12 23:59:09.834181 kubelet[2237]: I0812 23:59:09.834176 2237 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:59:09.839659 kubelet[2237]: W0812 23:59:09.838349 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Aug 12 23:59:09.839659 kubelet[2237]: E0812 23:59:09.838424 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:59:09.839659 kubelet[2237]: I0812 23:59:09.839130 2237 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 12 23:59:09.839659 kubelet[2237]: W0812 23:59:09.839134 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Aug 12 23:59:09.839659 kubelet[2237]: E0812 23:59:09.839184 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:59:09.839825 kubelet[2237]: I0812 23:59:09.839710 2237 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:59:09.840281 kubelet[2237]: W0812 23:59:09.840254 2237 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 12 23:59:09.843496 kubelet[2237]: I0812 23:59:09.843459 2237 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 12 23:59:09.843547 kubelet[2237]: I0812 23:59:09.843517 2237 server.go:1287] "Started kubelet" Aug 12 23:59:09.843828 kubelet[2237]: I0812 23:59:09.843757 2237 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:59:09.844902 kubelet[2237]: I0812 23:59:09.844872 2237 server.go:479] "Adding debug handlers to kubelet server" Aug 12 23:59:09.846407 kubelet[2237]: I0812 23:59:09.846332 2237 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:59:09.846722 kubelet[2237]: I0812 23:59:09.846686 2237 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:59:09.847667 kubelet[2237]: I0812 23:59:09.847568 2237 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:59:09.848769 kubelet[2237]: I0812 23:59:09.848736 2237 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:59:09.849660 kubelet[2237]: E0812 23:59:09.849444 2237 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:59:09.849660 kubelet[2237]: I0812 23:59:09.849488 2237 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 12 23:59:09.849660 kubelet[2237]: I0812 23:59:09.849626 2237 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 12 23:59:09.849798 kubelet[2237]: I0812 23:59:09.849705 2237 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:59:09.850272 kubelet[2237]: W0812 23:59:09.850078 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Aug 12 23:59:09.850272 kubelet[2237]: E0812 23:59:09.850144 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:59:09.850563 kubelet[2237]: I0812 23:59:09.850543 2237 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:59:09.850664 kubelet[2237]: I0812 23:59:09.850646 2237 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:59:09.851792 kubelet[2237]: E0812 23:59:09.851767 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.83:6443: connect: connection refused" interval="200ms" Aug 12 23:59:09.851893 kubelet[2237]: I0812 23:59:09.851877 2237 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:59:09.852073 kubelet[2237]: E0812 23:59:09.852050 2237 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:59:09.855017 kubelet[2237]: E0812 23:59:09.853950 2237 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.83:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.83:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2a7237e7653d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:59:09.843490109 +0000 UTC m=+1.077696224,LastTimestamp:2025-08-12 23:59:09.843490109 +0000 UTC m=+1.077696224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 12 23:59:09.869225 kubelet[2237]: I0812 23:59:09.869172 2237 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:59:09.869508 kubelet[2237]: I0812 23:59:09.869441 2237 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 12 23:59:09.869508 kubelet[2237]: I0812 23:59:09.869462 2237 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 12 23:59:09.869508 kubelet[2237]: I0812 23:59:09.869483 2237 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:59:09.870608 kubelet[2237]: I0812 23:59:09.870585 2237 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 12 23:59:09.870678 kubelet[2237]: I0812 23:59:09.870613 2237 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 12 23:59:09.870678 kubelet[2237]: I0812 23:59:09.870649 2237 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 12 23:59:09.870678 kubelet[2237]: I0812 23:59:09.870661 2237 kubelet.go:2382] "Starting kubelet main sync loop" Aug 12 23:59:09.870762 kubelet[2237]: E0812 23:59:09.870720 2237 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:59:09.873451 kubelet[2237]: W0812 23:59:09.873280 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Aug 12 23:59:09.873451 kubelet[2237]: E0812 23:59:09.873357 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:59:09.873582 kubelet[2237]: I0812 23:59:09.873554 2237 policy_none.go:49] "None policy: Start" Aug 12 23:59:09.873611 kubelet[2237]: I0812 23:59:09.873588 2237 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 12 23:59:09.873611 kubelet[2237]: I0812 23:59:09.873602 2237 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:59:09.882599 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 12 23:59:09.897518 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Aug 12 23:59:09.901067 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 12 23:59:09.915660 kubelet[2237]: I0812 23:59:09.915552 2237 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:59:09.916096 kubelet[2237]: I0812 23:59:09.916071 2237 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:59:09.916232 kubelet[2237]: I0812 23:59:09.916086 2237 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:59:09.916342 kubelet[2237]: I0812 23:59:09.916319 2237 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:59:09.917214 kubelet[2237]: E0812 23:59:09.917181 2237 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 12 23:59:09.917321 kubelet[2237]: E0812 23:59:09.917229 2237 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 12 23:59:09.979678 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice - libcontainer container kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice. Aug 12 23:59:09.989599 kubelet[2237]: E0812 23:59:09.989559 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:59:09.992488 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice - libcontainer container kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice. 
Aug 12 23:59:10.008894 kubelet[2237]: E0812 23:59:10.008857 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:59:10.010714 systemd[1]: Created slice kubepods-burstable-pod61ac8ebd0b06fdeaaf5feb946171be67.slice - libcontainer container kubepods-burstable-pod61ac8ebd0b06fdeaaf5feb946171be67.slice. Aug 12 23:59:10.012771 kubelet[2237]: E0812 23:59:10.012729 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:59:10.017624 kubelet[2237]: I0812 23:59:10.017571 2237 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:59:10.018056 kubelet[2237]: E0812 23:59:10.018021 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Aug 12 23:59:10.052978 kubelet[2237]: E0812 23:59:10.052939 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="400ms" Aug 12 23:59:10.151402 kubelet[2237]: I0812 23:59:10.151368 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:10.151481 kubelet[2237]: I0812 23:59:10.151424 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:10.151481 kubelet[2237]: I0812 23:59:10.151446 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:10.151481 kubelet[2237]: I0812 23:59:10.151462 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:10.151481 kubelet[2237]: I0812 23:59:10.151480 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:10.151580 kubelet[2237]: I0812 23:59:10.151499 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61ac8ebd0b06fdeaaf5feb946171be67-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"61ac8ebd0b06fdeaaf5feb946171be67\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:10.151580 kubelet[2237]: I0812 23:59:10.151517 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61ac8ebd0b06fdeaaf5feb946171be67-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"61ac8ebd0b06fdeaaf5feb946171be67\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:10.151580 kubelet[2237]: I0812 23:59:10.151534 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:59:10.151580 kubelet[2237]: I0812 23:59:10.151551 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61ac8ebd0b06fdeaaf5feb946171be67-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"61ac8ebd0b06fdeaaf5feb946171be67\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:10.220162 kubelet[2237]: I0812 23:59:10.220117 2237 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:59:10.220550 kubelet[2237]: E0812 23:59:10.220512 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Aug 12 23:59:10.291029 kubelet[2237]: E0812 23:59:10.290884 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:10.291736 containerd[1497]: time="2025-08-12T23:59:10.291683889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}" Aug 12 23:59:10.309988 kubelet[2237]: E0812 23:59:10.309959 2237 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:10.310482 containerd[1497]: time="2025-08-12T23:59:10.310453388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}" Aug 12 23:59:10.313851 kubelet[2237]: E0812 23:59:10.313824 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:10.314245 containerd[1497]: time="2025-08-12T23:59:10.314188125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:61ac8ebd0b06fdeaaf5feb946171be67,Namespace:kube-system,Attempt:0,}" Aug 12 23:59:10.453939 kubelet[2237]: E0812 23:59:10.453878 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="800ms" Aug 12 23:59:10.622885 kubelet[2237]: I0812 23:59:10.622752 2237 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:59:10.623280 kubelet[2237]: E0812 23:59:10.623177 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Aug 12 23:59:10.679354 kubelet[2237]: W0812 23:59:10.679280 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Aug 12 23:59:10.679354 kubelet[2237]: E0812 23:59:10.679345 2237 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:59:10.762040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149342024.mount: Deactivated successfully. Aug 12 23:59:10.770618 containerd[1497]: time="2025-08-12T23:59:10.770545179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:59:10.774174 containerd[1497]: time="2025-08-12T23:59:10.774119479Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 12 23:59:10.774988 containerd[1497]: time="2025-08-12T23:59:10.774951217Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:59:10.776223 containerd[1497]: time="2025-08-12T23:59:10.776163166Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:59:10.777328 containerd[1497]: time="2025-08-12T23:59:10.777293002Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:59:10.777807 containerd[1497]: time="2025-08-12T23:59:10.777781066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:59:10.778770 containerd[1497]: time="2025-08-12T23:59:10.778733873Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:59:10.780260 containerd[1497]: time="2025-08-12T23:59:10.780225010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:59:10.783804 containerd[1497]: time="2025-08-12T23:59:10.783775837Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 473.250903ms" Aug 12 23:59:10.785659 containerd[1497]: time="2025-08-12T23:59:10.785619620Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 493.809579ms" Aug 12 23:59:10.790142 containerd[1497]: time="2025-08-12T23:59:10.790108694Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 475.808114ms" Aug 12 23:59:10.857671 kubelet[2237]: W0812 23:59:10.856228 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Aug 12 23:59:10.857671 kubelet[2237]: E0812 
23:59:10.856307 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:59:11.031804 containerd[1497]: time="2025-08-12T23:59:11.031456894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:59:11.031804 containerd[1497]: time="2025-08-12T23:59:11.031512823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:59:11.031804 containerd[1497]: time="2025-08-12T23:59:11.031526268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:11.031804 containerd[1497]: time="2025-08-12T23:59:11.031597447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:11.032068 containerd[1497]: time="2025-08-12T23:59:11.031998816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:59:11.032342 containerd[1497]: time="2025-08-12T23:59:11.032253534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:59:11.032444 containerd[1497]: time="2025-08-12T23:59:11.032309884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:11.032616 containerd[1497]: time="2025-08-12T23:59:11.032574046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:11.033673 kubelet[2237]: W0812 23:59:11.033557 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Aug 12 23:59:11.033673 kubelet[2237]: E0812 23:59:11.033659 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:59:11.035852 containerd[1497]: time="2025-08-12T23:59:11.035241236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:59:11.035852 containerd[1497]: time="2025-08-12T23:59:11.035375894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:59:11.035852 containerd[1497]: time="2025-08-12T23:59:11.035411900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:11.035852 containerd[1497]: time="2025-08-12T23:59:11.035535579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:11.141688 kubelet[2237]: W0812 23:59:11.140961 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Aug 12 23:59:11.141688 kubelet[2237]: E0812 23:59:11.141044 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:59:11.217961 systemd[1]: Started cri-containerd-238ccbb37705b31eb04ba8798207b2658df5a852270a55408564bbb69ad9eac8.scope - libcontainer container 238ccbb37705b31eb04ba8798207b2658df5a852270a55408564bbb69ad9eac8. Aug 12 23:59:11.223517 systemd[1]: Started cri-containerd-7b2c7edac2f6b356888e9f83201d50a5e828b604f72c8374162954249c72a5ad.scope - libcontainer container 7b2c7edac2f6b356888e9f83201d50a5e828b604f72c8374162954249c72a5ad. Aug 12 23:59:11.252802 systemd[1]: Started cri-containerd-e88d4e6b75b7d12f2008002ea2d1192513911e5eff3152167448898bc65dd268.scope - libcontainer container e88d4e6b75b7d12f2008002ea2d1192513911e5eff3152167448898bc65dd268. 
Aug 12 23:59:11.254828 kubelet[2237]: E0812 23:59:11.254786 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="1.6s" Aug 12 23:59:11.281212 containerd[1497]: time="2025-08-12T23:59:11.280869883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"238ccbb37705b31eb04ba8798207b2658df5a852270a55408564bbb69ad9eac8\"" Aug 12 23:59:11.282524 kubelet[2237]: E0812 23:59:11.282440 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:11.284375 containerd[1497]: time="2025-08-12T23:59:11.284328438Z" level=info msg="CreateContainer within sandbox \"238ccbb37705b31eb04ba8798207b2658df5a852270a55408564bbb69ad9eac8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 12 23:59:11.298535 containerd[1497]: time="2025-08-12T23:59:11.298479232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b2c7edac2f6b356888e9f83201d50a5e828b604f72c8374162954249c72a5ad\"" Aug 12 23:59:11.300063 kubelet[2237]: E0812 23:59:11.300031 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:11.301952 containerd[1497]: time="2025-08-12T23:59:11.301920932Z" level=info msg="CreateContainer within sandbox \"7b2c7edac2f6b356888e9f83201d50a5e828b604f72c8374162954249c72a5ad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 12 
23:59:11.305389 containerd[1497]: time="2025-08-12T23:59:11.305261293Z" level=info msg="CreateContainer within sandbox \"238ccbb37705b31eb04ba8798207b2658df5a852270a55408564bbb69ad9eac8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0dae71758f5ef0182e6d195da259dba93a7ef78faa7d9124287246e25ab5caad\"" Aug 12 23:59:11.305853 containerd[1497]: time="2025-08-12T23:59:11.305833666Z" level=info msg="StartContainer for \"0dae71758f5ef0182e6d195da259dba93a7ef78faa7d9124287246e25ab5caad\"" Aug 12 23:59:11.317853 containerd[1497]: time="2025-08-12T23:59:11.317800967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:61ac8ebd0b06fdeaaf5feb946171be67,Namespace:kube-system,Attempt:0,} returns sandbox id \"e88d4e6b75b7d12f2008002ea2d1192513911e5eff3152167448898bc65dd268\"" Aug 12 23:59:11.318575 kubelet[2237]: E0812 23:59:11.318415 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:11.319843 containerd[1497]: time="2025-08-12T23:59:11.319815068Z" level=info msg="CreateContainer within sandbox \"e88d4e6b75b7d12f2008002ea2d1192513911e5eff3152167448898bc65dd268\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 12 23:59:11.323424 containerd[1497]: time="2025-08-12T23:59:11.323395328Z" level=info msg="CreateContainer within sandbox \"7b2c7edac2f6b356888e9f83201d50a5e828b604f72c8374162954249c72a5ad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40cecee427fdef5d7b648e09e8db782a7f4b88c7e820c5ec979896b79c253fbc\"" Aug 12 23:59:11.323808 containerd[1497]: time="2025-08-12T23:59:11.323776893Z" level=info msg="StartContainer for \"40cecee427fdef5d7b648e09e8db782a7f4b88c7e820c5ec979896b79c253fbc\"" Aug 12 23:59:11.336808 systemd[1]: Started 
cri-containerd-0dae71758f5ef0182e6d195da259dba93a7ef78faa7d9124287246e25ab5caad.scope - libcontainer container 0dae71758f5ef0182e6d195da259dba93a7ef78faa7d9124287246e25ab5caad. Aug 12 23:59:11.337840 containerd[1497]: time="2025-08-12T23:59:11.337657547Z" level=info msg="CreateContainer within sandbox \"e88d4e6b75b7d12f2008002ea2d1192513911e5eff3152167448898bc65dd268\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4c989b3edec159e164fcff3fd38ac5e34010152abc33d843b2190d3a8bb9e288\"" Aug 12 23:59:11.338586 containerd[1497]: time="2025-08-12T23:59:11.338032345Z" level=info msg="StartContainer for \"4c989b3edec159e164fcff3fd38ac5e34010152abc33d843b2190d3a8bb9e288\"" Aug 12 23:59:11.366825 systemd[1]: Started cri-containerd-40cecee427fdef5d7b648e09e8db782a7f4b88c7e820c5ec979896b79c253fbc.scope - libcontainer container 40cecee427fdef5d7b648e09e8db782a7f4b88c7e820c5ec979896b79c253fbc. Aug 12 23:59:11.372536 systemd[1]: Started cri-containerd-4c989b3edec159e164fcff3fd38ac5e34010152abc33d843b2190d3a8bb9e288.scope - libcontainer container 4c989b3edec159e164fcff3fd38ac5e34010152abc33d843b2190d3a8bb9e288. 
Aug 12 23:59:11.425679 containerd[1497]: time="2025-08-12T23:59:11.424827566Z" level=info msg="StartContainer for \"0dae71758f5ef0182e6d195da259dba93a7ef78faa7d9124287246e25ab5caad\" returns successfully" Aug 12 23:59:11.446475 kubelet[2237]: I0812 23:59:11.446444 2237 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:59:11.447454 kubelet[2237]: E0812 23:59:11.447423 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Aug 12 23:59:11.481763 containerd[1497]: time="2025-08-12T23:59:11.481718225Z" level=info msg="StartContainer for \"40cecee427fdef5d7b648e09e8db782a7f4b88c7e820c5ec979896b79c253fbc\" returns successfully" Aug 12 23:59:11.494421 containerd[1497]: time="2025-08-12T23:59:11.494362177Z" level=info msg="StartContainer for \"4c989b3edec159e164fcff3fd38ac5e34010152abc33d843b2190d3a8bb9e288\" returns successfully" Aug 12 23:59:11.888679 kubelet[2237]: E0812 23:59:11.888294 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:59:11.888679 kubelet[2237]: E0812 23:59:11.888449 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:11.896423 kubelet[2237]: E0812 23:59:11.896405 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:59:11.898516 kubelet[2237]: E0812 23:59:11.898311 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:11.899119 kubelet[2237]: E0812 23:59:11.898769 2237 kubelet.go:3190] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:59:11.899119 kubelet[2237]: E0812 23:59:11.899025 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:12.899678 kubelet[2237]: E0812 23:59:12.899625 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:59:12.901097 kubelet[2237]: E0812 23:59:12.900881 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:12.901097 kubelet[2237]: E0812 23:59:12.900420 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:59:12.901097 kubelet[2237]: E0812 23:59:12.901064 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:13.009094 kubelet[2237]: E0812 23:59:13.009015 2237 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 12 23:59:13.049160 kubelet[2237]: I0812 23:59:13.049122 2237 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:59:13.181180 kubelet[2237]: I0812 23:59:13.181029 2237 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 12 23:59:13.251781 kubelet[2237]: I0812 23:59:13.251713 2237 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:13.257755 kubelet[2237]: E0812 23:59:13.257696 2237 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:13.257755 kubelet[2237]: I0812 23:59:13.257731 2237 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 12 23:59:13.259867 kubelet[2237]: E0812 23:59:13.259814 2237 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 12 23:59:13.259867 kubelet[2237]: I0812 23:59:13.259860 2237 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:13.261665 kubelet[2237]: E0812 23:59:13.261601 2237 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:13.813846 kubelet[2237]: I0812 23:59:13.813801 2237 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:13.815700 kubelet[2237]: E0812 23:59:13.815669 2237 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:13.815913 kubelet[2237]: E0812 23:59:13.815884 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:13.873534 kubelet[2237]: I0812 23:59:13.873476 2237 apiserver.go:52] "Watching apiserver" Aug 12 23:59:13.950396 kubelet[2237]: I0812 23:59:13.950187 2237 desired_state_of_world_populator.go:158] "Finished 
populating initial desired state of world" Aug 12 23:59:17.057757 kubelet[2237]: I0812 23:59:17.057713 2237 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:17.175203 kubelet[2237]: E0812 23:59:17.175156 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:17.919572 kubelet[2237]: E0812 23:59:17.919535 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:19.669454 systemd[1]: Reload requested from client PID 2518 ('systemctl') (unit session-7.scope)... Aug 12 23:59:19.669473 systemd[1]: Reloading... Aug 12 23:59:19.766673 zram_generator::config[2568]: No configuration found. Aug 12 23:59:19.880042 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:59:20.003171 systemd[1]: Reloading finished in 333 ms. Aug 12 23:59:20.027759 kubelet[2237]: I0812 23:59:20.027665 2237 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:59:20.027770 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:59:20.053556 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:59:20.054014 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:59:20.054105 systemd[1]: kubelet.service: Consumed 1.611s CPU time, 137.9M memory peak. Aug 12 23:59:20.065946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:59:20.266358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 12 23:59:20.283305 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:59:20.324655 kubelet[2609]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:59:20.324655 kubelet[2609]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 12 23:59:20.324655 kubelet[2609]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:59:20.325165 kubelet[2609]: I0812 23:59:20.324738 2609 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:59:20.334084 kubelet[2609]: I0812 23:59:20.334042 2609 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 12 23:59:20.334084 kubelet[2609]: I0812 23:59:20.334073 2609 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:59:20.334346 kubelet[2609]: I0812 23:59:20.334320 2609 server.go:954] "Client rotation is on, will bootstrap in background" Aug 12 23:59:20.335740 kubelet[2609]: I0812 23:59:20.335717 2609 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 12 23:59:20.339451 kubelet[2609]: I0812 23:59:20.339411 2609 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:59:20.342678 kubelet[2609]: E0812 23:59:20.342608 2609 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:59:20.342759 kubelet[2609]: I0812 23:59:20.342681 2609 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 12 23:59:20.349553 kubelet[2609]: I0812 23:59:20.349429 2609 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 12 23:59:20.352036 kubelet[2609]: I0812 23:59:20.349892 2609 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:59:20.352036 kubelet[2609]: I0812 23:59:20.349949 2609 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 12 23:59:20.352036 kubelet[2609]: I0812 23:59:20.350247 2609 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:59:20.352036 kubelet[2609]: I0812 23:59:20.350257 2609 container_manager_linux.go:304] "Creating device plugin manager" Aug 12 23:59:20.352230 kubelet[2609]: I0812 23:59:20.350317 2609 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:59:20.352230 kubelet[2609]: I0812 23:59:20.350500 2609 kubelet.go:446] "Attempting 
to sync node with API server" Aug 12 23:59:20.352230 kubelet[2609]: I0812 23:59:20.350535 2609 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:59:20.352230 kubelet[2609]: I0812 23:59:20.350561 2609 kubelet.go:352] "Adding apiserver pod source" Aug 12 23:59:20.352230 kubelet[2609]: I0812 23:59:20.350574 2609 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:59:20.352909 kubelet[2609]: I0812 23:59:20.352885 2609 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 12 23:59:20.353485 kubelet[2609]: I0812 23:59:20.353439 2609 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:59:20.354145 kubelet[2609]: I0812 23:59:20.353907 2609 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 12 23:59:20.354145 kubelet[2609]: I0812 23:59:20.353943 2609 server.go:1287] "Started kubelet" Aug 12 23:59:20.354648 kubelet[2609]: I0812 23:59:20.354565 2609 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:59:20.356862 kubelet[2609]: I0812 23:59:20.356832 2609 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:59:20.362657 kubelet[2609]: I0812 23:59:20.360374 2609 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:59:20.362657 kubelet[2609]: I0812 23:59:20.356845 2609 server.go:479] "Adding debug handlers to kubelet server" Aug 12 23:59:20.362943 kubelet[2609]: I0812 23:59:20.362912 2609 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 12 23:59:20.363293 kubelet[2609]: E0812 23:59:20.363236 2609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:59:20.364096 kubelet[2609]: I0812 23:59:20.363919 2609 desired_state_of_world_populator.go:150] "Desired state 
populator starts to run" Aug 12 23:59:20.364185 kubelet[2609]: I0812 23:59:20.354613 2609 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:59:20.364448 kubelet[2609]: I0812 23:59:20.364405 2609 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:59:20.364618 kubelet[2609]: I0812 23:59:20.364567 2609 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:59:20.368753 kubelet[2609]: E0812 23:59:20.368722 2609 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:59:20.370969 kubelet[2609]: I0812 23:59:20.370927 2609 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:59:20.370969 kubelet[2609]: I0812 23:59:20.370955 2609 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:59:20.371486 kubelet[2609]: I0812 23:59:20.371456 2609 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:59:20.377554 kubelet[2609]: I0812 23:59:20.377506 2609 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:59:20.380906 kubelet[2609]: I0812 23:59:20.380847 2609 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 12 23:59:20.380906 kubelet[2609]: I0812 23:59:20.380894 2609 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 12 23:59:20.380906 kubelet[2609]: I0812 23:59:20.380916 2609 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 12 23:59:20.381160 kubelet[2609]: I0812 23:59:20.380924 2609 kubelet.go:2382] "Starting kubelet main sync loop" Aug 12 23:59:20.381160 kubelet[2609]: E0812 23:59:20.380982 2609 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:59:20.412455 kubelet[2609]: I0812 23:59:20.412422 2609 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 12 23:59:20.412455 kubelet[2609]: I0812 23:59:20.412441 2609 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 12 23:59:20.412455 kubelet[2609]: I0812 23:59:20.412463 2609 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:59:20.412703 kubelet[2609]: I0812 23:59:20.412613 2609 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 12 23:59:20.412703 kubelet[2609]: I0812 23:59:20.412623 2609 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 12 23:59:20.412703 kubelet[2609]: I0812 23:59:20.412660 2609 policy_none.go:49] "None policy: Start" Aug 12 23:59:20.412703 kubelet[2609]: I0812 23:59:20.412671 2609 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 12 23:59:20.412703 kubelet[2609]: I0812 23:59:20.412684 2609 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:59:20.412822 kubelet[2609]: I0812 23:59:20.412794 2609 state_mem.go:75] "Updated machine memory state" Aug 12 23:59:20.417120 kubelet[2609]: I0812 23:59:20.416816 2609 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:59:20.417120 kubelet[2609]: I0812 23:59:20.416993 2609 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:59:20.417120 kubelet[2609]: I0812 23:59:20.417004 2609 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:59:20.417274 kubelet[2609]: I0812 23:59:20.417196 2609 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:59:20.418132 kubelet[2609]: E0812 23:59:20.418105 2609 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 12 23:59:20.482490 kubelet[2609]: I0812 23:59:20.482433 2609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:20.482490 kubelet[2609]: I0812 23:59:20.482477 2609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 12 23:59:20.482785 kubelet[2609]: I0812 23:59:20.482554 2609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:20.488714 kubelet[2609]: E0812 23:59:20.488683 2609 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:20.523349 kubelet[2609]: I0812 23:59:20.523228 2609 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:59:20.530858 kubelet[2609]: I0812 23:59:20.530822 2609 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 12 23:59:20.531020 kubelet[2609]: I0812 23:59:20.530930 2609 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 12 23:59:20.564853 kubelet[2609]: I0812 23:59:20.564800 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61ac8ebd0b06fdeaaf5feb946171be67-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"61ac8ebd0b06fdeaaf5feb946171be67\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:20.564853 kubelet[2609]: I0812 23:59:20.564852 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:20.565057 kubelet[2609]: I0812 23:59:20.564879 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:20.565057 kubelet[2609]: I0812 23:59:20.564898 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:59:20.565057 kubelet[2609]: I0812 23:59:20.564912 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61ac8ebd0b06fdeaaf5feb946171be67-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"61ac8ebd0b06fdeaaf5feb946171be67\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:20.565057 kubelet[2609]: I0812 23:59:20.564927 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61ac8ebd0b06fdeaaf5feb946171be67-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"61ac8ebd0b06fdeaaf5feb946171be67\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:20.565057 kubelet[2609]: I0812 23:59:20.564943 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:20.565174 kubelet[2609]: I0812 23:59:20.564958 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:20.565174 kubelet[2609]: I0812 23:59:20.564975 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:59:20.667245 sudo[2644]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 12 23:59:20.667656 sudo[2644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 12 23:59:20.788485 kubelet[2609]: E0812 23:59:20.788319 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:20.788485 kubelet[2609]: E0812 23:59:20.788413 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:20.789625 kubelet[2609]: E0812 23:59:20.789582 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 
12 23:59:21.148478 sudo[2644]: pam_unix(sudo:session): session closed for user root Aug 12 23:59:21.351761 kubelet[2609]: I0812 23:59:21.351682 2609 apiserver.go:52] "Watching apiserver" Aug 12 23:59:21.364066 kubelet[2609]: I0812 23:59:21.364019 2609 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 12 23:59:21.395367 kubelet[2609]: I0812 23:59:21.395319 2609 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:21.395889 kubelet[2609]: E0812 23:59:21.395848 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:21.397345 kubelet[2609]: E0812 23:59:21.396596 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:21.414198 kubelet[2609]: E0812 23:59:21.413816 2609 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 12 23:59:21.414198 kubelet[2609]: E0812 23:59:21.414022 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:21.423438 kubelet[2609]: I0812 23:59:21.423370 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.423346695 podStartE2EDuration="1.423346695s" podCreationTimestamp="2025-08-12 23:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:59:21.423137326 +0000 UTC m=+1.134415793" watchObservedRunningTime="2025-08-12 23:59:21.423346695 +0000 UTC m=+1.134625162" Aug 12 
23:59:22.400564 kubelet[2609]: E0812 23:59:22.396947 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:22.400564 kubelet[2609]: E0812 23:59:22.397539 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:22.647139 update_engine[1483]: I20250812 23:59:22.646903 1483 update_attempter.cc:509] Updating boot flags... Aug 12 23:59:22.710851 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (2669) Aug 12 23:59:22.779836 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 45 scanned by (udev-worker) (2667) Aug 12 23:59:23.240993 sudo[1685]: pam_unix(sudo:session): session closed for user root Aug 12 23:59:23.243093 sshd[1684]: Connection closed by 10.0.0.1 port 56290 Aug 12 23:59:23.243730 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:23.248328 systemd[1]: sshd@6-10.0.0.83:22-10.0.0.1:56290.service: Deactivated successfully. Aug 12 23:59:23.251240 systemd[1]: session-7.scope: Deactivated successfully. Aug 12 23:59:23.251553 systemd[1]: session-7.scope: Consumed 5.905s CPU time, 252.9M memory peak. Aug 12 23:59:23.253168 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit. Aug 12 23:59:23.254354 systemd-logind[1480]: Removed session 7. 
Aug 12 23:59:23.398400 kubelet[2609]: E0812 23:59:23.398358 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:25.181783 kubelet[2609]: I0812 23:59:25.181742 2609 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 12 23:59:25.182271 containerd[1497]: time="2025-08-12T23:59:25.182175317Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 12 23:59:25.182549 kubelet[2609]: I0812 23:59:25.182411 2609 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 12 23:59:25.822847 kubelet[2609]: I0812 23:59:25.822761 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.822738077 podStartE2EDuration="5.822738077s" podCreationTimestamp="2025-08-12 23:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:59:21.431380072 +0000 UTC m=+1.142658539" watchObservedRunningTime="2025-08-12 23:59:25.822738077 +0000 UTC m=+5.534016544" Aug 12 23:59:25.837717 systemd[1]: Created slice kubepods-besteffort-podce3860eb_c356_4497_b6e4_1b1634130b84.slice - libcontainer container kubepods-besteffort-podce3860eb_c356_4497_b6e4_1b1634130b84.slice. Aug 12 23:59:25.854471 systemd[1]: Created slice kubepods-burstable-podc12043ea_7643_4d34_b998_1e17da5d923e.slice - libcontainer container kubepods-burstable-podc12043ea_7643_4d34_b998_1e17da5d923e.slice. 
Aug 12 23:59:25.894198 kubelet[2609]: I0812 23:59:25.894137 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvxnl\" (UniqueName: \"kubernetes.io/projected/c12043ea-7643-4d34-b998-1e17da5d923e-kube-api-access-gvxnl\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894198 kubelet[2609]: I0812 23:59:25.894194 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c12043ea-7643-4d34-b998-1e17da5d923e-clustermesh-secrets\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894198 kubelet[2609]: I0812 23:59:25.894214 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c12043ea-7643-4d34-b998-1e17da5d923e-hubble-tls\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894480 kubelet[2609]: I0812 23:59:25.894267 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce3860eb-c356-4497-b6e4-1b1634130b84-xtables-lock\") pod \"kube-proxy-pfrt2\" (UID: \"ce3860eb-c356-4497-b6e4-1b1634130b84\") " pod="kube-system/kube-proxy-pfrt2" Aug 12 23:59:25.894480 kubelet[2609]: I0812 23:59:25.894313 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce3860eb-c356-4497-b6e4-1b1634130b84-kube-proxy\") pod \"kube-proxy-pfrt2\" (UID: \"ce3860eb-c356-4497-b6e4-1b1634130b84\") " pod="kube-system/kube-proxy-pfrt2" Aug 12 23:59:25.894480 kubelet[2609]: I0812 23:59:25.894345 2609 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-cilium-cgroup\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894480 kubelet[2609]: I0812 23:59:25.894362 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-xtables-lock\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894480 kubelet[2609]: I0812 23:59:25.894380 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-host-proc-sys-net\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894480 kubelet[2609]: I0812 23:59:25.894403 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c12043ea-7643-4d34-b998-1e17da5d923e-cilium-config-path\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894753 kubelet[2609]: I0812 23:59:25.894445 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-cilium-run\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894753 kubelet[2609]: I0812 23:59:25.894503 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4db49\" (UniqueName: 
\"kubernetes.io/projected/ce3860eb-c356-4497-b6e4-1b1634130b84-kube-api-access-4db49\") pod \"kube-proxy-pfrt2\" (UID: \"ce3860eb-c356-4497-b6e4-1b1634130b84\") " pod="kube-system/kube-proxy-pfrt2" Aug 12 23:59:25.894753 kubelet[2609]: I0812 23:59:25.894546 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-bpf-maps\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894753 kubelet[2609]: I0812 23:59:25.894566 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-hostproc\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894753 kubelet[2609]: I0812 23:59:25.894581 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce3860eb-c356-4497-b6e4-1b1634130b84-lib-modules\") pod \"kube-proxy-pfrt2\" (UID: \"ce3860eb-c356-4497-b6e4-1b1634130b84\") " pod="kube-system/kube-proxy-pfrt2" Aug 12 23:59:25.894753 kubelet[2609]: I0812 23:59:25.894602 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-lib-modules\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894951 kubelet[2609]: I0812 23:59:25.894618 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-host-proc-sys-kernel\") pod \"cilium-bxnxf\" (UID: 
\"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894951 kubelet[2609]: I0812 23:59:25.894735 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-cni-path\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:25.894951 kubelet[2609]: I0812 23:59:25.894790 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-etc-cni-netd\") pod \"cilium-bxnxf\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " pod="kube-system/cilium-bxnxf" Aug 12 23:59:26.005201 kubelet[2609]: E0812 23:59:26.004890 2609 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 12 23:59:26.005201 kubelet[2609]: E0812 23:59:26.004924 2609 projected.go:194] Error preparing data for projected volume kube-api-access-gvxnl for pod kube-system/cilium-bxnxf: configmap "kube-root-ca.crt" not found Aug 12 23:59:26.005201 kubelet[2609]: E0812 23:59:26.004981 2609 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c12043ea-7643-4d34-b998-1e17da5d923e-kube-api-access-gvxnl podName:c12043ea-7643-4d34-b998-1e17da5d923e nodeName:}" failed. No retries permitted until 2025-08-12 23:59:26.50495798 +0000 UTC m=+6.216236447 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gvxnl" (UniqueName: "kubernetes.io/projected/c12043ea-7643-4d34-b998-1e17da5d923e-kube-api-access-gvxnl") pod "cilium-bxnxf" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e") : configmap "kube-root-ca.crt" not found Aug 12 23:59:26.009889 kubelet[2609]: E0812 23:59:26.009826 2609 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 12 23:59:26.009889 kubelet[2609]: E0812 23:59:26.009853 2609 projected.go:194] Error preparing data for projected volume kube-api-access-4db49 for pod kube-system/kube-proxy-pfrt2: configmap "kube-root-ca.crt" not found Aug 12 23:59:26.009889 kubelet[2609]: E0812 23:59:26.009892 2609 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce3860eb-c356-4497-b6e4-1b1634130b84-kube-api-access-4db49 podName:ce3860eb-c356-4497-b6e4-1b1634130b84 nodeName:}" failed. No retries permitted until 2025-08-12 23:59:26.509877574 +0000 UTC m=+6.221156041 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4db49" (UniqueName: "kubernetes.io/projected/ce3860eb-c356-4497-b6e4-1b1634130b84-kube-api-access-4db49") pod "kube-proxy-pfrt2" (UID: "ce3860eb-c356-4497-b6e4-1b1634130b84") : configmap "kube-root-ca.crt" not found Aug 12 23:59:26.282194 systemd[1]: Created slice kubepods-besteffort-pod5a06979d_de8f_47c3_b87c_623c4a4b4952.slice - libcontainer container kubepods-besteffort-pod5a06979d_de8f_47c3_b87c_623c4a4b4952.slice. 
Aug 12 23:59:26.300758 kubelet[2609]: I0812 23:59:26.300658 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddsmv\" (UniqueName: \"kubernetes.io/projected/5a06979d-de8f-47c3-b87c-623c4a4b4952-kube-api-access-ddsmv\") pod \"cilium-operator-6c4d7847fc-rkk88\" (UID: \"5a06979d-de8f-47c3-b87c-623c4a4b4952\") " pod="kube-system/cilium-operator-6c4d7847fc-rkk88" Aug 12 23:59:26.300758 kubelet[2609]: I0812 23:59:26.300747 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a06979d-de8f-47c3-b87c-623c4a4b4952-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rkk88\" (UID: \"5a06979d-de8f-47c3-b87c-623c4a4b4952\") " pod="kube-system/cilium-operator-6c4d7847fc-rkk88" Aug 12 23:59:26.587121 kubelet[2609]: E0812 23:59:26.586916 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:26.587867 containerd[1497]: time="2025-08-12T23:59:26.587756454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rkk88,Uid:5a06979d-de8f-47c3-b87c-623c4a4b4952,Namespace:kube-system,Attempt:0,}" Aug 12 23:59:26.749420 kubelet[2609]: E0812 23:59:26.749364 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:26.749842 containerd[1497]: time="2025-08-12T23:59:26.749797057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pfrt2,Uid:ce3860eb-c356-4497-b6e4-1b1634130b84,Namespace:kube-system,Attempt:0,}" Aug 12 23:59:26.759396 kubelet[2609]: E0812 23:59:26.759326 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:26.760121 containerd[1497]: time="2025-08-12T23:59:26.760062932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxnxf,Uid:c12043ea-7643-4d34-b998-1e17da5d923e,Namespace:kube-system,Attempt:0,}" Aug 12 23:59:26.828982 containerd[1497]: time="2025-08-12T23:59:26.828835941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:59:26.828982 containerd[1497]: time="2025-08-12T23:59:26.828930015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:59:26.828982 containerd[1497]: time="2025-08-12T23:59:26.828948576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:26.829199 containerd[1497]: time="2025-08-12T23:59:26.829061432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:26.835769 containerd[1497]: time="2025-08-12T23:59:26.835344829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:59:26.835769 containerd[1497]: time="2025-08-12T23:59:26.835421105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:59:26.835769 containerd[1497]: time="2025-08-12T23:59:26.835437661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:26.835769 containerd[1497]: time="2025-08-12T23:59:26.835518236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:26.839345 containerd[1497]: time="2025-08-12T23:59:26.839183674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:59:26.839345 containerd[1497]: time="2025-08-12T23:59:26.839249577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:59:26.839345 containerd[1497]: time="2025-08-12T23:59:26.839269480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:26.840051 containerd[1497]: time="2025-08-12T23:59:26.839463612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:26.858019 systemd[1]: Started cri-containerd-4913b189b156a000be9943344bea81c6ef91e83b74a213957608a8cfeabbc3eb.scope - libcontainer container 4913b189b156a000be9943344bea81c6ef91e83b74a213957608a8cfeabbc3eb. Aug 12 23:59:26.862201 systemd[1]: Started cri-containerd-887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c.scope - libcontainer container 887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c. Aug 12 23:59:26.868141 systemd[1]: Started cri-containerd-9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82.scope - libcontainer container 9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82. 
Aug 12 23:59:26.896163 containerd[1497]: time="2025-08-12T23:59:26.896109382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pfrt2,Uid:ce3860eb-c356-4497-b6e4-1b1634130b84,Namespace:kube-system,Attempt:0,} returns sandbox id \"4913b189b156a000be9943344bea81c6ef91e83b74a213957608a8cfeabbc3eb\"" Aug 12 23:59:26.897245 kubelet[2609]: E0812 23:59:26.897207 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:26.909356 containerd[1497]: time="2025-08-12T23:59:26.907715009Z" level=info msg="CreateContainer within sandbox \"4913b189b156a000be9943344bea81c6ef91e83b74a213957608a8cfeabbc3eb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 12 23:59:26.915355 containerd[1497]: time="2025-08-12T23:59:26.915294474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxnxf,Uid:c12043ea-7643-4d34-b998-1e17da5d923e,Namespace:kube-system,Attempt:0,} returns sandbox id \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\"" Aug 12 23:59:26.916289 kubelet[2609]: E0812 23:59:26.916255 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:26.917871 containerd[1497]: time="2025-08-12T23:59:26.917723454Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 12 23:59:26.933219 containerd[1497]: time="2025-08-12T23:59:26.933185142Z" level=info msg="CreateContainer within sandbox \"4913b189b156a000be9943344bea81c6ef91e83b74a213957608a8cfeabbc3eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"efa8950a6c8729d56ee83bdbace724e6c72a474b9c93f518c83bd2a643f6e2f2\"" Aug 12 23:59:26.935898 containerd[1497]: time="2025-08-12T23:59:26.935834541Z" 
level=info msg="StartContainer for \"efa8950a6c8729d56ee83bdbace724e6c72a474b9c93f518c83bd2a643f6e2f2\"" Aug 12 23:59:26.943697 containerd[1497]: time="2025-08-12T23:59:26.943260954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rkk88,Uid:5a06979d-de8f-47c3-b87c-623c4a4b4952,Namespace:kube-system,Attempt:0,} returns sandbox id \"9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82\"" Aug 12 23:59:26.944313 kubelet[2609]: E0812 23:59:26.944291 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:26.974790 systemd[1]: Started cri-containerd-efa8950a6c8729d56ee83bdbace724e6c72a474b9c93f518c83bd2a643f6e2f2.scope - libcontainer container efa8950a6c8729d56ee83bdbace724e6c72a474b9c93f518c83bd2a643f6e2f2. Aug 12 23:59:27.017957 containerd[1497]: time="2025-08-12T23:59:27.017868986Z" level=info msg="StartContainer for \"efa8950a6c8729d56ee83bdbace724e6c72a474b9c93f518c83bd2a643f6e2f2\" returns successfully" Aug 12 23:59:27.407255 kubelet[2609]: E0812 23:59:27.407213 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:27.417122 kubelet[2609]: I0812 23:59:27.417052 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pfrt2" podStartSLOduration=2.417032087 podStartE2EDuration="2.417032087s" podCreationTimestamp="2025-08-12 23:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:59:27.41601704 +0000 UTC m=+7.127295507" watchObservedRunningTime="2025-08-12 23:59:27.417032087 +0000 UTC m=+7.128310554" Aug 12 23:59:28.201677 kubelet[2609]: E0812 23:59:28.201568 2609 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:28.411552 kubelet[2609]: E0812 23:59:28.411499 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:29.414303 kubelet[2609]: E0812 23:59:29.414259 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:29.496255 kubelet[2609]: E0812 23:59:29.496164 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:29.682146 kubelet[2609]: E0812 23:59:29.682067 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:30.415657 kubelet[2609]: E0812 23:59:30.415600 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:30.416079 kubelet[2609]: E0812 23:59:30.415672 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:31.416875 kubelet[2609]: E0812 23:59:31.416825 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:42.868535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109625697.mount: Deactivated successfully. 
Aug 12 23:59:45.623600 containerd[1497]: time="2025-08-12T23:59:45.623512208Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:45.624392 containerd[1497]: time="2025-08-12T23:59:45.624218827Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 12 23:59:45.625462 containerd[1497]: time="2025-08-12T23:59:45.625417632Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:45.627291 containerd[1497]: time="2025-08-12T23:59:45.627123113Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 18.709368914s" Aug 12 23:59:45.627291 containerd[1497]: time="2025-08-12T23:59:45.627278678Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 12 23:59:45.637569 containerd[1497]: time="2025-08-12T23:59:45.637517514Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 12 23:59:45.643333 containerd[1497]: time="2025-08-12T23:59:45.643275223Z" level=info msg="CreateContainer within sandbox \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 12 23:59:45.657603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount207807618.mount: Deactivated successfully. Aug 12 23:59:45.661177 containerd[1497]: time="2025-08-12T23:59:45.661139586Z" level=info msg="CreateContainer within sandbox \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a\"" Aug 12 23:59:45.662442 containerd[1497]: time="2025-08-12T23:59:45.661904965Z" level=info msg="StartContainer for \"3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a\"" Aug 12 23:59:45.702873 systemd[1]: Started cri-containerd-3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a.scope - libcontainer container 3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a. Aug 12 23:59:45.735962 containerd[1497]: time="2025-08-12T23:59:45.735889393Z" level=info msg="StartContainer for \"3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a\" returns successfully" Aug 12 23:59:45.752369 systemd[1]: cri-containerd-3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a.scope: Deactivated successfully. Aug 12 23:59:46.654778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a-rootfs.mount: Deactivated successfully. 
Aug 12 23:59:46.673743 kubelet[2609]: E0812 23:59:46.673691 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:46.683220 containerd[1497]: time="2025-08-12T23:59:46.683136670Z" level=info msg="shim disconnected" id=3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a namespace=k8s.io Aug 12 23:59:46.683220 containerd[1497]: time="2025-08-12T23:59:46.683211400Z" level=warning msg="cleaning up after shim disconnected" id=3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a namespace=k8s.io Aug 12 23:59:46.683220 containerd[1497]: time="2025-08-12T23:59:46.683221781Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:59:47.000721 systemd[1]: Started sshd@7-10.0.0.83:22-10.0.0.1:34610.service - OpenSSH per-connection server daemon (10.0.0.1:34610). Aug 12 23:59:47.051806 sshd[3085]: Accepted publickey for core from 10.0.0.1 port 34610 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:59:47.053979 sshd-session[3085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:59:47.060372 systemd-logind[1480]: New session 8 of user core. Aug 12 23:59:47.068943 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 12 23:59:47.210535 sshd[3087]: Connection closed by 10.0.0.1 port 34610 Aug 12 23:59:47.211016 sshd-session[3085]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:47.216994 systemd[1]: sshd@7-10.0.0.83:22-10.0.0.1:34610.service: Deactivated successfully. Aug 12 23:59:47.220284 systemd[1]: session-8.scope: Deactivated successfully. Aug 12 23:59:47.221788 systemd-logind[1480]: Session 8 logged out. Waiting for processes to exit. Aug 12 23:59:47.222998 systemd-logind[1480]: Removed session 8. 
Aug 12 23:59:47.567350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897965404.mount: Deactivated successfully. Aug 12 23:59:47.673591 kubelet[2609]: E0812 23:59:47.673546 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:47.676018 containerd[1497]: time="2025-08-12T23:59:47.675904424Z" level=info msg="CreateContainer within sandbox \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 12 23:59:47.698280 containerd[1497]: time="2025-08-12T23:59:47.698214725Z" level=info msg="CreateContainer within sandbox \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27\"" Aug 12 23:59:47.699539 containerd[1497]: time="2025-08-12T23:59:47.699290222Z" level=info msg="StartContainer for \"a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27\"" Aug 12 23:59:47.739161 systemd[1]: Started cri-containerd-a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27.scope - libcontainer container a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27. Aug 12 23:59:47.841628 containerd[1497]: time="2025-08-12T23:59:47.841494617Z" level=info msg="StartContainer for \"a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27\" returns successfully" Aug 12 23:59:47.857726 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:59:47.858627 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:59:47.858941 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:59:47.868108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Aug 12 23:59:47.872777 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 12 23:59:47.873442 systemd[1]: cri-containerd-a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27.scope: Deactivated successfully. Aug 12 23:59:47.873918 systemd[1]: cri-containerd-a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27.scope: Consumed 29ms CPU time, 5.6M memory peak, 16K read from disk, 2.2M written to disk. Aug 12 23:59:47.892807 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:59:47.995999 containerd[1497]: time="2025-08-12T23:59:47.995902173Z" level=info msg="shim disconnected" id=a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27 namespace=k8s.io Aug 12 23:59:47.995999 containerd[1497]: time="2025-08-12T23:59:47.995986153Z" level=warning msg="cleaning up after shim disconnected" id=a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27 namespace=k8s.io Aug 12 23:59:47.995999 containerd[1497]: time="2025-08-12T23:59:47.995994760Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:59:48.024447 containerd[1497]: time="2025-08-12T23:59:48.024391996Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:48.025259 containerd[1497]: time="2025-08-12T23:59:48.025210663Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 12 23:59:48.026400 containerd[1497]: time="2025-08-12T23:59:48.026360897Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:59:48.028128 containerd[1497]: time="2025-08-12T23:59:48.028084143Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.390519675s"
Aug 12 23:59:48.028128 containerd[1497]: time="2025-08-12T23:59:48.028119283Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 12 23:59:48.032863 containerd[1497]: time="2025-08-12T23:59:48.032821621Z" level=info msg="CreateContainer within sandbox \"9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 12 23:59:48.050218 containerd[1497]: time="2025-08-12T23:59:48.050151219Z" level=info msg="CreateContainer within sandbox \"9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\"" Aug 12 23:59:48.050741 containerd[1497]: time="2025-08-12T23:59:48.050717628Z" level=info msg="StartContainer for \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\"" Aug 12 23:59:48.080822 systemd[1]: Started cri-containerd-3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a.scope - libcontainer container 3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a.
Aug 12 23:59:48.149965 containerd[1497]: time="2025-08-12T23:59:48.149334788Z" level=info msg="StartContainer for \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\" returns successfully" Aug 12 23:59:48.678670 kubelet[2609]: E0812 23:59:48.676103 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:48.678670 kubelet[2609]: E0812 23:59:48.677927 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:48.679287 containerd[1497]: time="2025-08-12T23:59:48.679253384Z" level=info msg="CreateContainer within sandbox \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 12 23:59:48.690473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27-rootfs.mount: Deactivated successfully. Aug 12 23:59:48.707748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172557647.mount: Deactivated successfully. Aug 12 23:59:48.715195 containerd[1497]: time="2025-08-12T23:59:48.715131099Z" level=info msg="CreateContainer within sandbox \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1\"" Aug 12 23:59:48.715731 containerd[1497]: time="2025-08-12T23:59:48.715704723Z" level=info msg="StartContainer for \"b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1\"" Aug 12 23:59:48.769831 systemd[1]: Started cri-containerd-b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1.scope - libcontainer container b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1. 
Aug 12 23:59:48.780893 kubelet[2609]: I0812 23:59:48.780807 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rkk88" podStartSLOduration=1.6993340319999999 podStartE2EDuration="22.780771549s" podCreationTimestamp="2025-08-12 23:59:26 +0000 UTC" firstStartedPulling="2025-08-12 23:59:26.947526416 +0000 UTC m=+6.658804883" lastFinishedPulling="2025-08-12 23:59:48.028963933 +0000 UTC m=+27.740242400" observedRunningTime="2025-08-12 23:59:48.778947671 +0000 UTC m=+28.490226138" watchObservedRunningTime="2025-08-12 23:59:48.780771549 +0000 UTC m=+28.492050016" Aug 12 23:59:48.833931 containerd[1497]: time="2025-08-12T23:59:48.833872517Z" level=info msg="StartContainer for \"b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1\" returns successfully" Aug 12 23:59:48.845101 systemd[1]: cri-containerd-b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1.scope: Deactivated successfully. Aug 12 23:59:48.876741 containerd[1497]: time="2025-08-12T23:59:48.876663377Z" level=info msg="shim disconnected" id=b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1 namespace=k8s.io Aug 12 23:59:48.876741 containerd[1497]: time="2025-08-12T23:59:48.876735341Z" level=warning msg="cleaning up after shim disconnected" id=b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1 namespace=k8s.io Aug 12 23:59:48.876741 containerd[1497]: time="2025-08-12T23:59:48.876748658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:59:49.681823 kubelet[2609]: E0812 23:59:49.681555 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:49.681823 kubelet[2609]: E0812 23:59:49.681603 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:59:49.685262 containerd[1497]: time="2025-08-12T23:59:49.684833635Z" level=info msg="CreateContainer within sandbox \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 12 23:59:49.689885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1-rootfs.mount: Deactivated successfully. Aug 12 23:59:49.713063 containerd[1497]: time="2025-08-12T23:59:49.712992841Z" level=info msg="CreateContainer within sandbox \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508\"" Aug 12 23:59:49.714625 containerd[1497]: time="2025-08-12T23:59:49.713587665Z" level=info msg="StartContainer for \"007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508\"" Aug 12 23:59:49.789881 systemd[1]: Started cri-containerd-007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508.scope - libcontainer container 007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508. Aug 12 23:59:49.815759 systemd[1]: cri-containerd-007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508.scope: Deactivated successfully.
Aug 12 23:59:49.818808 containerd[1497]: time="2025-08-12T23:59:49.818761855Z" level=info msg="StartContainer for \"007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508\" returns successfully" Aug 12 23:59:49.847063 containerd[1497]: time="2025-08-12T23:59:49.846966762Z" level=info msg="shim disconnected" id=007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508 namespace=k8s.io Aug 12 23:59:49.847063 containerd[1497]: time="2025-08-12T23:59:49.847038357Z" level=warning msg="cleaning up after shim disconnected" id=007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508 namespace=k8s.io Aug 12 23:59:49.847063 containerd[1497]: time="2025-08-12T23:59:49.847050270Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:59:50.688315 kubelet[2609]: E0812 23:59:50.688117 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:50.690318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508-rootfs.mount: Deactivated successfully. Aug 12 23:59:50.690928 containerd[1497]: time="2025-08-12T23:59:50.690779855Z" level=info msg="CreateContainer within sandbox \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 12 23:59:50.892922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3366411834.mount: Deactivated successfully. 
Aug 12 23:59:50.942443 containerd[1497]: time="2025-08-12T23:59:50.942267381Z" level=info msg="CreateContainer within sandbox \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\"" Aug 12 23:59:50.944683 containerd[1497]: time="2025-08-12T23:59:50.943256954Z" level=info msg="StartContainer for \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\"" Aug 12 23:59:50.982824 systemd[1]: Started cri-containerd-e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2.scope - libcontainer container e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2. Aug 12 23:59:51.119704 containerd[1497]: time="2025-08-12T23:59:51.119645459Z" level=info msg="StartContainer for \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\" returns successfully" Aug 12 23:59:51.287688 kubelet[2609]: I0812 23:59:51.287368 2609 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 12 23:59:51.328169 systemd[1]: Created slice kubepods-burstable-podbba775b3_0047_4829_9775_2701071ffb52.slice - libcontainer container kubepods-burstable-podbba775b3_0047_4829_9775_2701071ffb52.slice. Aug 12 23:59:51.337363 systemd[1]: Created slice kubepods-burstable-pod9c0ad8ee_1137_403e_94fc_0042ae3fcb6d.slice - libcontainer container kubepods-burstable-pod9c0ad8ee_1137_403e_94fc_0042ae3fcb6d.slice. 
Aug 12 23:59:51.373821 kubelet[2609]: I0812 23:59:51.373763 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bba775b3-0047-4829-9775-2701071ffb52-config-volume\") pod \"coredns-668d6bf9bc-m5scs\" (UID: \"bba775b3-0047-4829-9775-2701071ffb52\") " pod="kube-system/coredns-668d6bf9bc-m5scs" Aug 12 23:59:51.373821 kubelet[2609]: I0812 23:59:51.373819 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c0ad8ee-1137-403e-94fc-0042ae3fcb6d-config-volume\") pod \"coredns-668d6bf9bc-btjtt\" (UID: \"9c0ad8ee-1137-403e-94fc-0042ae3fcb6d\") " pod="kube-system/coredns-668d6bf9bc-btjtt" Aug 12 23:59:51.374045 kubelet[2609]: I0812 23:59:51.373868 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v28bs\" (UniqueName: \"kubernetes.io/projected/9c0ad8ee-1137-403e-94fc-0042ae3fcb6d-kube-api-access-v28bs\") pod \"coredns-668d6bf9bc-btjtt\" (UID: \"9c0ad8ee-1137-403e-94fc-0042ae3fcb6d\") " pod="kube-system/coredns-668d6bf9bc-btjtt" Aug 12 23:59:51.374045 kubelet[2609]: I0812 23:59:51.373910 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dr2\" (UniqueName: \"kubernetes.io/projected/bba775b3-0047-4829-9775-2701071ffb52-kube-api-access-z4dr2\") pod \"coredns-668d6bf9bc-m5scs\" (UID: \"bba775b3-0047-4829-9775-2701071ffb52\") " pod="kube-system/coredns-668d6bf9bc-m5scs" Aug 12 23:59:51.635942 kubelet[2609]: E0812 23:59:51.635795 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:51.636764 containerd[1497]: time="2025-08-12T23:59:51.636725654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m5scs,Uid:bba775b3-0047-4829-9775-2701071ffb52,Namespace:kube-system,Attempt:0,}"
Aug 12 23:59:51.641824 kubelet[2609]: E0812 23:59:51.641787 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:51.642250 containerd[1497]: time="2025-08-12T23:59:51.642221780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-btjtt,Uid:9c0ad8ee-1137-403e-94fc-0042ae3fcb6d,Namespace:kube-system,Attempt:0,}" Aug 12 23:59:51.695728 kubelet[2609]: E0812 23:59:51.694296 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:52.226668 systemd[1]: Started sshd@8-10.0.0.83:22-10.0.0.1:42580.service - OpenSSH per-connection server daemon (10.0.0.1:42580). Aug 12 23:59:52.269202 sshd[3477]: Accepted publickey for core from 10.0.0.1 port 42580 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:59:52.271302 sshd-session[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:59:52.276555 systemd-logind[1480]: New session 9 of user core. Aug 12 23:59:52.285795 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 12 23:59:52.414668 sshd[3479]: Connection closed by 10.0.0.1 port 42580 Aug 12 23:59:52.415183 sshd-session[3477]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:52.420662 systemd[1]: sshd@8-10.0.0.83:22-10.0.0.1:42580.service: Deactivated successfully. Aug 12 23:59:52.423163 systemd[1]: session-9.scope: Deactivated successfully. Aug 12 23:59:52.423911 systemd-logind[1480]: Session 9 logged out. Waiting for processes to exit. Aug 12 23:59:52.425074 systemd-logind[1480]: Removed session 9.
Aug 12 23:59:52.695959 kubelet[2609]: E0812 23:59:52.695920 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:53.254053 systemd-networkd[1438]: cilium_host: Link UP Aug 12 23:59:53.254349 systemd-networkd[1438]: cilium_net: Link UP Aug 12 23:59:53.254658 systemd-networkd[1438]: cilium_net: Gained carrier Aug 12 23:59:53.254944 systemd-networkd[1438]: cilium_host: Gained carrier Aug 12 23:59:53.336805 systemd-networkd[1438]: cilium_net: Gained IPv6LL Aug 12 23:59:53.389479 systemd-networkd[1438]: cilium_vxlan: Link UP Aug 12 23:59:53.389491 systemd-networkd[1438]: cilium_vxlan: Gained carrier Aug 12 23:59:53.640678 kernel: NET: Registered PF_ALG protocol family Aug 12 23:59:53.697308 kubelet[2609]: E0812 23:59:53.697251 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:54.023938 systemd-networkd[1438]: cilium_host: Gained IPv6LL Aug 12 23:59:54.378217 systemd-networkd[1438]: lxc_health: Link UP Aug 12 23:59:54.379476 systemd-networkd[1438]: lxc_health: Gained carrier Aug 12 23:59:54.777679 kernel: eth0: renamed from tmp6849a Aug 12 23:59:54.778725 kubelet[2609]: E0812 23:59:54.778693 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:54.797207 kernel: eth0: renamed from tmp82c80 Aug 12 23:59:54.795900 systemd-networkd[1438]: lxc723a9b0c3f93: Link UP Aug 12 23:59:54.796179 systemd-networkd[1438]: lxc0f502b7f8a95: Link UP Aug 12 23:59:54.804233 systemd-networkd[1438]: lxc723a9b0c3f93: Gained carrier Aug 12 23:59:54.805082 systemd-networkd[1438]: lxc0f502b7f8a95: Gained carrier Aug 12 23:59:54.910462 kubelet[2609]: I0812 23:59:54.910389 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bxnxf" podStartSLOduration=11.190344957 podStartE2EDuration="29.910365608s" podCreationTimestamp="2025-08-12 23:59:25 +0000 UTC" firstStartedPulling="2025-08-12 23:59:26.9172387 +0000 UTC m=+6.628517177" lastFinishedPulling="2025-08-12 23:59:45.637259361 +0000 UTC m=+25.348537828" observedRunningTime="2025-08-12 23:59:51.717704796 +0000 UTC m=+31.428983283" watchObservedRunningTime="2025-08-12 23:59:54.910365608 +0000 UTC m=+34.621644075"
Aug 12 23:59:55.175805 systemd-networkd[1438]: cilium_vxlan: Gained IPv6LL Aug 12 23:59:55.687846 systemd-networkd[1438]: lxc_health: Gained IPv6LL Aug 12 23:59:55.699875 kubelet[2609]: E0812 23:59:55.699848 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:56.519856 systemd-networkd[1438]: lxc723a9b0c3f93: Gained IPv6LL Aug 12 23:59:56.703668 kubelet[2609]: E0812 23:59:56.702153 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:56.775906 systemd-networkd[1438]: lxc0f502b7f8a95: Gained IPv6LL Aug 12 23:59:57.434791 systemd[1]: Started sshd@9-10.0.0.83:22-10.0.0.1:42594.service - OpenSSH per-connection server daemon (10.0.0.1:42594). Aug 12 23:59:57.476463 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 42594 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 12 23:59:57.478664 sshd-session[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:59:57.484182 systemd-logind[1480]: New session 10 of user core. Aug 12 23:59:57.491895 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 12 23:59:57.617264 sshd[3875]: Connection closed by 10.0.0.1 port 42594 Aug 12 23:59:57.617719 sshd-session[3873]: pam_unix(sshd:session): session closed for user core Aug 12 23:59:57.622441 systemd[1]: sshd@9-10.0.0.83:22-10.0.0.1:42594.service: Deactivated successfully. Aug 12 23:59:57.624974 systemd[1]: session-10.scope: Deactivated successfully. Aug 12 23:59:57.625854 systemd-logind[1480]: Session 10 logged out. Waiting for processes to exit. Aug 12 23:59:57.626991 systemd-logind[1480]: Removed session 10. Aug 12 23:59:58.621948 containerd[1497]: time="2025-08-12T23:59:58.621787379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:59:58.621948 containerd[1497]: time="2025-08-12T23:59:58.621888780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:59:58.621948 containerd[1497]: time="2025-08-12T23:59:58.621904251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:58.622560 containerd[1497]: time="2025-08-12T23:59:58.622006834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:58.647663 containerd[1497]: time="2025-08-12T23:59:58.647547746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:59:58.647663 containerd[1497]: time="2025-08-12T23:59:58.647613857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:59:58.647663 containerd[1497]: time="2025-08-12T23:59:58.647624828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:59:58.647906 containerd[1497]: time="2025-08-12T23:59:58.647789785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:59:58.647952 systemd[1]: Started cri-containerd-6849a324ee595f0d750df7340425c80da71d6b35482165b074cc01604f28ad27.scope - libcontainer container 6849a324ee595f0d750df7340425c80da71d6b35482165b074cc01604f28ad27. Aug 12 23:59:58.668000 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:59:58.673831 systemd[1]: Started cri-containerd-82c8022dd096a75b2371834e024d1c3b9d7f68402b80e3c8069e4ed04a06ada0.scope - libcontainer container 82c8022dd096a75b2371834e024d1c3b9d7f68402b80e3c8069e4ed04a06ada0. Aug 12 23:59:58.690728 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:59:58.694687 containerd[1497]: time="2025-08-12T23:59:58.694565445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-btjtt,Uid:9c0ad8ee-1137-403e-94fc-0042ae3fcb6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6849a324ee595f0d750df7340425c80da71d6b35482165b074cc01604f28ad27\"" Aug 12 23:59:58.695684 kubelet[2609]: E0812 23:59:58.695620 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:58.697714 containerd[1497]: time="2025-08-12T23:59:58.697665040Z" level=info msg="CreateContainer within sandbox \"6849a324ee595f0d750df7340425c80da71d6b35482165b074cc01604f28ad27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:59:58.721175 containerd[1497]: time="2025-08-12T23:59:58.721081346Z" level=info msg="CreateContainer within sandbox \"6849a324ee595f0d750df7340425c80da71d6b35482165b074cc01604f28ad27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"595d7fc7c0efbe7f610061074ef4ba184928d324be859ca942c34d93a9ba7f73\""
Aug 12 23:59:58.721906 containerd[1497]: time="2025-08-12T23:59:58.721846351Z" level=info msg="StartContainer for \"595d7fc7c0efbe7f610061074ef4ba184928d324be859ca942c34d93a9ba7f73\"" Aug 12 23:59:58.723245 containerd[1497]: time="2025-08-12T23:59:58.723052167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m5scs,Uid:bba775b3-0047-4829-9775-2701071ffb52,Namespace:kube-system,Attempt:0,} returns sandbox id \"82c8022dd096a75b2371834e024d1c3b9d7f68402b80e3c8069e4ed04a06ada0\"" Aug 12 23:59:58.723744 kubelet[2609]: E0812 23:59:58.723628 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:58.726700 containerd[1497]: time="2025-08-12T23:59:58.726456134Z" level=info msg="CreateContainer within sandbox \"82c8022dd096a75b2371834e024d1c3b9d7f68402b80e3c8069e4ed04a06ada0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:59:58.762228 systemd[1]: Started cri-containerd-595d7fc7c0efbe7f610061074ef4ba184928d324be859ca942c34d93a9ba7f73.scope - libcontainer container 595d7fc7c0efbe7f610061074ef4ba184928d324be859ca942c34d93a9ba7f73.
Aug 12 23:59:58.767127 containerd[1497]: time="2025-08-12T23:59:58.767086811Z" level=info msg="CreateContainer within sandbox \"82c8022dd096a75b2371834e024d1c3b9d7f68402b80e3c8069e4ed04a06ada0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e393ce82dfd40021093202ed23a198b25a72f62ac9f5c9fd03e451759535d7ae\"" Aug 12 23:59:58.767127 containerd[1497]: time="2025-08-12T23:59:58.767865263Z" level=info msg="StartContainer for \"e393ce82dfd40021093202ed23a198b25a72f62ac9f5c9fd03e451759535d7ae\"" Aug 12 23:59:58.802948 systemd[1]: Started cri-containerd-e393ce82dfd40021093202ed23a198b25a72f62ac9f5c9fd03e451759535d7ae.scope - libcontainer container e393ce82dfd40021093202ed23a198b25a72f62ac9f5c9fd03e451759535d7ae. Aug 12 23:59:58.808057 containerd[1497]: time="2025-08-12T23:59:58.808012043Z" level=info msg="StartContainer for \"595d7fc7c0efbe7f610061074ef4ba184928d324be859ca942c34d93a9ba7f73\" returns successfully" Aug 12 23:59:58.921857 containerd[1497]: time="2025-08-12T23:59:58.921774525Z" level=info msg="StartContainer for \"e393ce82dfd40021093202ed23a198b25a72f62ac9f5c9fd03e451759535d7ae\" returns successfully" Aug 12 23:59:59.627905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2145810794.mount: Deactivated successfully. 
Aug 12 23:59:59.710124 kubelet[2609]: E0812 23:59:59.710032 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:59:59.712858 kubelet[2609]: E0812 23:59:59.712817 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:00:00.034542 kubelet[2609]: I0813 00:00:00.033800 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-btjtt" podStartSLOduration=34.033774479 podStartE2EDuration="34.033774479s" podCreationTimestamp="2025-08-12 23:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:00:00.030793415 +0000 UTC m=+39.742071882" watchObservedRunningTime="2025-08-13 00:00:00.033774479 +0000 UTC m=+39.745052956" Aug 13 00:00:00.034542 kubelet[2609]: I0813 00:00:00.033908 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m5scs" podStartSLOduration=34.033902772 podStartE2EDuration="34.033902772s" podCreationTimestamp="2025-08-12 23:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:59:59.934024185 +0000 UTC m=+39.645302652" watchObservedRunningTime="2025-08-13 00:00:00.033902772 +0000 UTC m=+39.745181239" Aug 13 00:00:00.714760 kubelet[2609]: E0813 00:00:00.714721 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:00:00.715296 kubelet[2609]: E0813 00:00:00.714800 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:00:01.717658 kubelet[2609]: E0813 00:00:01.717595 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:00:01.718200 kubelet[2609]: E0813 00:00:01.717907 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:00:02.646207 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Aug 13 00:00:02.648201 systemd[1]: Started sshd@10-10.0.0.83:22-10.0.0.1:37184.service - OpenSSH per-connection server daemon (10.0.0.1:37184). Aug 13 00:00:02.825495 systemd[1]: logrotate.service: Deactivated successfully. Aug 13 00:00:02.846315 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 37184 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA Aug 13 00:00:02.848548 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:00:02.853572 systemd-logind[1480]: New session 11 of user core. Aug 13 00:00:02.860777 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:00:03.009281 sshd[4068]: Connection closed by 10.0.0.1 port 37184 Aug 13 00:00:03.009769 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Aug 13 00:00:03.014590 systemd[1]: sshd@10-10.0.0.83:22-10.0.0.1:37184.service: Deactivated successfully. Aug 13 00:00:03.017519 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:00:03.018679 systemd-logind[1480]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:00:03.020214 systemd-logind[1480]: Removed session 11. Aug 13 00:00:08.023298 systemd[1]: Started sshd@11-10.0.0.83:22-10.0.0.1:38306.service - OpenSSH per-connection server daemon (10.0.0.1:38306).
Aug 13 00:00:08.065469 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 38306 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:08.067547 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:08.072626 systemd-logind[1480]: New session 12 of user core.
Aug 13 00:00:08.081913 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 00:00:08.202139 sshd[4084]: Connection closed by 10.0.0.1 port 38306
Aug 13 00:00:08.202560 sshd-session[4082]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:08.214698 systemd[1]: sshd@11-10.0.0.83:22-10.0.0.1:38306.service: Deactivated successfully.
Aug 13 00:00:08.217199 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:00:08.219149 systemd-logind[1480]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:00:08.226137 systemd[1]: Started sshd@12-10.0.0.83:22-10.0.0.1:38308.service - OpenSSH per-connection server daemon (10.0.0.1:38308).
Aug 13 00:00:08.227926 systemd-logind[1480]: Removed session 12.
Aug 13 00:00:08.258416 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 38308 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:08.260199 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:08.265199 systemd-logind[1480]: New session 13 of user core.
Aug 13 00:00:08.274807 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 00:00:08.472345 sshd[4100]: Connection closed by 10.0.0.1 port 38308
Aug 13 00:00:08.472979 sshd-session[4097]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:08.486172 systemd[1]: sshd@12-10.0.0.83:22-10.0.0.1:38308.service: Deactivated successfully.
Aug 13 00:00:08.488952 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:00:08.490745 systemd-logind[1480]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:00:08.500289 systemd[1]: Started sshd@13-10.0.0.83:22-10.0.0.1:38314.service - OpenSSH per-connection server daemon (10.0.0.1:38314).
Aug 13 00:00:08.501792 systemd-logind[1480]: Removed session 13.
Aug 13 00:00:08.537598 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 38314 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:08.538358 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:08.543133 systemd-logind[1480]: New session 14 of user core.
Aug 13 00:00:08.553788 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 00:00:08.880746 sshd[4114]: Connection closed by 10.0.0.1 port 38314
Aug 13 00:00:08.881048 sshd-session[4111]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:08.886101 systemd[1]: sshd@13-10.0.0.83:22-10.0.0.1:38314.service: Deactivated successfully.
Aug 13 00:00:08.888589 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:00:08.889321 systemd-logind[1480]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:00:08.890613 systemd-logind[1480]: Removed session 14.
Aug 13 00:00:13.930912 systemd[1]: Started sshd@14-10.0.0.83:22-10.0.0.1:38330.service - OpenSSH per-connection server daemon (10.0.0.1:38330).
Aug 13 00:00:13.999511 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 38330 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:14.001841 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:14.016684 systemd-logind[1480]: New session 15 of user core.
Aug 13 00:00:14.032841 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:00:14.461261 sshd[4129]: Connection closed by 10.0.0.1 port 38330
Aug 13 00:00:14.461771 sshd-session[4127]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:14.466854 systemd[1]: sshd@14-10.0.0.83:22-10.0.0.1:38330.service: Deactivated successfully.
Aug 13 00:00:14.469204 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:00:14.469984 systemd-logind[1480]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:00:14.470955 systemd-logind[1480]: Removed session 15.
Aug 13 00:00:19.476161 systemd[1]: Started sshd@15-10.0.0.83:22-10.0.0.1:39796.service - OpenSSH per-connection server daemon (10.0.0.1:39796).
Aug 13 00:00:19.512706 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 39796 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:19.514145 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:19.518339 systemd-logind[1480]: New session 16 of user core.
Aug 13 00:00:19.525776 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:00:19.638395 sshd[4145]: Connection closed by 10.0.0.1 port 39796
Aug 13 00:00:19.638718 sshd-session[4143]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:19.642603 systemd[1]: sshd@15-10.0.0.83:22-10.0.0.1:39796.service: Deactivated successfully.
Aug 13 00:00:19.645049 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:00:19.645967 systemd-logind[1480]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:00:19.646806 systemd-logind[1480]: Removed session 16.
Aug 13 00:00:24.652289 systemd[1]: Started sshd@16-10.0.0.83:22-10.0.0.1:39808.service - OpenSSH per-connection server daemon (10.0.0.1:39808).
Aug 13 00:00:24.690836 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 39808 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:24.692905 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:24.698322 systemd-logind[1480]: New session 17 of user core.
Aug 13 00:00:24.708854 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:00:24.858788 sshd[4162]: Connection closed by 10.0.0.1 port 39808
Aug 13 00:00:24.859262 sshd-session[4160]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:24.864091 systemd[1]: sshd@16-10.0.0.83:22-10.0.0.1:39808.service: Deactivated successfully.
Aug 13 00:00:24.866891 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:00:24.867673 systemd-logind[1480]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:00:24.868835 systemd-logind[1480]: Removed session 17.
Aug 13 00:00:29.872928 systemd[1]: Started sshd@17-10.0.0.83:22-10.0.0.1:44388.service - OpenSSH per-connection server daemon (10.0.0.1:44388).
Aug 13 00:00:29.909316 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 44388 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:29.910978 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:29.915295 systemd-logind[1480]: New session 18 of user core.
Aug 13 00:00:29.928835 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:00:30.053727 sshd[4180]: Connection closed by 10.0.0.1 port 44388
Aug 13 00:00:30.054163 sshd-session[4178]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:30.065898 systemd[1]: sshd@17-10.0.0.83:22-10.0.0.1:44388.service: Deactivated successfully.
Aug 13 00:00:30.068136 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:00:30.069762 systemd-logind[1480]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:00:30.080089 systemd[1]: Started sshd@18-10.0.0.83:22-10.0.0.1:44396.service - OpenSSH per-connection server daemon (10.0.0.1:44396).
Aug 13 00:00:30.081313 systemd-logind[1480]: Removed session 18.
Aug 13 00:00:30.112220 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 44396 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:30.113808 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:30.118395 systemd-logind[1480]: New session 19 of user core.
Aug 13 00:00:30.132827 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:00:30.781430 sshd[4195]: Connection closed by 10.0.0.1 port 44396
Aug 13 00:00:30.782054 sshd-session[4192]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:30.791022 systemd[1]: sshd@18-10.0.0.83:22-10.0.0.1:44396.service: Deactivated successfully.
Aug 13 00:00:30.793180 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:00:30.795034 systemd-logind[1480]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:00:30.802924 systemd[1]: Started sshd@19-10.0.0.83:22-10.0.0.1:44412.service - OpenSSH per-connection server daemon (10.0.0.1:44412).
Aug 13 00:00:30.803917 systemd-logind[1480]: Removed session 19.
Aug 13 00:00:30.840769 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 44412 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:30.842430 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:30.846894 systemd-logind[1480]: New session 20 of user core.
Aug 13 00:00:30.859767 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:00:31.381657 kubelet[2609]: E0813 00:00:31.381584 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:00:31.513192 sshd[4209]: Connection closed by 10.0.0.1 port 44412
Aug 13 00:00:31.513843 sshd-session[4206]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:31.530352 systemd[1]: sshd@19-10.0.0.83:22-10.0.0.1:44412.service: Deactivated successfully.
Aug 13 00:00:31.534121 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:00:31.538158 systemd-logind[1480]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:00:31.547183 systemd[1]: Started sshd@20-10.0.0.83:22-10.0.0.1:44414.service - OpenSSH per-connection server daemon (10.0.0.1:44414).
Aug 13 00:00:31.548387 systemd-logind[1480]: Removed session 20.
Aug 13 00:00:31.580447 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 44414 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:31.582205 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:31.586901 systemd-logind[1480]: New session 21 of user core.
Aug 13 00:00:31.592780 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:00:31.821979 sshd[4230]: Connection closed by 10.0.0.1 port 44414
Aug 13 00:00:31.822928 sshd-session[4227]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:31.833431 systemd[1]: sshd@20-10.0.0.83:22-10.0.0.1:44414.service: Deactivated successfully.
Aug 13 00:00:31.836557 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:00:31.838385 systemd-logind[1480]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:00:31.846049 systemd[1]: Started sshd@21-10.0.0.83:22-10.0.0.1:44424.service - OpenSSH per-connection server daemon (10.0.0.1:44424).
Aug 13 00:00:31.847480 systemd-logind[1480]: Removed session 21.
Aug 13 00:00:31.880950 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 44424 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:31.882516 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:31.887679 systemd-logind[1480]: New session 22 of user core.
Aug 13 00:00:31.898805 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:00:32.013191 sshd[4244]: Connection closed by 10.0.0.1 port 44424
Aug 13 00:00:32.013601 sshd-session[4241]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:32.017915 systemd[1]: sshd@21-10.0.0.83:22-10.0.0.1:44424.service: Deactivated successfully.
Aug 13 00:00:32.020399 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:00:32.021140 systemd-logind[1480]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:00:32.022138 systemd-logind[1480]: Removed session 22.
Aug 13 00:00:37.032346 systemd[1]: Started sshd@22-10.0.0.83:22-10.0.0.1:44434.service - OpenSSH per-connection server daemon (10.0.0.1:44434).
Aug 13 00:00:37.069613 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 44434 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:37.071424 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:37.076541 systemd-logind[1480]: New session 23 of user core.
Aug 13 00:00:37.085825 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 00:00:37.373663 sshd[4260]: Connection closed by 10.0.0.1 port 44434
Aug 13 00:00:37.374007 sshd-session[4258]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:37.379103 systemd[1]: sshd@22-10.0.0.83:22-10.0.0.1:44434.service: Deactivated successfully.
Aug 13 00:00:37.382617 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:00:37.383529 systemd-logind[1480]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:00:37.384654 systemd-logind[1480]: Removed session 23.
Aug 13 00:00:38.382305 kubelet[2609]: E0813 00:00:38.382242 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:00:42.391361 systemd[1]: Started sshd@23-10.0.0.83:22-10.0.0.1:41918.service - OpenSSH per-connection server daemon (10.0.0.1:41918).
Aug 13 00:00:42.435546 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 41918 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:42.437743 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:42.442621 systemd-logind[1480]: New session 24 of user core.
Aug 13 00:00:42.452802 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:00:42.567092 sshd[4277]: Connection closed by 10.0.0.1 port 41918
Aug 13 00:00:42.567474 sshd-session[4275]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:42.572614 systemd[1]: sshd@23-10.0.0.83:22-10.0.0.1:41918.service: Deactivated successfully.
Aug 13 00:00:42.575415 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:00:42.576225 systemd-logind[1480]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:00:42.577353 systemd-logind[1480]: Removed session 24.
Aug 13 00:00:47.382310 kubelet[2609]: E0813 00:00:47.382131 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:00:47.582559 systemd[1]: Started sshd@24-10.0.0.83:22-10.0.0.1:41924.service - OpenSSH per-connection server daemon (10.0.0.1:41924).
Aug 13 00:00:47.620190 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 41924 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:47.621846 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:47.626505 systemd-logind[1480]: New session 25 of user core.
Aug 13 00:00:47.644952 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 00:00:47.808680 sshd[4292]: Connection closed by 10.0.0.1 port 41924
Aug 13 00:00:47.809116 sshd-session[4290]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:47.813250 systemd[1]: sshd@24-10.0.0.83:22-10.0.0.1:41924.service: Deactivated successfully.
Aug 13 00:00:47.815537 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:00:47.816406 systemd-logind[1480]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:00:47.817332 systemd-logind[1480]: Removed session 25.
Aug 13 00:00:52.825299 systemd[1]: Started sshd@25-10.0.0.83:22-10.0.0.1:58054.service - OpenSSH per-connection server daemon (10.0.0.1:58054).
Aug 13 00:00:52.862501 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 58054 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:52.864036 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:52.868538 systemd-logind[1480]: New session 26 of user core.
Aug 13 00:00:52.875769 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 00:00:53.027659 sshd[4308]: Connection closed by 10.0.0.1 port 58054
Aug 13 00:00:53.028116 sshd-session[4306]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:53.041893 systemd[1]: sshd@25-10.0.0.83:22-10.0.0.1:58054.service: Deactivated successfully.
Aug 13 00:00:53.044159 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 00:00:53.045762 systemd-logind[1480]: Session 26 logged out. Waiting for processes to exit.
Aug 13 00:00:53.050883 systemd[1]: Started sshd@26-10.0.0.83:22-10.0.0.1:58060.service - OpenSSH per-connection server daemon (10.0.0.1:58060).
Aug 13 00:00:53.051998 systemd-logind[1480]: Removed session 26.
Aug 13 00:00:53.084490 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 58060 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:53.086021 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:53.090837 systemd-logind[1480]: New session 27 of user core.
Aug 13 00:00:53.096824 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 00:00:54.631367 containerd[1497]: time="2025-08-13T00:00:54.631312685Z" level=info msg="StopContainer for \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\" with timeout 30 (s)"
Aug 13 00:00:54.638341 containerd[1497]: time="2025-08-13T00:00:54.638303041Z" level=info msg="Stop container \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\" with signal terminated"
Aug 13 00:00:54.663824 systemd[1]: cri-containerd-3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a.scope: Deactivated successfully.
Aug 13 00:00:54.676077 containerd[1497]: time="2025-08-13T00:00:54.676019111Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:00:54.676412 containerd[1497]: time="2025-08-13T00:00:54.676379117Z" level=info msg="StopContainer for \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\" with timeout 2 (s)"
Aug 13 00:00:54.676626 containerd[1497]: time="2025-08-13T00:00:54.676605328Z" level=info msg="Stop container \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\" with signal terminated"
Aug 13 00:00:54.683608 systemd-networkd[1438]: lxc_health: Link DOWN
Aug 13 00:00:54.683621 systemd-networkd[1438]: lxc_health: Lost carrier
Aug 13 00:00:54.692516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a-rootfs.mount: Deactivated successfully.
Aug 13 00:00:54.706185 systemd[1]: cri-containerd-e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2.scope: Deactivated successfully.
Aug 13 00:00:54.706608 systemd[1]: cri-containerd-e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2.scope: Consumed 7.282s CPU time, 125M memory peak, 196K read from disk, 13.3M written to disk.
Aug 13 00:00:54.707502 containerd[1497]: time="2025-08-13T00:00:54.707418651Z" level=info msg="shim disconnected" id=3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a namespace=k8s.io
Aug 13 00:00:54.707502 containerd[1497]: time="2025-08-13T00:00:54.707491399Z" level=warning msg="cleaning up after shim disconnected" id=3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a namespace=k8s.io
Aug 13 00:00:54.707502 containerd[1497]: time="2025-08-13T00:00:54.707502089Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:00:54.729074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2-rootfs.mount: Deactivated successfully.
Aug 13 00:00:54.729560 containerd[1497]: time="2025-08-13T00:00:54.729524922Z" level=info msg="StopContainer for \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\" returns successfully"
Aug 13 00:00:54.731754 containerd[1497]: time="2025-08-13T00:00:54.731697356Z" level=info msg="shim disconnected" id=e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2 namespace=k8s.io
Aug 13 00:00:54.731807 containerd[1497]: time="2025-08-13T00:00:54.731750598Z" level=warning msg="cleaning up after shim disconnected" id=e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2 namespace=k8s.io
Aug 13 00:00:54.731807 containerd[1497]: time="2025-08-13T00:00:54.731774323Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:00:54.734231 containerd[1497]: time="2025-08-13T00:00:54.734188848Z" level=info msg="StopPodSandbox for \"9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82\""
Aug 13 00:00:54.748766 containerd[1497]: time="2025-08-13T00:00:54.748712696Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:00:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 00:00:54.752739 containerd[1497]: time="2025-08-13T00:00:54.752676948Z" level=info msg="Container to stop \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:00:54.755348 containerd[1497]: time="2025-08-13T00:00:54.755316874Z" level=info msg="StopContainer for \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\" returns successfully"
Aug 13 00:00:54.755362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82-shm.mount: Deactivated successfully.
Aug 13 00:00:54.755955 containerd[1497]: time="2025-08-13T00:00:54.755898845Z" level=info msg="StopPodSandbox for \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\""
Aug 13 00:00:54.756050 containerd[1497]: time="2025-08-13T00:00:54.755927730Z" level=info msg="Container to stop \"007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:00:54.756050 containerd[1497]: time="2025-08-13T00:00:54.755964379Z" level=info msg="Container to stop \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:00:54.756050 containerd[1497]: time="2025-08-13T00:00:54.755972224Z" level=info msg="Container to stop \"3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:00:54.756050 containerd[1497]: time="2025-08-13T00:00:54.755979808Z" level=info msg="Container to stop \"a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:00:54.756050 containerd[1497]: time="2025-08-13T00:00:54.755987574Z" level=info msg="Container to stop \"b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:00:54.760659 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c-shm.mount: Deactivated successfully.
Aug 13 00:00:54.761444 systemd[1]: cri-containerd-9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82.scope: Deactivated successfully.
Aug 13 00:00:54.765904 systemd[1]: cri-containerd-887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c.scope: Deactivated successfully.
Aug 13 00:00:54.792478 containerd[1497]: time="2025-08-13T00:00:54.792156081Z" level=info msg="shim disconnected" id=9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82 namespace=k8s.io
Aug 13 00:00:54.792907 containerd[1497]: time="2025-08-13T00:00:54.792723093Z" level=warning msg="cleaning up after shim disconnected" id=9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82 namespace=k8s.io
Aug 13 00:00:54.792907 containerd[1497]: time="2025-08-13T00:00:54.792743652Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:00:54.799943 containerd[1497]: time="2025-08-13T00:00:54.799858955Z" level=info msg="shim disconnected" id=887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c namespace=k8s.io
Aug 13 00:00:54.799943 containerd[1497]: time="2025-08-13T00:00:54.799923538Z" level=warning msg="cleaning up after shim disconnected" id=887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c namespace=k8s.io
Aug 13 00:00:54.799943 containerd[1497]: time="2025-08-13T00:00:54.799931675Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:00:54.808729 containerd[1497]: time="2025-08-13T00:00:54.808681496Z" level=info msg="TearDown network for sandbox \"9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82\" successfully"
Aug 13 00:00:54.808729 containerd[1497]: time="2025-08-13T00:00:54.808725250Z" level=info msg="StopPodSandbox for \"9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82\" returns successfully"
Aug 13 00:00:54.822253 containerd[1497]: time="2025-08-13T00:00:54.822198602Z" level=info msg="TearDown network for sandbox \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\" successfully"
Aug 13 00:00:54.822253 containerd[1497]: time="2025-08-13T00:00:54.822235132Z" level=info msg="StopPodSandbox for \"887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c\" returns successfully"
Aug 13 00:00:54.832427 kubelet[2609]: I0813 00:00:54.832379 2609 scope.go:117] "RemoveContainer" containerID="e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2"
Aug 13 00:00:54.839647 containerd[1497]: time="2025-08-13T00:00:54.839599257Z" level=info msg="RemoveContainer for \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\""
Aug 13 00:00:54.847689 containerd[1497]: time="2025-08-13T00:00:54.847608085Z" level=info msg="RemoveContainer for \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\" returns successfully"
Aug 13 00:00:54.847937 kubelet[2609]: I0813 00:00:54.847901 2609 scope.go:117] "RemoveContainer" containerID="007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508"
Aug 13 00:00:54.848882 containerd[1497]: time="2025-08-13T00:00:54.848853921Z" level=info msg="RemoveContainer for \"007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508\""
Aug 13 00:00:54.852796 containerd[1497]: time="2025-08-13T00:00:54.852763570Z" level=info msg="RemoveContainer for \"007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508\" returns successfully"
Aug 13 00:00:54.852966 kubelet[2609]: I0813 00:00:54.852925 2609 scope.go:117] "RemoveContainer" containerID="b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1"
Aug 13 00:00:54.853765 containerd[1497]: time="2025-08-13T00:00:54.853732038Z" level=info msg="RemoveContainer for \"b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1\""
Aug 13 00:00:54.858686 containerd[1497]: time="2025-08-13T00:00:54.858650891Z" level=info msg="RemoveContainer for \"b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1\" returns successfully"
Aug 13 00:00:54.858799 kubelet[2609]: I0813 00:00:54.858771 2609 scope.go:117] "RemoveContainer" containerID="a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27"
Aug 13 00:00:54.859527 containerd[1497]: time="2025-08-13T00:00:54.859502716Z" level=info msg="RemoveContainer for \"a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27\""
Aug 13 00:00:54.863174 containerd[1497]: time="2025-08-13T00:00:54.863134905Z" level=info msg="RemoveContainer for \"a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27\" returns successfully"
Aug 13 00:00:54.863318 kubelet[2609]: I0813 00:00:54.863274 2609 scope.go:117] "RemoveContainer" containerID="3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a"
Aug 13 00:00:54.864364 containerd[1497]: time="2025-08-13T00:00:54.864325697Z" level=info msg="RemoveContainer for \"3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a\""
Aug 13 00:00:54.867589 containerd[1497]: time="2025-08-13T00:00:54.867554767Z" level=info msg="RemoveContainer for \"3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a\" returns successfully"
Aug 13 00:00:54.867726 kubelet[2609]: I0813 00:00:54.867700 2609 scope.go:117] "RemoveContainer" containerID="e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2"
Aug 13 00:00:54.867904 containerd[1497]: time="2025-08-13T00:00:54.867866641Z" level=error msg="ContainerStatus for \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\": not found"
Aug 13 00:00:54.868031 kubelet[2609]: E0813 00:00:54.868002 2609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\": not found" containerID="e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2"
Aug 13 00:00:54.868113 kubelet[2609]: I0813 00:00:54.868031 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2"} err="failed to get container status \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2726f09e85dcc9a5f5789fd27d04c5281f3294c6b4fd0cbb5f216ead54196f2\": not found"
Aug 13 00:00:54.868113 kubelet[2609]: I0813 00:00:54.868112 2609 scope.go:117] "RemoveContainer" containerID="007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508"
Aug 13 00:00:54.868332 containerd[1497]: time="2025-08-13T00:00:54.868290119Z" level=error msg="ContainerStatus for \"007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508\": not found"
Aug 13 00:00:54.868439 kubelet[2609]: E0813 00:00:54.868416 2609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508\": not found" containerID="007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508"
Aug 13 00:00:54.868490 kubelet[2609]: I0813 00:00:54.868437 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508"} err="failed to get container status \"007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508\": rpc error: code = NotFound desc = an error occurred when try to find container \"007159357ad20b68171c68fdc7de3a6ae0779c326fa3bb6cc7da0f096ff40508\": not found"
Aug 13 00:00:54.868490 kubelet[2609]: I0813 00:00:54.868451 2609 scope.go:117] "RemoveContainer" containerID="b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1"
Aug 13 00:00:54.868662 containerd[1497]: time="2025-08-13T00:00:54.868598859Z" level=error msg="ContainerStatus for \"b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1\": not found"
Aug 13 00:00:54.868787 kubelet[2609]: E0813 00:00:54.868761 2609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1\": not found" containerID="b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1"
Aug 13 00:00:54.868833 kubelet[2609]: I0813 00:00:54.868792 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1"} err="failed to get container status \"b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"b918599cfb2ec51ac66c1ede8a6564db2000795eff52105df4bdbbdf885566c1\": not found"
Aug 13 00:00:54.868833 kubelet[2609]: I0813 00:00:54.868813 2609 scope.go:117] "RemoveContainer" containerID="a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27"
Aug 13 00:00:54.869021 containerd[1497]: time="2025-08-13T00:00:54.868961130Z" level=error msg="ContainerStatus for \"a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27\": not found"
Aug 13 00:00:54.869077 kubelet[2609]: E0813 00:00:54.869038 2609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27\": not found" containerID="a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27"
Aug 13 00:00:54.869077 kubelet[2609]: I0813 00:00:54.869053 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27"} err="failed to get container status \"a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9ee99a1f236b4d506ed6d10296b99e195b532db6488f4b37d7d23d17666ca27\": not found"
Aug 13 00:00:54.869077 kubelet[2609]: I0813 00:00:54.869067 2609 scope.go:117] "RemoveContainer" containerID="3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a"
Aug 13 00:00:54.869240 containerd[1497]: time="2025-08-13T00:00:54.869212139Z" level=error msg="ContainerStatus for \"3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a\": not found"
Aug 13 00:00:54.869355 kubelet[2609]: E0813 00:00:54.869317 2609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a\": not found" containerID="3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a"
Aug 13 00:00:54.869425 kubelet[2609]: I0813 00:00:54.869354 2609
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a"} err="failed to get container status \"3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3407dc2c4de03a484dfaafbaec9d8d424b0c6bab82deb698a20d29f6529be45a\": not found" Aug 13 00:00:54.869425 kubelet[2609]: I0813 00:00:54.869370 2609 scope.go:117] "RemoveContainer" containerID="3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a" Aug 13 00:00:54.870131 containerd[1497]: time="2025-08-13T00:00:54.870109651Z" level=info msg="RemoveContainer for \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\"" Aug 13 00:00:54.873207 containerd[1497]: time="2025-08-13T00:00:54.873178334Z" level=info msg="RemoveContainer for \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\" returns successfully" Aug 13 00:00:54.873323 kubelet[2609]: I0813 00:00:54.873298 2609 scope.go:117] "RemoveContainer" containerID="3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a" Aug 13 00:00:54.873445 containerd[1497]: time="2025-08-13T00:00:54.873418903Z" level=error msg="ContainerStatus for \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\": not found" Aug 13 00:00:54.873554 kubelet[2609]: E0813 00:00:54.873523 2609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\": not found" containerID="3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a" Aug 13 00:00:54.873554 kubelet[2609]: I0813 00:00:54.873547 2609 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a"} err="failed to get container status \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3be53e0c761e9b41e12a6c4aee2c3a96c03768caaa4503145476d1658bc6d28a\": not found" Aug 13 00:00:54.906852 kubelet[2609]: I0813 00:00:54.906734 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-host-proc-sys-net\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.906852 kubelet[2609]: I0813 00:00:54.906785 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c12043ea-7643-4d34-b998-1e17da5d923e-hubble-tls\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.906852 kubelet[2609]: I0813 00:00:54.906801 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-cni-path\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.906852 kubelet[2609]: I0813 00:00:54.906815 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-hostproc\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.906852 kubelet[2609]: I0813 00:00:54.906835 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-host-proc-sys-kernel\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.907060 kubelet[2609]: I0813 00:00:54.906864 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddsmv\" (UniqueName: \"kubernetes.io/projected/5a06979d-de8f-47c3-b87c-623c4a4b4952-kube-api-access-ddsmv\") pod \"5a06979d-de8f-47c3-b87c-623c4a4b4952\" (UID: \"5a06979d-de8f-47c3-b87c-623c4a4b4952\") " Aug 13 00:00:54.907060 kubelet[2609]: I0813 00:00:54.906885 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-cilium-cgroup\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.907060 kubelet[2609]: I0813 00:00:54.906884 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-cni-path" (OuterVolumeSpecName: "cni-path") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:00:54.907060 kubelet[2609]: I0813 00:00:54.906903 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-xtables-lock\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.907060 kubelet[2609]: I0813 00:00:54.906923 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvxnl\" (UniqueName: \"kubernetes.io/projected/c12043ea-7643-4d34-b998-1e17da5d923e-kube-api-access-gvxnl\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.907060 kubelet[2609]: I0813 00:00:54.906944 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-cilium-run\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.907302 kubelet[2609]: I0813 00:00:54.906963 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-bpf-maps\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.907302 kubelet[2609]: I0813 00:00:54.906981 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-lib-modules\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.907302 kubelet[2609]: I0813 00:00:54.907010 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c12043ea-7643-4d34-b998-1e17da5d923e-cilium-config-path\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.907302 kubelet[2609]: I0813 00:00:54.907036 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c12043ea-7643-4d34-b998-1e17da5d923e-clustermesh-secrets\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.907302 kubelet[2609]: I0813 00:00:54.907058 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-etc-cni-netd\") pod \"c12043ea-7643-4d34-b998-1e17da5d923e\" (UID: \"c12043ea-7643-4d34-b998-1e17da5d923e\") " Aug 13 00:00:54.907302 kubelet[2609]: I0813 00:00:54.907084 2609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a06979d-de8f-47c3-b87c-623c4a4b4952-cilium-config-path\") pod \"5a06979d-de8f-47c3-b87c-623c4a4b4952\" (UID: \"5a06979d-de8f-47c3-b87c-623c4a4b4952\") " Aug 13 00:00:54.907540 kubelet[2609]: I0813 00:00:54.907141 2609 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:54.907540 kubelet[2609]: I0813 00:00:54.907261 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:00:54.907540 kubelet[2609]: I0813 00:00:54.907304 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:00:54.907540 kubelet[2609]: I0813 00:00:54.907332 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:00:54.907540 kubelet[2609]: I0813 00:00:54.907360 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:00:54.910679 kubelet[2609]: I0813 00:00:54.910434 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:00:54.913313 kubelet[2609]: I0813 00:00:54.910777 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:00:54.913425 kubelet[2609]: I0813 00:00:54.911688 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-hostproc" (OuterVolumeSpecName: "hostproc") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:00:54.913425 kubelet[2609]: I0813 00:00:54.911723 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:00:54.913425 kubelet[2609]: I0813 00:00:54.911737 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a06979d-de8f-47c3-b87c-623c4a4b4952-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5a06979d-de8f-47c3-b87c-623c4a4b4952" (UID: "5a06979d-de8f-47c3-b87c-623c4a4b4952"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:00:54.913425 kubelet[2609]: I0813 00:00:54.911873 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:00:54.913425 kubelet[2609]: I0813 00:00:54.912701 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a06979d-de8f-47c3-b87c-623c4a4b4952-kube-api-access-ddsmv" (OuterVolumeSpecName: "kube-api-access-ddsmv") pod "5a06979d-de8f-47c3-b87c-623c4a4b4952" (UID: "5a06979d-de8f-47c3-b87c-623c4a4b4952"). InnerVolumeSpecName "kube-api-access-ddsmv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:00:54.913714 kubelet[2609]: I0813 00:00:54.913553 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12043ea-7643-4d34-b998-1e17da5d923e-kube-api-access-gvxnl" (OuterVolumeSpecName: "kube-api-access-gvxnl") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "kube-api-access-gvxnl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:00:54.913973 kubelet[2609]: I0813 00:00:54.913944 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12043ea-7643-4d34-b998-1e17da5d923e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:00:54.914925 kubelet[2609]: I0813 00:00:54.914901 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12043ea-7643-4d34-b998-1e17da5d923e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:00:54.915968 kubelet[2609]: I0813 00:00:54.915942 2609 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12043ea-7643-4d34-b998-1e17da5d923e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c12043ea-7643-4d34-b998-1e17da5d923e" (UID: "c12043ea-7643-4d34-b998-1e17da5d923e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:00:55.008354 kubelet[2609]: I0813 00:00:55.008297 2609 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008354 kubelet[2609]: I0813 00:00:55.008333 2609 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008354 kubelet[2609]: I0813 00:00:55.008349 2609 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gvxnl\" (UniqueName: \"kubernetes.io/projected/c12043ea-7643-4d34-b998-1e17da5d923e-kube-api-access-gvxnl\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008354 kubelet[2609]: I0813 00:00:55.008363 2609 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-cilium-run\") on node 
\"localhost\" DevicePath \"\"" Aug 13 00:00:55.008623 kubelet[2609]: I0813 00:00:55.008374 2609 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c12043ea-7643-4d34-b998-1e17da5d923e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008623 kubelet[2609]: I0813 00:00:55.008385 2609 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c12043ea-7643-4d34-b998-1e17da5d923e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008623 kubelet[2609]: I0813 00:00:55.008395 2609 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008623 kubelet[2609]: I0813 00:00:55.008404 2609 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a06979d-de8f-47c3-b87c-623c4a4b4952-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008623 kubelet[2609]: I0813 00:00:55.008414 2609 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008623 kubelet[2609]: I0813 00:00:55.008424 2609 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008623 kubelet[2609]: I0813 00:00:55.008433 2609 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008623 kubelet[2609]: I0813 
00:00:55.008443 2609 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c12043ea-7643-4d34-b998-1e17da5d923e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008847 kubelet[2609]: I0813 00:00:55.008452 2609 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddsmv\" (UniqueName: \"kubernetes.io/projected/5a06979d-de8f-47c3-b87c-623c4a4b4952-kube-api-access-ddsmv\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008847 kubelet[2609]: I0813 00:00:55.008462 2609 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.008847 kubelet[2609]: I0813 00:00:55.008474 2609 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c12043ea-7643-4d34-b998-1e17da5d923e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 00:00:55.140701 systemd[1]: Removed slice kubepods-burstable-podc12043ea_7643_4d34_b998_1e17da5d923e.slice - libcontainer container kubepods-burstable-podc12043ea_7643_4d34_b998_1e17da5d923e.slice. Aug 13 00:00:55.140841 systemd[1]: kubepods-burstable-podc12043ea_7643_4d34_b998_1e17da5d923e.slice: Consumed 7.403s CPU time, 125.3M memory peak, 220K read from disk, 15.6M written to disk. Aug 13 00:00:55.142129 systemd[1]: Removed slice kubepods-besteffort-pod5a06979d_de8f_47c3_b87c_623c4a4b4952.slice - libcontainer container kubepods-besteffort-pod5a06979d_de8f_47c3_b87c_623c4a4b4952.slice. 
Aug 13 00:00:55.439570 kubelet[2609]: E0813 00:00:55.439513 2609 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:00:55.651044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9def5a663d51fe8ce874655b097b4ab84a20068cef464af6720eae2010556e82-rootfs.mount: Deactivated successfully.
Aug 13 00:00:55.651189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-887accf0c01d7e42b6d858748559b7d77406dd9a4a1d9ed9cf881fdc14695b3c-rootfs.mount: Deactivated successfully.
Aug 13 00:00:55.651282 systemd[1]: var-lib-kubelet-pods-c12043ea\x2d7643\x2d4d34\x2db998\x2d1e17da5d923e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgvxnl.mount: Deactivated successfully.
Aug 13 00:00:55.651368 systemd[1]: var-lib-kubelet-pods-5a06979d\x2dde8f\x2d47c3\x2db87c\x2d623c4a4b4952-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dddsmv.mount: Deactivated successfully.
Aug 13 00:00:55.651464 systemd[1]: var-lib-kubelet-pods-c12043ea\x2d7643\x2d4d34\x2db998\x2d1e17da5d923e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:00:55.651552 systemd[1]: var-lib-kubelet-pods-c12043ea\x2d7643\x2d4d34\x2db998\x2d1e17da5d923e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 13 00:00:56.383948 kubelet[2609]: I0813 00:00:56.383910 2609 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a06979d-de8f-47c3-b87c-623c4a4b4952" path="/var/lib/kubelet/pods/5a06979d-de8f-47c3-b87c-623c4a4b4952/volumes"
Aug 13 00:00:56.384534 kubelet[2609]: I0813 00:00:56.384517 2609 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c12043ea-7643-4d34-b998-1e17da5d923e" path="/var/lib/kubelet/pods/c12043ea-7643-4d34-b998-1e17da5d923e/volumes"
Aug 13 00:00:56.597343 sshd[4323]: Connection closed by 10.0.0.1 port 58060
Aug 13 00:00:56.598060 sshd-session[4320]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:56.618541 systemd[1]: sshd@26-10.0.0.83:22-10.0.0.1:58060.service: Deactivated successfully.
Aug 13 00:00:56.620944 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:00:56.621956 systemd-logind[1480]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:00:56.630923 systemd[1]: Started sshd@27-10.0.0.83:22-10.0.0.1:58072.service - OpenSSH per-connection server daemon (10.0.0.1:58072).
Aug 13 00:00:56.632169 systemd-logind[1480]: Removed session 27.
Aug 13 00:00:56.669521 sshd[4481]: Accepted publickey for core from 10.0.0.1 port 58072 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:56.671413 sshd-session[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:56.676443 systemd-logind[1480]: New session 28 of user core.
Aug 13 00:00:56.686777 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 00:00:57.230660 sshd[4484]: Connection closed by 10.0.0.1 port 58072
Aug 13 00:00:57.232929 sshd-session[4481]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:57.257809 kubelet[2609]: I0813 00:00:57.257736 2609 memory_manager.go:355] "RemoveStaleState removing state" podUID="5a06979d-de8f-47c3-b87c-623c4a4b4952" containerName="cilium-operator"
Aug 13 00:00:57.257809 kubelet[2609]: I0813 00:00:57.257781 2609 memory_manager.go:355] "RemoveStaleState removing state" podUID="c12043ea-7643-4d34-b998-1e17da5d923e" containerName="cilium-agent"
Aug 13 00:00:57.262785 systemd[1]: Started sshd@28-10.0.0.83:22-10.0.0.1:58078.service - OpenSSH per-connection server daemon (10.0.0.1:58078).
Aug 13 00:00:57.263863 systemd[1]: sshd@27-10.0.0.83:22-10.0.0.1:58072.service: Deactivated successfully.
Aug 13 00:00:57.270086 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:00:57.278299 systemd-logind[1480]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:00:57.289092 systemd-logind[1480]: Removed session 28.
Aug 13 00:00:57.305118 systemd[1]: Created slice kubepods-burstable-pod647fc3d0_9889_4758_988c_2c95cc2bc847.slice - libcontainer container kubepods-burstable-pod647fc3d0_9889_4758_988c_2c95cc2bc847.slice.
Aug 13 00:00:57.319075 kubelet[2609]: I0813 00:00:57.319031 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rff45\" (UniqueName: \"kubernetes.io/projected/647fc3d0-9889-4758-988c-2c95cc2bc847-kube-api-access-rff45\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.319669 kubelet[2609]: I0813 00:00:57.319290 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/647fc3d0-9889-4758-988c-2c95cc2bc847-lib-modules\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.319669 kubelet[2609]: I0813 00:00:57.319327 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/647fc3d0-9889-4758-988c-2c95cc2bc847-cilium-config-path\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.319669 kubelet[2609]: I0813 00:00:57.319353 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/647fc3d0-9889-4758-988c-2c95cc2bc847-host-proc-sys-kernel\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.319669 kubelet[2609]: I0813 00:00:57.319396 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/647fc3d0-9889-4758-988c-2c95cc2bc847-hostproc\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.319669 kubelet[2609]: I0813 00:00:57.319417 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/647fc3d0-9889-4758-988c-2c95cc2bc847-etc-cni-netd\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.319669 kubelet[2609]: I0813 00:00:57.319441 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/647fc3d0-9889-4758-988c-2c95cc2bc847-cilium-ipsec-secrets\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.319920 kubelet[2609]: I0813 00:00:57.319464 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/647fc3d0-9889-4758-988c-2c95cc2bc847-clustermesh-secrets\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.319920 kubelet[2609]: I0813 00:00:57.319489 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/647fc3d0-9889-4758-988c-2c95cc2bc847-cilium-run\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.319920 kubelet[2609]: I0813 00:00:57.319514 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/647fc3d0-9889-4758-988c-2c95cc2bc847-hubble-tls\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.319920 kubelet[2609]: I0813 00:00:57.319537 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/647fc3d0-9889-4758-988c-2c95cc2bc847-cilium-cgroup\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.319920 kubelet[2609]: I0813 00:00:57.319560 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/647fc3d0-9889-4758-988c-2c95cc2bc847-cni-path\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.319920 kubelet[2609]: I0813 00:00:57.319582 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/647fc3d0-9889-4758-988c-2c95cc2bc847-xtables-lock\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.320125 kubelet[2609]: I0813 00:00:57.319606 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/647fc3d0-9889-4758-988c-2c95cc2bc847-bpf-maps\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.320222 kubelet[2609]: I0813 00:00:57.320172 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/647fc3d0-9889-4758-988c-2c95cc2bc847-host-proc-sys-net\") pod \"cilium-p777b\" (UID: \"647fc3d0-9889-4758-988c-2c95cc2bc847\") " pod="kube-system/cilium-p777b"
Aug 13 00:00:57.323363 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 58078 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:57.326151 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:57.338738 systemd-logind[1480]: New session 29 of user core.
Aug 13 00:00:57.346015 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 13 00:00:57.405663 sshd[4503]: Connection closed by 10.0.0.1 port 58078
Aug 13 00:00:57.407179 sshd-session[4497]: pam_unix(sshd:session): session closed for user core
Aug 13 00:00:57.418085 systemd[1]: sshd@28-10.0.0.83:22-10.0.0.1:58078.service: Deactivated successfully.
Aug 13 00:00:57.420309 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 00:00:57.421365 systemd-logind[1480]: Session 29 logged out. Waiting for processes to exit.
Aug 13 00:00:57.433980 systemd[1]: Started sshd@29-10.0.0.83:22-10.0.0.1:58094.service - OpenSSH per-connection server daemon (10.0.0.1:58094).
Aug 13 00:00:57.445463 systemd-logind[1480]: Removed session 29.
Aug 13 00:00:57.465048 sshd[4511]: Accepted publickey for core from 10.0.0.1 port 58094 ssh2: RSA SHA256:OA5w6dTpMGcXkWYoUiefgUl6F6DtTjAPobX7lm0tBLA
Aug 13 00:00:57.466917 sshd-session[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:00:57.471558 systemd-logind[1480]: New session 30 of user core.
Aug 13 00:00:57.486808 systemd[1]: Started session-30.scope - Session 30 of User core.
Aug 13 00:00:57.609413 kubelet[2609]: E0813 00:00:57.609343 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:00:57.610092 containerd[1497]: time="2025-08-13T00:00:57.610042618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p777b,Uid:647fc3d0-9889-4758-988c-2c95cc2bc847,Namespace:kube-system,Attempt:0,}"
Aug 13 00:00:57.636255 containerd[1497]: time="2025-08-13T00:00:57.635985264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:00:57.636255 containerd[1497]: time="2025-08-13T00:00:57.636058614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:00:57.636255 containerd[1497]: time="2025-08-13T00:00:57.636069905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:00:57.636255 containerd[1497]: time="2025-08-13T00:00:57.636164355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:00:57.661794 systemd[1]: Started cri-containerd-7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86.scope - libcontainer container 7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86.
Aug 13 00:00:57.685513 containerd[1497]: time="2025-08-13T00:00:57.685453277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p777b,Uid:647fc3d0-9889-4758-988c-2c95cc2bc847,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86\""
Aug 13 00:00:57.686128 kubelet[2609]: E0813 00:00:57.686076 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:00:57.689188 containerd[1497]: time="2025-08-13T00:00:57.689143111Z" level=info msg="CreateContainer within sandbox \"7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:00:57.705970 containerd[1497]: time="2025-08-13T00:00:57.705924092Z" level=info msg="CreateContainer within sandbox \"7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id
\"4183b362c80e71f97d56c7c43dfcf160a52f844da2bbffd20a2644edcf623900\"" Aug 13 00:00:57.706481 containerd[1497]: time="2025-08-13T00:00:57.706442239Z" level=info msg="StartContainer for \"4183b362c80e71f97d56c7c43dfcf160a52f844da2bbffd20a2644edcf623900\"" Aug 13 00:00:57.731803 systemd[1]: Started cri-containerd-4183b362c80e71f97d56c7c43dfcf160a52f844da2bbffd20a2644edcf623900.scope - libcontainer container 4183b362c80e71f97d56c7c43dfcf160a52f844da2bbffd20a2644edcf623900. Aug 13 00:00:57.763018 containerd[1497]: time="2025-08-13T00:00:57.762891952Z" level=info msg="StartContainer for \"4183b362c80e71f97d56c7c43dfcf160a52f844da2bbffd20a2644edcf623900\" returns successfully" Aug 13 00:00:57.771787 systemd[1]: cri-containerd-4183b362c80e71f97d56c7c43dfcf160a52f844da2bbffd20a2644edcf623900.scope: Deactivated successfully. Aug 13 00:00:57.816492 containerd[1497]: time="2025-08-13T00:00:57.816423352Z" level=info msg="shim disconnected" id=4183b362c80e71f97d56c7c43dfcf160a52f844da2bbffd20a2644edcf623900 namespace=k8s.io Aug 13 00:00:57.816492 containerd[1497]: time="2025-08-13T00:00:57.816477765Z" level=warning msg="cleaning up after shim disconnected" id=4183b362c80e71f97d56c7c43dfcf160a52f844da2bbffd20a2644edcf623900 namespace=k8s.io Aug 13 00:00:57.816492 containerd[1497]: time="2025-08-13T00:00:57.816486582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:00:57.843143 kubelet[2609]: E0813 00:00:57.843097 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:00:57.845134 containerd[1497]: time="2025-08-13T00:00:57.844961535Z" level=info msg="CreateContainer within sandbox \"7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:00:57.859451 containerd[1497]: time="2025-08-13T00:00:57.859396574Z" level=info msg="CreateContainer 
within sandbox \"7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c32f7783893dff60ab39130d19268434d89504d911c2d56bd8627a952108c68\"" Aug 13 00:00:57.860674 containerd[1497]: time="2025-08-13T00:00:57.859910453Z" level=info msg="StartContainer for \"5c32f7783893dff60ab39130d19268434d89504d911c2d56bd8627a952108c68\"" Aug 13 00:00:57.889801 systemd[1]: Started cri-containerd-5c32f7783893dff60ab39130d19268434d89504d911c2d56bd8627a952108c68.scope - libcontainer container 5c32f7783893dff60ab39130d19268434d89504d911c2d56bd8627a952108c68. Aug 13 00:00:57.917348 containerd[1497]: time="2025-08-13T00:00:57.917288274Z" level=info msg="StartContainer for \"5c32f7783893dff60ab39130d19268434d89504d911c2d56bd8627a952108c68\" returns successfully" Aug 13 00:00:57.924241 systemd[1]: cri-containerd-5c32f7783893dff60ab39130d19268434d89504d911c2d56bd8627a952108c68.scope: Deactivated successfully. Aug 13 00:00:57.949813 containerd[1497]: time="2025-08-13T00:00:57.949727405Z" level=info msg="shim disconnected" id=5c32f7783893dff60ab39130d19268434d89504d911c2d56bd8627a952108c68 namespace=k8s.io Aug 13 00:00:57.949813 containerd[1497]: time="2025-08-13T00:00:57.949792680Z" level=warning msg="cleaning up after shim disconnected" id=5c32f7783893dff60ab39130d19268434d89504d911c2d56bd8627a952108c68 namespace=k8s.io Aug 13 00:00:57.949813 containerd[1497]: time="2025-08-13T00:00:57.949807117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:00:58.846380 kubelet[2609]: E0813 00:00:58.846343 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:00:58.848542 containerd[1497]: time="2025-08-13T00:00:58.848505516Z" level=info msg="CreateContainer within sandbox \"7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86\" for container 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:00:58.872011 containerd[1497]: time="2025-08-13T00:00:58.871949880Z" level=info msg="CreateContainer within sandbox \"7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6ca26132948ae8dacf9de2cd8a3d598449a2e9442b443c1291e541c435fc0351\"" Aug 13 00:00:58.874081 containerd[1497]: time="2025-08-13T00:00:58.872455173Z" level=info msg="StartContainer for \"6ca26132948ae8dacf9de2cd8a3d598449a2e9442b443c1291e541c435fc0351\"" Aug 13 00:00:58.910804 systemd[1]: Started cri-containerd-6ca26132948ae8dacf9de2cd8a3d598449a2e9442b443c1291e541c435fc0351.scope - libcontainer container 6ca26132948ae8dacf9de2cd8a3d598449a2e9442b443c1291e541c435fc0351. Aug 13 00:00:58.946133 containerd[1497]: time="2025-08-13T00:00:58.945513948Z" level=info msg="StartContainer for \"6ca26132948ae8dacf9de2cd8a3d598449a2e9442b443c1291e541c435fc0351\" returns successfully" Aug 13 00:00:58.945798 systemd[1]: cri-containerd-6ca26132948ae8dacf9de2cd8a3d598449a2e9442b443c1291e541c435fc0351.scope: Deactivated successfully. Aug 13 00:00:58.974565 containerd[1497]: time="2025-08-13T00:00:58.974483011Z" level=info msg="shim disconnected" id=6ca26132948ae8dacf9de2cd8a3d598449a2e9442b443c1291e541c435fc0351 namespace=k8s.io Aug 13 00:00:58.974565 containerd[1497]: time="2025-08-13T00:00:58.974549598Z" level=warning msg="cleaning up after shim disconnected" id=6ca26132948ae8dacf9de2cd8a3d598449a2e9442b443c1291e541c435fc0351 namespace=k8s.io Aug 13 00:00:58.974565 containerd[1497]: time="2025-08-13T00:00:58.974561361Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:00:59.436483 systemd[1]: run-containerd-runc-k8s.io-6ca26132948ae8dacf9de2cd8a3d598449a2e9442b443c1291e541c435fc0351-runc.YkHOt9.mount: Deactivated successfully. 
Aug 13 00:00:59.436614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ca26132948ae8dacf9de2cd8a3d598449a2e9442b443c1291e541c435fc0351-rootfs.mount: Deactivated successfully. Aug 13 00:00:59.850058 kubelet[2609]: E0813 00:00:59.849901 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:00:59.851838 containerd[1497]: time="2025-08-13T00:00:59.851776592Z" level=info msg="CreateContainer within sandbox \"7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:00:59.872914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4140194129.mount: Deactivated successfully. Aug 13 00:00:59.874540 containerd[1497]: time="2025-08-13T00:00:59.874493600Z" level=info msg="CreateContainer within sandbox \"7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"926a7283da516fe1aee376bfb096dc3f3ebb9feb8a122f418f79eb1c4137459f\"" Aug 13 00:00:59.875075 containerd[1497]: time="2025-08-13T00:00:59.875053608Z" level=info msg="StartContainer for \"926a7283da516fe1aee376bfb096dc3f3ebb9feb8a122f418f79eb1c4137459f\"" Aug 13 00:00:59.909786 systemd[1]: Started cri-containerd-926a7283da516fe1aee376bfb096dc3f3ebb9feb8a122f418f79eb1c4137459f.scope - libcontainer container 926a7283da516fe1aee376bfb096dc3f3ebb9feb8a122f418f79eb1c4137459f. Aug 13 00:00:59.937993 systemd[1]: cri-containerd-926a7283da516fe1aee376bfb096dc3f3ebb9feb8a122f418f79eb1c4137459f.scope: Deactivated successfully. 
Aug 13 00:00:59.945959 containerd[1497]: time="2025-08-13T00:00:59.945906700Z" level=info msg="StartContainer for \"926a7283da516fe1aee376bfb096dc3f3ebb9feb8a122f418f79eb1c4137459f\" returns successfully" Aug 13 00:00:59.975093 containerd[1497]: time="2025-08-13T00:00:59.975021319Z" level=info msg="shim disconnected" id=926a7283da516fe1aee376bfb096dc3f3ebb9feb8a122f418f79eb1c4137459f namespace=k8s.io Aug 13 00:00:59.975093 containerd[1497]: time="2025-08-13T00:00:59.975084720Z" level=warning msg="cleaning up after shim disconnected" id=926a7283da516fe1aee376bfb096dc3f3ebb9feb8a122f418f79eb1c4137459f namespace=k8s.io Aug 13 00:00:59.975093 containerd[1497]: time="2025-08-13T00:00:59.975093927Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:01:00.436864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-926a7283da516fe1aee376bfb096dc3f3ebb9feb8a122f418f79eb1c4137459f-rootfs.mount: Deactivated successfully. Aug 13 00:01:00.440528 kubelet[2609]: E0813 00:01:00.440486 2609 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:01:00.855502 kubelet[2609]: E0813 00:01:00.854075 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:01:00.856253 containerd[1497]: time="2025-08-13T00:01:00.856205451Z" level=info msg="CreateContainer within sandbox \"7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:01:00.882695 containerd[1497]: time="2025-08-13T00:01:00.882625725Z" level=info msg="CreateContainer within sandbox \"7ca94a9a21c4230fd69bac71de9c3c3b02f6211eae304d4ade4f7cc517d26d86\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"539ad95196a4ec9d074c2a82ef6cc778fcbff295aac4dd33da164b1f9b9069fc\"" Aug 13 00:01:00.883189 containerd[1497]: time="2025-08-13T00:01:00.883161186Z" level=info msg="StartContainer for \"539ad95196a4ec9d074c2a82ef6cc778fcbff295aac4dd33da164b1f9b9069fc\"" Aug 13 00:01:00.921949 systemd[1]: Started cri-containerd-539ad95196a4ec9d074c2a82ef6cc778fcbff295aac4dd33da164b1f9b9069fc.scope - libcontainer container 539ad95196a4ec9d074c2a82ef6cc778fcbff295aac4dd33da164b1f9b9069fc. Aug 13 00:01:00.955710 containerd[1497]: time="2025-08-13T00:01:00.955660517Z" level=info msg="StartContainer for \"539ad95196a4ec9d074c2a82ef6cc778fcbff295aac4dd33da164b1f9b9069fc\" returns successfully" Aug 13 00:01:01.381618 kubelet[2609]: E0813 00:01:01.381576 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:01:01.405693 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 00:01:01.858321 kubelet[2609]: E0813 00:01:01.858278 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:01:02.151259 kubelet[2609]: I0813 00:01:02.150700 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p777b" podStartSLOduration=5.15067673 podStartE2EDuration="5.15067673s" podCreationTimestamp="2025-08-13 00:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:01:02.149953201 +0000 UTC m=+101.861231669" watchObservedRunningTime="2025-08-13 00:01:02.15067673 +0000 UTC m=+101.861955187" Aug 13 00:01:02.973655 kubelet[2609]: I0813 00:01:02.973565 2609 setters.go:602] "Node became not ready" node="localhost" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:01:02Z","lastTransitionTime":"2025-08-13T00:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 00:01:03.611013 kubelet[2609]: E0813 00:01:03.610924 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:01:05.422758 systemd-networkd[1438]: lxc_health: Link UP Aug 13 00:01:05.470095 systemd-networkd[1438]: lxc_health: Gained carrier Aug 13 00:01:05.615427 kubelet[2609]: E0813 00:01:05.613015 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:01:05.873000 kubelet[2609]: E0813 00:01:05.872824 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:01:06.888110 kubelet[2609]: E0813 00:01:06.888011 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:01:07.497419 systemd-networkd[1438]: lxc_health: Gained IPv6LL Aug 13 00:01:13.292326 systemd[1]: run-containerd-runc-k8s.io-539ad95196a4ec9d074c2a82ef6cc778fcbff295aac4dd33da164b1f9b9069fc-runc.kFXXmZ.mount: Deactivated successfully. Aug 13 00:01:15.513559 sshd[4516]: Connection closed by 10.0.0.1 port 58094 Aug 13 00:01:15.513980 sshd-session[4511]: pam_unix(sshd:session): session closed for user core Aug 13 00:01:15.518042 systemd[1]: sshd@29-10.0.0.83:22-10.0.0.1:58094.service: Deactivated successfully. 
Aug 13 00:01:15.520285 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 00:01:15.521333 systemd-logind[1480]: Session 30 logged out. Waiting for processes to exit. Aug 13 00:01:15.522540 systemd-logind[1480]: Removed session 30. Aug 13 00:01:16.381959 kubelet[2609]: E0813 00:01:16.381887 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"