Jul 9 23:50:08.937652 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:08:48 -00 2025 Jul 9 23:50:08.937684 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c257b65f06e0ad68d969d5b3e057f031663dc29a4487d91a77595a40c4dc82d6 Jul 9 23:50:08.937701 kernel: BIOS-provided physical RAM map: Jul 9 23:50:08.937710 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 9 23:50:08.937719 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 9 23:50:08.937727 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 9 23:50:08.937738 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jul 9 23:50:08.937747 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jul 9 23:50:08.937756 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 9 23:50:08.937768 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 9 23:50:08.937777 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 9 23:50:08.937785 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 9 23:50:08.937799 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 9 23:50:08.937808 kernel: NX (Execute Disable) protection: active Jul 9 23:50:08.937819 kernel: APIC: Static calls initialized Jul 9 23:50:08.937837 kernel: SMBIOS 2.8 present. 
Jul 9 23:50:08.937846 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jul 9 23:50:08.937856 kernel: Hypervisor detected: KVM Jul 9 23:50:08.937865 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 9 23:50:08.937875 kernel: kvm-clock: using sched offset of 3982051298 cycles Jul 9 23:50:08.937885 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 9 23:50:08.937895 kernel: tsc: Detected 2794.748 MHz processor Jul 9 23:50:08.937905 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 9 23:50:08.937916 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 9 23:50:08.937926 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jul 9 23:50:08.937940 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 9 23:50:08.937950 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 9 23:50:08.937960 kernel: Using GB pages for direct mapping Jul 9 23:50:08.937970 kernel: ACPI: Early table checksum verification disabled Jul 9 23:50:08.937980 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jul 9 23:50:08.937990 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:50:08.937999 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:50:08.938009 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:50:08.938018 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jul 9 23:50:08.938032 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:50:08.938041 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:50:08.938051 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:50:08.938061 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 23:50:08.938070 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jul 9 23:50:08.938080 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jul 9 23:50:08.938096 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jul 9 23:50:08.938110 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jul 9 23:50:08.938120 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jul 9 23:50:08.938130 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jul 9 23:50:08.938140 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jul 9 23:50:08.938151 kernel: No NUMA configuration found Jul 9 23:50:08.938163 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jul 9 23:50:08.938175 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jul 9 23:50:08.938190 kernel: Zone ranges: Jul 9 23:50:08.938201 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 9 23:50:08.938211 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jul 9 23:50:08.938221 kernel: Normal empty Jul 9 23:50:08.938231 kernel: Movable zone start for each node Jul 9 23:50:08.938242 kernel: Early memory node ranges Jul 9 23:50:08.938252 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 9 23:50:08.938263 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jul 9 23:50:08.938273 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 9 23:50:08.938287 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 9 23:50:08.938302 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 9 23:50:08.938312 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jul 9 23:50:08.938323 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 9 23:50:08.938333 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 9 23:50:08.938343 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 9 23:50:08.938353 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 9 23:50:08.938364 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 9 23:50:08.938375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 9 23:50:08.938386 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 9 23:50:08.938401 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 9 23:50:08.938411 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 9 23:50:08.938422 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 9 23:50:08.938432 kernel: TSC deadline timer available Jul 9 23:50:08.938443 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 9 23:50:08.938453 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 9 23:50:08.938463 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 9 23:50:08.938477 kernel: kvm-guest: setup PV sched yield Jul 9 23:50:08.938488 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 9 23:50:08.938512 kernel: Booting paravirtualized kernel on KVM Jul 9 23:50:08.938523 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 9 23:50:08.938533 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 9 23:50:08.938543 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 Jul 9 23:50:08.938553 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 Jul 9 23:50:08.938562 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 9 23:50:08.938571 kernel: kvm-guest: PV spinlocks enabled Jul 9 23:50:08.938614 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 9 23:50:08.938623 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c257b65f06e0ad68d969d5b3e057f031663dc29a4487d91a77595a40c4dc82d6 Jul 9 23:50:08.938637 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 9 23:50:08.938645 kernel: random: crng init done Jul 9 23:50:08.938652 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 9 23:50:08.938660 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 9 23:50:08.938667 kernel: Fallback order for Node 0: 0 Jul 9 23:50:08.938675 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 9 23:50:08.938683 kernel: Policy zone: DMA32 Jul 9 23:50:08.938690 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 9 23:50:08.938701 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43488K init, 1588K bss, 138948K reserved, 0K cma-reserved) Jul 9 23:50:08.938709 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 9 23:50:08.938716 kernel: ftrace: allocating 37940 entries in 149 pages Jul 9 23:50:08.938724 kernel: ftrace: allocated 149 pages with 4 groups Jul 9 23:50:08.938731 kernel: Dynamic Preempt: voluntary Jul 9 23:50:08.938739 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 9 23:50:08.938747 kernel: rcu: RCU event tracing is enabled. Jul 9 23:50:08.938755 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 9 23:50:08.938763 kernel: Trampoline variant of Tasks RCU enabled. Jul 9 23:50:08.938774 kernel: Rude variant of Tasks RCU enabled. Jul 9 23:50:08.938781 kernel: Tracing variant of Tasks RCU enabled. Jul 9 23:50:08.938789 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 9 23:50:08.938800 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 9 23:50:08.938807 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 9 23:50:08.938815 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 9 23:50:08.938822 kernel: Console: colour VGA+ 80x25 Jul 9 23:50:08.938830 kernel: printk: console [ttyS0] enabled Jul 9 23:50:08.938837 kernel: ACPI: Core revision 20230628 Jul 9 23:50:08.938848 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 9 23:50:08.938856 kernel: APIC: Switch to symmetric I/O mode setup Jul 9 23:50:08.938863 kernel: x2apic enabled Jul 9 23:50:08.938871 kernel: APIC: Switched APIC routing to: physical x2apic Jul 9 23:50:08.938878 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 9 23:50:08.938886 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 9 23:50:08.938894 kernel: kvm-guest: setup PV IPIs Jul 9 23:50:08.938913 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 9 23:50:08.938921 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 9 23:50:08.938929 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Jul 9 23:50:08.938937 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 9 23:50:08.938944 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 9 23:50:08.938955 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 9 23:50:08.938963 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 9 23:50:08.938971 kernel: Spectre V2 : Mitigation: Retpolines Jul 9 23:50:08.938979 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 9 23:50:08.938987 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 9 23:50:08.938998 kernel: RETBleed: Mitigation: untrained return thunk Jul 9 23:50:08.939008 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 9 23:50:08.939017 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 9 23:50:08.939025 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 9 23:50:08.939033 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 9 23:50:08.939041 kernel: x86/bugs: return thunk changed Jul 9 23:50:08.939049 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 9 23:50:08.939056 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 9 23:50:08.939067 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 9 23:50:08.939075 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 9 23:50:08.939083 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 9 23:50:08.939091 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 9 23:50:08.939099 kernel: Freeing SMP alternatives memory: 32K Jul 9 23:50:08.939107 kernel: pid_max: default: 32768 minimum: 301 Jul 9 23:50:08.939115 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 9 23:50:08.939123 kernel: landlock: Up and running. Jul 9 23:50:08.939131 kernel: SELinux: Initializing. Jul 9 23:50:08.939142 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 9 23:50:08.939150 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 9 23:50:08.939158 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 9 23:50:08.939166 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 9 23:50:08.939174 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 9 23:50:08.939182 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 9 23:50:08.939190 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 9 23:50:08.939200 kernel: ... version: 0 Jul 9 23:50:08.939208 kernel: ... bit width: 48 Jul 9 23:50:08.939219 kernel: ... generic registers: 6 Jul 9 23:50:08.939227 kernel: ... value mask: 0000ffffffffffff Jul 9 23:50:08.939235 kernel: ... max period: 00007fffffffffff Jul 9 23:50:08.939242 kernel: ... fixed-purpose events: 0 Jul 9 23:50:08.939250 kernel: ... event mask: 000000000000003f Jul 9 23:50:08.939258 kernel: signal: max sigframe size: 1776 Jul 9 23:50:08.939266 kernel: rcu: Hierarchical SRCU implementation. Jul 9 23:50:08.939274 kernel: rcu: Max phase no-delay instances is 400. Jul 9 23:50:08.939282 kernel: smp: Bringing up secondary CPUs ... Jul 9 23:50:08.939292 kernel: smpboot: x86: Booting SMP configuration:
Jul 9 23:50:08.939300 kernel: .... node #0, CPUs: #1 #2 #3 Jul 9 23:50:08.939308 kernel: smp: Brought up 1 node, 4 CPUs Jul 9 23:50:08.939316 kernel: smpboot: Max logical packages: 1 Jul 9 23:50:08.939324 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 9 23:50:08.939331 kernel: devtmpfs: initialized Jul 9 23:50:08.939339 kernel: x86/mm: Memory block size: 128MB Jul 9 23:50:08.939347 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 9 23:50:08.939355 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 9 23:50:08.939365 kernel: pinctrl core: initialized pinctrl subsystem Jul 9 23:50:08.939373 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 9 23:50:08.939381 kernel: audit: initializing netlink subsys (disabled) Jul 9 23:50:08.939389 kernel: audit: type=2000 audit(1752105008.356:1): state=initialized audit_enabled=0 res=1 Jul 9 23:50:08.939397 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 9 23:50:08.939405 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 9 23:50:08.939413 kernel: cpuidle: using governor menu Jul 9 23:50:08.939421 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 9 23:50:08.939428 kernel: dca service started, version 1.12.1 Jul 9 23:50:08.939439 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 9 23:50:08.939447 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 9 23:50:08.939455 kernel: PCI: Using configuration type 1 for base access Jul 9 23:50:08.939463 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 9 23:50:08.939471 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 9 23:50:08.939479 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 9 23:50:08.939490 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 9 23:50:08.939504 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 9 23:50:08.939512 kernel: ACPI: Added _OSI(Module Device) Jul 9 23:50:08.939522 kernel: ACPI: Added _OSI(Processor Device) Jul 9 23:50:08.939531 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 9 23:50:08.939539 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 9 23:50:08.939547 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 9 23:50:08.939555 kernel: ACPI: Interpreter enabled Jul 9 23:50:08.939563 kernel: ACPI: PM: (supports S0 S3 S5) Jul 9 23:50:08.939570 kernel: ACPI: Using IOAPIC for interrupt routing Jul 9 23:50:08.939601 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 9 23:50:08.939609 kernel: PCI: Using E820 reservations for host bridge windows Jul 9 23:50:08.939620 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 9 23:50:08.939628 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 9 23:50:08.939871 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 9 23:50:08.940017 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 9 23:50:08.940152 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 9 23:50:08.940163 kernel: PCI host bridge to bus 0000:00 Jul 9 23:50:08.940308 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 9 23:50:08.940440 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 9 23:50:08.940571 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 9 23:50:08.940724 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 9 23:50:08.940846 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 9 23:50:08.940967 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jul 9 23:50:08.941088 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 9 23:50:08.941265 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 9 23:50:08.941419 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jul 9 23:50:08.941561 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jul 9 23:50:08.941714 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jul 9 23:50:08.941850 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jul 9 23:50:08.941983 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 9 23:50:08.942139 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jul 9 23:50:08.942281 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jul 9 23:50:08.942419 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jul 9 23:50:08.942562 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jul 9 23:50:08.942730 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jul 9 23:50:08.942867 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jul 9 23:50:08.943001 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jul 9 23:50:08.943135 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jul 9 23:50:08.943294 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 9 23:50:08.943431 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jul 9 23:50:08.943574 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jul 9 23:50:08.943726 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jul 9 23:50:08.943860 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jul 9 23:50:08.944003 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 9 23:50:08.944137 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 9 23:50:08.944293 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 9 23:50:08.944428 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jul 9 23:50:08.944575 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jul 9 23:50:08.944746 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 9 23:50:08.944881 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jul 9 23:50:08.944892 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 9 23:50:08.944900 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 9 23:50:08.944913 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 9 23:50:08.944922 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 9 23:50:08.944930 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 9 23:50:08.944938 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 9 23:50:08.944946 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 9 23:50:08.944954 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 9 23:50:08.944962 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 9 23:50:08.944970 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 9 23:50:08.944978 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 9 23:50:08.944988 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 9 23:50:08.944996 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 9 23:50:08.945004 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 9 23:50:08.945013 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 9 23:50:08.945021 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 9 23:50:08.945029 kernel: iommu: Default domain type: Translated Jul 9 23:50:08.945037 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 9 23:50:08.945045 kernel: PCI: Using ACPI for IRQ routing Jul 9 23:50:08.945053 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 9 23:50:08.945064 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 9 23:50:08.945072 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jul 9 23:50:08.945207 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 9 23:50:08.945338 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 9 23:50:08.945469 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 9 23:50:08.945480 kernel: vgaarb: loaded Jul 9 23:50:08.945488 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 9 23:50:08.945506 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 9 23:50:08.945518 kernel: clocksource: Switched to clocksource kvm-clock Jul 9 23:50:08.945526 kernel: VFS: Disk quotas dquot_6.6.0 Jul 9 23:50:08.945535 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 9 23:50:08.945543 kernel: pnp: PnP ACPI init Jul 9 23:50:08.945743 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 9 23:50:08.945756 kernel: pnp: PnP ACPI: found 6 devices Jul 9 23:50:08.945764 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 9 23:50:08.945773 kernel: NET: Registered PF_INET protocol family Jul 9 23:50:08.945785 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 9 23:50:08.945793 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 9 23:50:08.945802 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 9 23:50:08.945810 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 9 23:50:08.945818 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 9 23:50:08.945826 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 9 23:50:08.945834 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 9 23:50:08.945842 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 9 23:50:08.945851 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 9 23:50:08.945861 kernel: NET: Registered PF_XDP protocol family Jul 9 23:50:08.945985 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 9 23:50:08.946105 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 9 23:50:08.946225 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 9 23:50:08.946353 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 9 23:50:08.946473 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 9 23:50:08.946620 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 9 23:50:08.946632 kernel: PCI: CLS 0 bytes, default 64 Jul 9 23:50:08.946640 kernel: Initialise system trusted keyrings Jul 9 23:50:08.946653 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 9 23:50:08.946661 kernel: Key type asymmetric registered Jul 9 23:50:08.946669 kernel: Asymmetric key parser 'x509' registered Jul 9 23:50:08.946677 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 9 23:50:08.946685 kernel: io scheduler mq-deadline registered Jul 9 23:50:08.946693 kernel: io scheduler kyber registered Jul 9 23:50:08.946701 kernel: io scheduler bfq registered Jul 9 23:50:08.946709 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 9 23:50:08.946718 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 9 23:50:08.946730 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 9 23:50:08.946738 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 9 23:50:08.946746 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 9 23:50:08.946754 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 9 23:50:08.946762 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 9 23:50:08.946770 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 9 23:50:08.946778 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 9 23:50:08.946930 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 9 23:50:08.946947 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 9 23:50:08.947072 kernel: rtc_cmos 00:04: registered as rtc0 Jul 9 23:50:08.947196 kernel: rtc_cmos 00:04: setting system clock to 2025-07-09T23:50:08 UTC (1752105008) Jul 9 23:50:08.947320 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 9 23:50:08.947331 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 9 23:50:08.947339 kernel: NET: Registered PF_INET6 protocol family Jul 9 23:50:08.947348 kernel: Segment Routing with IPv6 Jul 9 23:50:08.947356 kernel: In-situ OAM (IOAM) with IPv6 Jul 9 23:50:08.947365 kernel: NET: Registered PF_PACKET protocol family Jul 9 23:50:08.947377 kernel: Key type dns_resolver registered Jul 9 23:50:08.947386 kernel: IPI shorthand broadcast: enabled Jul 9 23:50:08.947394 kernel: sched_clock: Marking stable (768003012, 136559911)->(919726456, -15163533) Jul 9 23:50:08.947402 kernel: registered taskstats version 1 Jul 9 23:50:08.947410 kernel: Loading compiled-in X.509 certificates Jul 9 23:50:08.947418 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 50743221a03cbb928e294992219bf2bc20f6f14b' Jul 9 23:50:08.947426 kernel: Key type .fscrypt registered Jul 9 23:50:08.947434 kernel: Key type fscrypt-provisioning registered Jul 9 23:50:08.947442 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 23:50:08.947453 kernel: ima: Allocated hash algorithm: sha1 Jul 9 23:50:08.947461 kernel: ima: No architecture policies found Jul 9 23:50:08.947469 kernel: clk: Disabling unused clocks Jul 9 23:50:08.947477 kernel: Freeing unused kernel image (initmem) memory: 43488K Jul 9 23:50:08.947485 kernel: Write protecting the kernel read-only data: 38912k Jul 9 23:50:08.947494 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Jul 9 23:50:08.947510 kernel: Run /init as init process Jul 9 23:50:08.947518 kernel: with arguments: Jul 9 23:50:08.947530 kernel: /init Jul 9 23:50:08.947538 kernel: with environment: Jul 9 23:50:08.947546 kernel: HOME=/ Jul 9 23:50:08.947554 kernel: TERM=linux Jul 9 23:50:08.947562 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 9 23:50:08.947571 systemd[1]: Successfully made /usr/ read-only. Jul 9 23:50:08.947596 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 23:50:08.947606 systemd[1]: Detected virtualization kvm. Jul 9 23:50:08.947618 systemd[1]: Detected architecture x86-64. Jul 9 23:50:08.947626 systemd[1]: Running in initrd. Jul 9 23:50:08.947635 systemd[1]: No hostname configured, using default hostname. Jul 9 23:50:08.947644 systemd[1]: Hostname set to <localhost>. Jul 9 23:50:08.947652 systemd[1]: Initializing machine ID from VM UUID. Jul 9 23:50:08.947660 systemd[1]: Queued start job for default target initrd.target. Jul 9 23:50:08.947669 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:50:08.947678 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:50:08.947690 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 9 23:50:08.947712 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 23:50:08.947723 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 9 23:50:08.947733 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 9 23:50:08.947743 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 9 23:50:08.947755 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 9 23:50:08.947764 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:50:08.947773 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:50:08.947782 systemd[1]: Reached target paths.target - Path Units. Jul 9 23:50:08.947791 systemd[1]: Reached target slices.target - Slice Units. Jul 9 23:50:08.947800 systemd[1]: Reached target swap.target - Swaps. Jul 9 23:50:08.947808 systemd[1]: Reached target timers.target - Timer Units. Jul 9 23:50:08.947817 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:50:08.947829 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:50:08.947838 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 23:50:08.947847 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 9 23:50:08.947856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:50:08.947865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 23:50:08.947874 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:50:08.947883 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 23:50:08.947892 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 9 23:50:08.947900 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 23:50:08.947912 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 9 23:50:08.947921 systemd[1]: Starting systemd-fsck-usr.service... Jul 9 23:50:08.947930 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 23:50:08.947939 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 23:50:08.947948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:50:08.947957 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 9 23:50:08.947966 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:50:08.947978 systemd[1]: Finished systemd-fsck-usr.service. Jul 9 23:50:08.947987 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 23:50:08.948032 systemd-journald[194]: Collecting audit messages is disabled. Jul 9 23:50:08.948059 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 23:50:08.948071 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 23:50:08.948081 systemd-journald[194]: Journal started Jul 9 23:50:08.948103 systemd-journald[194]: Runtime Journal (/run/log/journal/9b89797e95164eb0b0209abeededb783) is 6M, max 48.4M, 42.3M free. Jul 9 23:50:08.951756 systemd-modules-load[195]: Inserted module 'overlay' Jul 9 23:50:08.982731 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 23:50:08.982294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:50:08.994980 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 9 23:50:08.995002 kernel: Bridge firewalling registered Jul 9 23:50:08.988596 systemd-modules-load[195]: Inserted module 'br_netfilter' Jul 9 23:50:08.992988 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 23:50:09.009765 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 23:50:09.011303 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:50:09.012927 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 23:50:09.013540 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:50:09.028756 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:50:09.033402 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:50:09.043844 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 9 23:50:09.044478 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:50:09.047506 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 9 23:50:09.065354 dracut-cmdline[234]: dracut-dracut-053 Jul 9 23:50:09.068654 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c257b65f06e0ad68d969d5b3e057f031663dc29a4487d91a77595a40c4dc82d6 Jul 9 23:50:09.091511 systemd-resolved[229]: Positive Trust Anchors: Jul 9 23:50:09.091524 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 23:50:09.091555 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 23:50:09.094468 systemd-resolved[229]: Defaulting to hostname 'linux'. Jul 9 23:50:09.095776 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 23:50:09.100959 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:50:09.175625 kernel: SCSI subsystem initialized Jul 9 23:50:09.184616 kernel: Loading iSCSI transport class v2.0-870. Jul 9 23:50:09.195645 kernel: iscsi: registered transport (tcp) Jul 9 23:50:09.220613 kernel: iscsi: registered transport (qla4xxx) Jul 9 23:50:09.220671 kernel: QLogic iSCSI HBA Driver Jul 9 23:50:09.272820 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 9 23:50:09.281804 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 9 23:50:09.310392 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 9 23:50:09.310474 kernel: device-mapper: uevent: version 1.0.3 Jul 9 23:50:09.310519 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 9 23:50:09.355628 kernel: raid6: avx2x4 gen() 28495 MB/s Jul 9 23:50:09.372635 kernel: raid6: avx2x2 gen() 30149 MB/s Jul 9 23:50:09.389709 kernel: raid6: avx2x1 gen() 25217 MB/s Jul 9 23:50:09.389798 kernel: raid6: using algorithm avx2x2 gen() 30149 MB/s Jul 9 23:50:09.407732 kernel: raid6: .... xor() 19170 MB/s, rmw enabled Jul 9 23:50:09.407819 kernel: raid6: using avx2x2 recovery algorithm Jul 9 23:50:09.428631 kernel: xor: automatically using best checksumming function avx Jul 9 23:50:09.582632 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 9 23:50:09.599465 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:50:09.611801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:50:09.627914 systemd-udevd[416]: Using default interface naming scheme 'v255'. Jul 9 23:50:09.633532 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 9 23:50:09.644802 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 9 23:50:09.660666 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Jul 9 23:50:09.699873 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:50:09.713799 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 23:50:09.783800 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:50:09.792763 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 9 23:50:09.809399 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 9 23:50:09.812416 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:50:09.813859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:50:09.817315 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:50:09.827790 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 9 23:50:09.836937 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 9 23:50:09.838600 kernel: cryptd: max_cpu_qlen set to 1000 Jul 9 23:50:09.845052 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 9 23:50:09.845510 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:50:09.854057 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 9 23:50:09.854107 kernel: GPT:9289727 != 19775487 Jul 9 23:50:09.854120 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 9 23:50:09.854131 kernel: GPT:9289727 != 19775487 Jul 9 23:50:09.854988 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 9 23:50:09.855021 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:50:09.860711 kernel: libata version 3.00 loaded. Jul 9 23:50:09.869639 kernel: AVX2 version of gcm_enc/dec engaged. Jul 9 23:50:09.871172 kernel: ahci 0000:00:1f.2: version 3.0 Jul 9 23:50:09.871438 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 9 23:50:09.871453 kernel: AES CTR mode by8 optimization enabled Jul 9 23:50:09.870563 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 23:50:09.875974 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 9 23:50:09.876197 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 9 23:50:09.870737 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:50:09.876066 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 23:50:09.884493 kernel: scsi host0: ahci Jul 9 23:50:09.884730 kernel: scsi host1: ahci Jul 9 23:50:09.884908 kernel: scsi host2: ahci Jul 9 23:50:09.877416 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:50:09.887607 kernel: scsi host3: ahci Jul 9 23:50:09.888472 kernel: scsi host4: ahci Jul 9 23:50:09.879684 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:50:09.885059 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:50:09.908943 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (464) Jul 9 23:50:09.908983 kernel: scsi host5: ahci Jul 9 23:50:09.910599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 9 23:50:09.916791 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 9 23:50:09.916815 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 9 23:50:09.916825 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 9 23:50:09.916836 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 9 23:50:09.916847 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 9 23:50:09.916856 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 9 23:50:09.920610 kernel: BTRFS: device fsid 2ea7ed46-2399-4750-93a6-9faa0c83416c devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (475) Jul 9 23:50:09.922734 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:50:09.949656 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 9 23:50:09.979868 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 9 23:50:09.980391 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:50:09.997886 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 23:50:10.005121 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 9 23:50:10.005379 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 9 23:50:10.025813 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 9 23:50:10.026989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 23:50:10.040211 disk-uuid[557]: Primary Header is updated. Jul 9 23:50:10.040211 disk-uuid[557]: Secondary Entries is updated. Jul 9 23:50:10.040211 disk-uuid[557]: Secondary Header is updated. Jul 9 23:50:10.043622 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:50:10.048608 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:50:10.052215 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:50:10.218615 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 9 23:50:10.218688 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 9 23:50:10.220026 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 9 23:50:10.220041 kernel: ata3.00: applying bridge limits Jul 9 23:50:10.220806 kernel: ata3.00: configured for UDMA/100 Jul 9 23:50:10.221616 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 9 23:50:10.223615 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 9 23:50:10.227612 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 9 23:50:10.227676 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 9 23:50:10.228614 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 9 23:50:10.262081 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 9 23:50:10.262318 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 9 23:50:10.276663 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 9 23:50:11.050375 disk-uuid[560]: The operation has completed successfully. Jul 9 23:50:11.051828 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:50:11.086426 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jul 9 23:50:11.086629 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 23:50:11.143827 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 23:50:11.149386 sh[594]: Success Jul 9 23:50:11.163600 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 9 23:50:11.202955 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 23:50:11.218237 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 23:50:11.221104 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 9 23:50:11.234287 kernel: BTRFS info (device dm-0): first mount of filesystem 2ea7ed46-2399-4750-93a6-9faa0c83416c Jul 9 23:50:11.234353 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 9 23:50:11.234384 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 9 23:50:11.235318 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 9 23:50:11.236052 kernel: BTRFS info (device dm-0): using free space tree Jul 9 23:50:11.241630 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 23:50:11.242796 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 9 23:50:11.253764 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 9 23:50:11.255943 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 9 23:50:11.274842 kernel: BTRFS info (device vda6): first mount of filesystem 8e2332fd-cd78-45f6-aab3-8af291a1450c Jul 9 23:50:11.274914 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 9 23:50:11.274929 kernel: BTRFS info (device vda6): using free space tree Jul 9 23:50:11.278648 kernel: BTRFS info (device vda6): auto enabling async discard Jul 9 23:50:11.284626 kernel: BTRFS info (device vda6): last unmount of filesystem 8e2332fd-cd78-45f6-aab3-8af291a1450c Jul 9 23:50:11.292283 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 23:50:11.298798 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 23:50:11.456864 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 23:50:11.461504 ignition[682]: Ignition 2.20.0 Jul 9 23:50:11.461516 ignition[682]: Stage: fetch-offline Jul 9 23:50:11.465209 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 9 23:50:11.461554 ignition[682]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:50:11.461565 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:50:11.461703 ignition[682]: parsed url from cmdline: "" Jul 9 23:50:11.461709 ignition[682]: no config URL provided Jul 9 23:50:11.461716 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 23:50:11.461729 ignition[682]: no config at "/usr/lib/ignition/user.ign" Jul 9 23:50:11.461763 ignition[682]: op(1): [started] loading QEMU firmware config module Jul 9 23:50:11.461770 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 9 23:50:11.471988 ignition[682]: op(1): [finished] loading QEMU firmware config module Jul 9 23:50:11.506374 systemd-networkd[778]: lo: Link UP Jul 9 23:50:11.506386 systemd-networkd[778]: lo: Gained carrier Jul 9 23:50:11.509826 systemd-networkd[778]: Enumeration completed Jul 9 23:50:11.511007 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:50:11.511015 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:50:11.514814 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 23:50:11.519913 systemd[1]: Reached target network.target - Network. Jul 9 23:50:11.522339 systemd-networkd[778]: eth0: Link UP Jul 9 23:50:11.522347 systemd-networkd[778]: eth0: Gained carrier Jul 9 23:50:11.523717 ignition[682]: parsing config with SHA512: 1899753e58cf0f8bbd01ae2334bab089803ee19b434cea7643c7c29fd2b696269b4e55ea6c866659ee8cdb6bd4338ad260bb0966cbdc1ff882d272b47d6e3901 Jul 9 23:50:11.522358 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:50:11.529024 unknown[682]: fetched base config from "system" Jul 9 23:50:11.529053 unknown[682]: fetched user config from "qemu" Jul 9 23:50:11.544871 ignition[682]: fetch-offline: fetch-offline passed Jul 9 23:50:11.545035 ignition[682]: Ignition finished successfully Jul 9 23:50:11.548956 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:50:11.550258 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 23:50:11.550475 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 9 23:50:11.556804 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 9 23:50:11.581003 ignition[784]: Ignition 2.20.0 Jul 9 23:50:11.581015 ignition[784]: Stage: kargs Jul 9 23:50:11.581187 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:50:11.581198 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:50:11.584777 ignition[784]: kargs: kargs passed Jul 9 23:50:11.584831 ignition[784]: Ignition finished successfully Jul 9 23:50:11.589124 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 9 23:50:11.604839 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 9 23:50:11.622922 ignition[793]: Ignition 2.20.0 Jul 9 23:50:11.622935 ignition[793]: Stage: disks Jul 9 23:50:11.623177 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:50:11.623193 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:50:11.624379 ignition[793]: disks: disks passed Jul 9 23:50:11.624549 ignition[793]: Ignition finished successfully Jul 9 23:50:11.629188 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 23:50:11.631885 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 23:50:11.634017 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 9 23:50:11.636290 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:50:11.638184 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 23:50:11.640093 systemd[1]: Reached target basic.target - Basic System. Jul 9 23:50:11.652741 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 9 23:50:11.665256 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 9 23:50:11.672264 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 23:50:11.674047 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 9 23:50:11.765610 kernel: EXT4-fs (vda9): mounted filesystem 147af866-f15a-4a2f-aea7-d9959c235d2a r/w with ordered data mode. Quota mode: none. Jul 9 23:50:11.766341 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 9 23:50:11.768529 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 23:50:11.784678 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:50:11.787259 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 9 23:50:11.790067 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 9 23:50:11.790127 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 23:50:11.799725 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (811) Jul 9 23:50:11.799752 kernel: BTRFS info (device vda6): first mount of filesystem 8e2332fd-cd78-45f6-aab3-8af291a1450c Jul 9 23:50:11.799767 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 9 23:50:11.799780 kernel: BTRFS info (device vda6): using free space tree Jul 9 23:50:11.790157 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:50:11.802249 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 23:50:11.803921 kernel: BTRFS info (device vda6): auto enabling async discard Jul 9 23:50:11.804989 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 23:50:11.818731 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 9 23:50:11.849701 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 23:50:11.855020 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Jul 9 23:50:11.859882 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 23:50:11.863830 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 23:50:11.949503 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jul 9 23:50:11.963700 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 9 23:50:11.966061 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 9 23:50:11.972611 kernel: BTRFS info (device vda6): last unmount of filesystem 8e2332fd-cd78-45f6-aab3-8af291a1450c Jul 9 23:50:11.994718 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 9 23:50:12.001286 ignition[923]: INFO : Ignition 2.20.0 Jul 9 23:50:12.001286 ignition[923]: INFO : Stage: mount Jul 9 23:50:12.003299 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:50:12.003299 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:50:12.003299 ignition[923]: INFO : mount: mount passed Jul 9 23:50:12.003299 ignition[923]: INFO : Ignition finished successfully Jul 9 23:50:12.009123 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 23:50:12.026752 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 9 23:50:12.234158 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 23:50:12.242888 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:50:12.252670 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (938) Jul 9 23:50:12.252705 kernel: BTRFS info (device vda6): first mount of filesystem 8e2332fd-cd78-45f6-aab3-8af291a1450c Jul 9 23:50:12.252717 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 9 23:50:12.254629 kernel: BTRFS info (device vda6): using free space tree Jul 9 23:50:12.257621 kernel: BTRFS info (device vda6): auto enabling async discard Jul 9 23:50:12.259399 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 23:50:12.297432 ignition[955]: INFO : Ignition 2.20.0 Jul 9 23:50:12.297432 ignition[955]: INFO : Stage: files Jul 9 23:50:12.299711 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:50:12.299711 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:50:12.299711 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jul 9 23:50:12.303942 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 23:50:12.303942 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 23:50:12.303942 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 23:50:12.303942 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 23:50:12.310446 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 23:50:12.310446 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 9 23:50:12.310446 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 9 23:50:12.304304 unknown[955]: wrote ssh authorized keys file for user: core Jul 9 23:50:12.360898 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 9 23:50:12.756095 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 9 23:50:12.756095 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 23:50:12.760788 
Jul 9 23:50:13.259831 systemd-networkd[778]: eth0: Gained IPv6LL
Jul 9 23:50:13.326925 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 9 23:50:13.810872 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 9 23:50:13.813060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 9 23:50:14.557903 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 9 23:50:17.058318 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 9 23:50:17.058318 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 9 23:50:17.062661 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 23:50:17.062661 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
"/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 23:50:17.062661 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 9 23:50:17.062661 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 9 23:50:17.062661 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 23:50:17.062661 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 23:50:17.062661 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 9 23:50:17.062661 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 9 23:50:17.095231 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 23:50:17.099386 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 23:50:17.101136 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 9 23:50:17.101136 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 9 23:50:17.103844 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 9 23:50:17.105259 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:50:17.106988 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:50:17.108612 ignition[955]: INFO : files: files passed Jul 9 23:50:17.109360 ignition[955]: INFO : Ignition finished successfully Jul 9 23:50:17.112556 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 9 23:50:17.124693 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 9 23:50:17.126635 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 9 23:50:17.129017 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 9 23:50:17.129154 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 9 23:50:17.136709 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Jul 9 23:50:17.139720 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:50:17.139720 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:50:17.144146 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:50:17.142374 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:50:17.144381 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 9 23:50:17.154715 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 9 23:50:17.178944 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 9 23:50:17.179074 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jul 9 23:50:17.181246 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 9 23:50:17.183244 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 9 23:50:17.185201 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 9 23:50:17.186078 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 9 23:50:17.203264 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 23:50:17.212758 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 9 23:50:17.221802 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:50:17.223165 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 23:50:17.225561 systemd[1]: Stopped target timers.target - Timer Units.
Jul 9 23:50:17.227737 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 9 23:50:17.227897 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 23:50:17.230199 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 9 23:50:17.232045 systemd[1]: Stopped target basic.target - Basic System.
Jul 9 23:50:17.234720 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 9 23:50:17.236895 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 23:50:17.239068 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 9 23:50:17.241408 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 9 23:50:17.243690 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 23:50:17.246138 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 9 23:50:17.248303 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 9 23:50:17.250436 systemd[1]: Stopped target swap.target - Swaps.
Jul 9 23:50:17.252117 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 9 23:50:17.252233 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 23:50:17.254250 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:50:17.255786 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:50:17.257751 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 9 23:50:17.257884 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:50:17.259855 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 9 23:50:17.259974 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 9 23:50:17.262045 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 9 23:50:17.262156 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 23:50:17.264060 systemd[1]: Stopped target paths.target - Path Units.
Jul 9 23:50:17.265701 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 9 23:50:17.265847 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:50:17.268244 systemd[1]: Stopped target slices.target - Slice Units.
Jul 9 23:50:17.269968 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 9 23:50:17.271786 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 9 23:50:17.271904 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 23:50:17.273675 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 9 23:50:17.273783 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 23:50:17.275058 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 9 23:50:17.275194 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 23:50:17.275521 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 9 23:50:17.275671 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 9 23:50:17.290735 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 9 23:50:17.292413 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 9 23:50:17.293370 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 9 23:50:17.293542 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:50:17.295559 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 9 23:50:17.295784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 23:50:17.304602 ignition[1009]: INFO : Ignition 2.20.0
Jul 9 23:50:17.304602 ignition[1009]: INFO : Stage: umount
Jul 9 23:50:17.304602 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 23:50:17.304602 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 23:50:17.304663 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 9 23:50:17.305741 ignition[1009]: INFO : umount: umount passed
Jul 9 23:50:17.305741 ignition[1009]: INFO : Ignition finished successfully
Jul 9 23:50:17.304783 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 9 23:50:17.313340 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 9 23:50:17.314512 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 9 23:50:17.317718 systemd[1]: Stopped target network.target - Network.
Jul 9 23:50:17.319739 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 9 23:50:17.320767 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 9 23:50:17.323101 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 9 23:50:17.323161 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 9 23:50:17.326706 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 9 23:50:17.326766 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 9 23:50:17.329759 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 9 23:50:17.330807 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 9 23:50:17.333118 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 9 23:50:17.335350 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 9 23:50:17.338652 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 9 23:50:17.340336 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 9 23:50:17.341515 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 9 23:50:17.343876 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 9 23:50:17.345087 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 9 23:50:17.349844 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 9 23:50:17.351445 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 9 23:50:17.351575 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 9 23:50:17.355857 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 9 23:50:17.357408 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 9 23:50:17.357469 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:50:17.359710 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 9 23:50:17.359766 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 9 23:50:17.371694 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 9 23:50:17.372783 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 9 23:50:17.372857 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 23:50:17.375378 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 9 23:50:17.375445 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:50:17.378160 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 9 23:50:17.378226 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:50:17.380793 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 9 23:50:17.380860 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:50:17.383198 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:50:17.386597 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 9 23:50:17.386686 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:50:17.398867 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 9 23:50:17.399094 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:50:17.401729 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 9 23:50:17.401862 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 9 23:50:17.405339 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 9 23:50:17.405414 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:50:17.407154 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 9 23:50:17.407206 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:50:17.409298 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 9 23:50:17.409369 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 23:50:17.411971 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 9 23:50:17.412037 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 9 23:50:17.413736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 9 23:50:17.413811 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 23:50:17.426764 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 9 23:50:17.427942 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 9 23:50:17.428014 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:50:17.430621 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 9 23:50:17.430690 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:50:17.433107 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 9 23:50:17.433173 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:50:17.435767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 23:50:17.435834 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:50:17.439232 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 9 23:50:17.439319 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:50:17.439857 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 9 23:50:17.439996 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 9 23:50:17.442748 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 9 23:50:17.458721 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 9 23:50:17.465885 systemd[1]: Switching root.
Jul 9 23:50:17.501121 systemd-journald[194]: Journal stopped
Jul 9 23:50:18.920686 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jul 9 23:50:18.920797 kernel: SELinux: policy capability network_peer_controls=1
Jul 9 23:50:18.920838 kernel: SELinux: policy capability open_perms=1
Jul 9 23:50:18.920874 kernel: SELinux: policy capability extended_socket_class=1
Jul 9 23:50:18.920892 kernel: SELinux: policy capability always_check_network=0
Jul 9 23:50:18.920908 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 9 23:50:18.920930 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 9 23:50:18.920974 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 9 23:50:18.920997 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 9 23:50:18.921017 kernel: audit: type=1403 audit(1752105018.058:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 9 23:50:18.921042 systemd[1]: Successfully loaded SELinux policy in 50.107ms.
Jul 9 23:50:18.921085 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.370ms.
Jul 9 23:50:18.921120 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 23:50:18.921148 systemd[1]: Detected virtualization kvm.
Jul 9 23:50:18.921173 systemd[1]: Detected architecture x86-64.
Jul 9 23:50:18.921192 systemd[1]: Detected first boot.
Jul 9 23:50:18.921214 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 23:50:18.921257 zram_generator::config[1057]: No configuration found.
Jul 9 23:50:18.921297 kernel: Guest personality initialized and is inactive
Jul 9 23:50:18.921319 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 9 23:50:18.921335 kernel: Initialized host personality
Jul 9 23:50:18.921358 kernel: NET: Registered PF_VSOCK protocol family
Jul 9 23:50:18.921379 systemd[1]: Populated /etc with preset unit settings.
Jul 9 23:50:18.921401 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
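The systemd banner above encodes compile-time options as a single +/- feature string (+PAM +AUDIT +SELINUX -APPARMOR ...). A throwaway sketch, using only a truncated portion of that string, that turns it into a lookup table:

features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN")
enabled = {tok[1:]: tok.startswith("+") for tok in features.split()}
print(enabled["SELINUX"])   # True  -- consistent with the policy load above
print(enabled["APPARMOR"])  # False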
Jul 9 23:50:18.921423 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 9 23:50:18.921458 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 9 23:50:18.921478 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 9 23:50:18.921495 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 9 23:50:18.921513 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 9 23:50:18.921571 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 9 23:50:18.921632 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 9 23:50:18.921651 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 9 23:50:18.921676 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 9 23:50:18.921729 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 9 23:50:18.921769 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 9 23:50:18.921788 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:50:18.921805 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:50:18.921823 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 9 23:50:18.921859 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 9 23:50:18.921914 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 9 23:50:18.921934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 23:50:18.921972 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 9 23:50:18.921995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:50:18.922048 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 9 23:50:18.922080 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 9 23:50:18.922101 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 9 23:50:18.922119 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 9 23:50:18.922145 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 23:50:18.922217 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 23:50:18.922283 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 23:50:18.922305 systemd[1]: Reached target swap.target - Swaps.
Jul 9 23:50:18.922322 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 9 23:50:18.922338 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 9 23:50:18.922356 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 9 23:50:18.922373 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:50:18.922392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:50:18.922409 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:50:18.922442 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 9 23:50:18.922483 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 9 23:50:18.922505 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 9 23:50:18.922522 systemd[1]: Mounting media.mount - External Media Directory...
Jul 9 23:50:18.922551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 23:50:18.922837 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 9 23:50:18.922997 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 9 23:50:18.923023 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 9 23:50:18.924454 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 9 23:50:18.924482 systemd[1]: Reached target machines.target - Containers.
Jul 9 23:50:18.924506 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 9 23:50:18.924524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:50:18.924542 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 23:50:18.924560 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 9 23:50:18.924592 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:50:18.924625 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 23:50:18.924657 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:50:18.924677 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 9 23:50:18.924716 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:50:18.924738 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 9 23:50:18.924755 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 9 23:50:18.924772 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 9 23:50:18.924798 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 9 23:50:18.924819 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 9 23:50:18.924837 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:50:18.924855 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 23:50:18.924882 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 23:50:18.924922 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 23:50:18.924955 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 9 23:50:18.924984 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 9 23:50:18.925006 kernel: fuse: init (API version 7.39)
Jul 9 23:50:18.925092 systemd-journald[1121]: Collecting audit messages is disabled.
Jul 9 23:50:18.925136 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 23:50:18.925180 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 9 23:50:18.925233 systemd[1]: Stopped verity-setup.service.
Jul 9 23:50:18.925264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 23:50:18.925282 systemd-journald[1121]: Journal started
Jul 9 23:50:18.925335 systemd-journald[1121]: Runtime Journal (/run/log/journal/9b89797e95164eb0b0209abeededb783) is 6M, max 48.4M, 42.3M free.
Jul 9 23:50:18.649595 systemd[1]: Queued start job for default target multi-user.target.
Jul 9 23:50:18.667771 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 9 23:50:18.668252 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 9 23:50:18.951434 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 23:50:18.937435 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 9 23:50:18.940996 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 9 23:50:18.942506 systemd[1]: Mounted media.mount - External Media Directory.
Jul 9 23:50:18.943829 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 9 23:50:18.945294 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 9 23:50:18.946831 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 9 23:50:18.949959 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:50:18.952285 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 9 23:50:18.952643 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 9 23:50:18.955451 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:50:18.955732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:50:18.958497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:50:18.958753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:50:18.960407 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:50:19.453614 kernel: loop: module loaded
Jul 9 23:50:19.454174 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 9 23:50:19.465835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:50:19.470875 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 23:50:19.474485 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 9 23:50:19.474931 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 9 23:50:19.481256 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:50:19.481537 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:50:19.483133 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:50:19.484992 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 9 23:50:19.491209 kernel: ACPI: bus type drm_connector registered
Jul 9 23:50:19.542598 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 23:50:19.542868 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 23:50:19.544315 systemd-tmpfiles[1144]: ACLs are not supported, ignoring.
Jul 9 23:50:19.544337 systemd-tmpfiles[1144]: ACLs are not supported, ignoring.
Jul 9 23:50:19.548015 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 9 23:50:19.551964 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 9 23:50:19.553574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:50:19.562052 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 9 23:50:19.563659 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:50:19.566383 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 23:50:19.575689 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 9 23:50:19.576821 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 9 23:50:19.576866 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 23:50:19.579179 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 9 23:50:19.581639 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 9 23:50:19.583836 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 9 23:50:19.584948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:50:19.586203 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 9 23:50:19.588404 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 9 23:50:19.589574 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:50:19.591249 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 9 23:50:19.592483 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:50:19.596336 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 9 23:50:19.601182 systemd-journald[1121]: Time spent on flushing to /var/log/journal/9b89797e95164eb0b0209abeededb783 is 16.339ms for 975 entries.
Jul 9 23:50:19.601182 systemd-journald[1121]: System Journal (/var/log/journal/9b89797e95164eb0b0209abeededb783) is 8M, max 195.6M, 187.6M free.
Jul 9 23:50:19.635960 systemd-journald[1121]: Received client request to flush runtime journal.
Jul 9 23:50:19.636010 kernel: loop0: detected capacity change from 0 to 221472
Jul 9 23:50:19.602059 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 9 23:50:19.604739 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:50:19.608156 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 9 23:50:19.610907 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 9 23:50:19.614366 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 9 23:50:19.618920 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 9 23:50:19.632837 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 9 23:50:19.636462 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 9 23:50:19.642385 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
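journald's flush report above gives 16.339 ms for 975 entries when the runtime journal is moved to /var/log/journal. Back-of-the-envelope arithmetic (illustrative only):

# Figures from the systemd-journald flush report above.
flush_ms, entry_count = 16.339, 975
print(f"{flush_ms / entry_count * 1000:.1f} us per entry")  # ~16.8 us/entry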
Jul 9 23:50:19.656364 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 9 23:50:19.656942 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 9 23:50:19.661950 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 9 23:50:19.665396 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 9 23:50:19.676776 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 23:50:19.679140 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 9 23:50:19.692679 kernel: loop1: detected capacity change from 0 to 138176
Jul 9 23:50:19.709160 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Jul 9 23:50:19.709183 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Jul 9 23:50:19.715793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:50:19.735865 kernel: loop2: detected capacity change from 0 to 147912
Jul 9 23:50:19.845615 kernel: loop3: detected capacity change from 0 to 221472
Jul 9 23:50:19.859860 kernel: loop4: detected capacity change from 0 to 138176
Jul 9 23:50:19.872607 kernel: loop5: detected capacity change from 0 to 147912
Jul 9 23:50:19.896634 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 9 23:50:19.897366 (sd-merge)[1204]: Merged extensions into '/usr'.
Jul 9 23:50:19.901608 systemd[1]: Reload requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 9 23:50:19.901635 systemd[1]: Reloading...
Jul 9 23:50:20.013307 zram_generator::config[1235]: No configuration found.
Jul 9 23:50:20.046420 ldconfig[1180]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 9 23:50:20.189211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:50:20.254535 systemd[1]: Reloading finished in 352 ms.
Jul 9 23:50:20.280567 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 9 23:50:20.282302 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
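The loop-device messages above show three distinct capacities (221472, 138176, 147912 sectors), each appearing twice (loop0/loop3, loop1/loop4, loop2/loop5), which lines up with the three sysext images named by (sd-merge): containerd-flatcar, docker-flatcar, kubernetes. A sketch that groups such lines by size; reading the duplicates as "two attachments per image" is an inference from this log, not documented behavior:

import re
from collections import defaultdict

lines = [
    "loop0: detected capacity change from 0 to 221472",
    "loop1: detected capacity change from 0 to 138176",
    "loop2: detected capacity change from 0 to 147912",
    "loop3: detected capacity change from 0 to 221472",
    "loop4: detected capacity change from 0 to 138176",
    "loop5: detected capacity change from 0 to 147912",
]
by_size = defaultdict(list)
for line in lines:
    dev, size = re.match(r"(loop\d+): detected capacity change from 0 to (\d+)", line).groups()
    by_size[int(size)].append(dev)
for size, devs in sorted(by_size.items()):
    print(size, devs)  # three sizes, two loop devices each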
Jul 9 23:50:20.299467 systemd[1]: Starting ensure-sysext.service...
Jul 9 23:50:20.302642 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 23:50:20.314142 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)...
Jul 9 23:50:20.314159 systemd[1]: Reloading...
Jul 9 23:50:20.340657 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 9 23:50:20.341144 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 9 23:50:20.342375 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 9 23:50:20.342898 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Jul 9 23:50:20.343075 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Jul 9 23:50:20.350763 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 23:50:20.350973 systemd-tmpfiles[1270]: Skipping /boot
Jul 9 23:50:20.379497 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 23:50:20.379695 systemd-tmpfiles[1270]: Skipping /boot
Jul 9 23:50:20.391610 zram_generator::config[1302]: No configuration found.
Jul 9 23:50:20.513607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:50:20.600461 systemd[1]: Reloading finished in 285 ms.
Jul 9 23:50:20.617254 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 9 23:50:20.636374 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:50:20.657958 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 23:50:20.660666 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 9 23:50:20.663241 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 9 23:50:20.668153 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:50:20.672153 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:50:20.678934 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 9 23:50:20.683819 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 23:50:20.684009 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:50:20.689176 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:50:20.694678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:50:20.700003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:50:20.701426 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:50:20.701720 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:50:20.704865 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 9 23:50:20.705959 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 23:50:20.707512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:50:20.707779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:50:20.709628 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:50:20.709844 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:50:20.712035 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 9 23:50:20.714152 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:50:20.714409 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:50:20.725574 systemd-udevd[1344]: Using default interface naming scheme 'v255'.
Jul 9 23:50:20.726279 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 23:50:20.726514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:50:20.734977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:50:20.739545 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:50:20.744107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:50:20.745326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:50:20.745448 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:50:20.748786 augenrules[1375]: No rules
Jul 9 23:50:20.748765 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 9 23:50:20.750534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 23:50:20.752097 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:50:20.752658 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:50:20.756170 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 9 23:50:20.758368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:50:20.758702 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:50:20.760781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:50:20.761193 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:50:20.763984 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:50:20.764275 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:50:20.768968 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 9 23:50:20.771728 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:50:20.773593 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 9 23:50:20.787084 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 9 23:50:20.809914 systemd[1]: Finished ensure-sysext.service.
Jul 9 23:50:20.817775 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 23:50:20.826816 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 23:50:20.828595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:50:20.832757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:50:20.837744 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 23:50:20.840795 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:50:20.847986 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:50:20.849315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:50:20.849383 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:50:20.851494 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 23:50:20.854597 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 9 23:50:20.856751 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 9 23:50:20.856797 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 23:50:20.857476 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:50:20.857738 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:50:20.911070 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 9 23:50:20.912548 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:50:20.912902 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:50:20.914750 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 23:50:20.914991 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 23:50:20.923619 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1394)
Jul 9 23:50:20.928689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:50:20.929699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:50:20.930868 augenrules[1414]: /sbin/augenrules: No change
Jul 9 23:50:20.942660 systemd-resolved[1343]: Positive Trust Anchors:
Jul 9 23:50:20.942678 systemd-resolved[1343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 23:50:20.942711 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 23:50:20.943841 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:50:20.943919 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:50:20.964081 augenrules[1445]: No rules
Jul 9 23:50:20.948735 systemd-resolved[1343]: Defaulting to hostname 'linux'.
Jul 9 23:50:20.965005 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 23:50:20.966632 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:50:20.966930 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:50:20.976505 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
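resolved's negative trust anchors above list zones (home.arpa, the RFC 1918 reverse zones, local, test, ...) for which DNSSEC validation is disabled. A simplified sketch of suffix matching against a subset of that list; real resolved handles more cases, and the reverse-zone entries are omitted here:

negative = {"home.arpa", "corp", "home", "internal", "intranet",
            "lan", "local", "private", "test"}

def under_negative_anchor(name: str) -> bool:
    labels = name.rstrip(".").lower().split(".")
    # A name matches if any suffix of it is a listed anchor.
    return any(".".join(labels[i:]) in negative for i in range(len(labels)))

print(under_negative_anchor("printer.lan"))             # True  -> no DNSSEC
print(under_negative_anchor("extensions.flatcar.org"))  # False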
Jul 9 23:50:21.013800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 23:50:21.025046 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 9 23:50:21.044182 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 9 23:50:21.051609 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 9 23:50:21.070616 kernel: ACPI: button: Power Button [PWRF]
Jul 9 23:50:21.073170 systemd-networkd[1428]: lo: Link UP
Jul 9 23:50:21.073201 systemd-networkd[1428]: lo: Gained carrier
Jul 9 23:50:21.075281 systemd-networkd[1428]: Enumeration completed
Jul 9 23:50:21.075429 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 23:50:21.076860 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:50:21.076874 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 23:50:21.076922 systemd[1]: Reached target network.target - Network.
Jul 9 23:50:21.078002 systemd-networkd[1428]: eth0: Link UP
Jul 9 23:50:21.078015 systemd-networkd[1428]: eth0: Gained carrier
Jul 9 23:50:21.078029 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:50:21.085605 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 9 23:50:21.086930 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 9 23:50:21.090671 systemd-networkd[1428]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 9 23:50:21.090815 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 9 23:50:21.098831 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 9 23:50:21.100612 systemd[1]: Reached target time-set.target - System Time Set.
Jul 9 23:50:21.745220 systemd-timesyncd[1429]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 9 23:50:21.745311 systemd-timesyncd[1429]: Initial clock synchronization to Wed 2025-07-09 23:50:21.745054 UTC.
Jul 9 23:50:21.745377 systemd-resolved[1343]: Clock change detected. Flushing caches.
Jul 9 23:50:21.780052 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 9 23:50:21.798221 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 9 23:50:21.824483 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 9 23:50:21.824837 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 9 23:50:21.836844 kernel: mousedev: PS/2 mouse device common for all mice
Jul 9 23:50:21.845469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
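The journal timestamps around the timesyncd entries jump from 23:50:21.100612 to 23:50:21.745220 when the clock is first synchronized against 10.0.0.1:123 (hence resolved's "Clock change detected"). From the log alone one can only bound the step; a quick check:

from datetime import datetime

before = datetime.strptime("23:50:21.100612", "%H:%M:%S.%f")
after = datetime.strptime("23:50:21.745220", "%H:%M:%S.%f")
# Upper bound on the forward step (the real step is smaller by whatever
# wall time actually elapsed between the two entries).
print(f"{(after - before).total_seconds():.3f}s")  # 0.645s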
Jul 9 23:50:21.871131 kernel: kvm_amd: TSC scaling supported
Jul 9 23:50:21.871257 kernel: kvm_amd: Nested Virtualization enabled
Jul 9 23:50:21.871283 kernel: kvm_amd: Nested Paging enabled
Jul 9 23:50:21.872104 kernel: kvm_amd: LBR virtualization supported
Jul 9 23:50:21.872126 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 9 23:50:21.873245 kernel: kvm_amd: Virtual GIF supported
Jul 9 23:50:21.894931 kernel: EDAC MC: Ver: 3.0.0
Jul 9 23:50:21.940601 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 9 23:50:21.956140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:50:21.967988 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 9 23:50:21.976015 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 9 23:50:22.023089 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 9 23:50:22.024942 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:50:22.026303 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 23:50:22.027619 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 9 23:50:22.029077 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 9 23:50:22.030990 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 9 23:50:22.032607 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 9 23:50:22.033910 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 9 23:50:22.035281 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 9 23:50:22.035324 systemd[1]: Reached target paths.target - Path Units.
Jul 9 23:50:22.036228 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 23:50:22.038306 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 9 23:50:22.042173 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 9 23:50:22.047261 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 9 23:50:22.048765 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 9 23:50:22.050104 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 9 23:50:22.054784 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 9 23:50:22.056595 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 9 23:50:22.059756 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 9 23:50:22.061580 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 9 23:50:22.062758 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 23:50:22.063763 systemd[1]: Reached target basic.target - Basic System.
Jul 9 23:50:22.064745 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 9 23:50:22.064770 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 9 23:50:22.065987 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 9 23:50:22.068366 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 9 23:50:22.071823 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 9 23:50:22.072234 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 9 23:50:22.078013 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 9 23:50:22.079337 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 9 23:50:22.082792 jq[1482]: false
Jul 9 23:50:22.084014 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 9 23:50:22.091760 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 9 23:50:22.095470 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 9 23:50:22.099627 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 9 23:50:22.109218 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 9 23:50:22.110764 extend-filesystems[1483]: Found loop3
Jul 9 23:50:22.110764 extend-filesystems[1483]: Found loop4
Jul 9 23:50:22.110764 extend-filesystems[1483]: Found loop5
Jul 9 23:50:22.110764 extend-filesystems[1483]: Found sr0
Jul 9 23:50:22.110764 extend-filesystems[1483]: Found vda
Jul 9 23:50:22.110764 extend-filesystems[1483]: Found vda1
Jul 9 23:50:22.110764 extend-filesystems[1483]: Found vda2
Jul 9 23:50:22.110764 extend-filesystems[1483]: Found vda3
Jul 9 23:50:22.110764 extend-filesystems[1483]: Found usr
Jul 9 23:50:22.110764 extend-filesystems[1483]: Found vda4
Jul 9 23:50:22.134480 extend-filesystems[1483]: Found vda6
Jul 9 23:50:22.134480 extend-filesystems[1483]: Found vda7
Jul 9 23:50:22.134480 extend-filesystems[1483]: Found vda9
Jul 9 23:50:22.134480 extend-filesystems[1483]: Checking size of /dev/vda9
Jul 9 23:50:22.134480 extend-filesystems[1483]: Resized partition /dev/vda9
Jul 9 23:50:22.140900 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 9 23:50:22.114744 dbus-daemon[1481]: [system] SELinux support is enabled
Jul 9 23:50:22.111780 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 9 23:50:22.141304 extend-filesystems[1504]: resize2fs 1.47.1 (20-May-2024)
Jul 9 23:50:22.114222 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 9 23:50:22.116146 systemd[1]: Starting update-engine.service - Update Engine...
Jul 9 23:50:22.120217 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 9 23:50:22.122963 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 9 23:50:22.127281 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 9 23:50:22.137363 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 9 23:50:22.137723 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 9 23:50:22.138162 systemd[1]: motdgen.service: Deactivated successfully.
Jul 9 23:50:22.138432 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 9 23:50:22.141457 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 9 23:50:22.141801 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 9 23:50:22.147156 jq[1501]: true Jul 9 23:50:22.148781 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1406) Jul 9 23:50:22.160205 update_engine[1499]: I20250709 23:50:22.160104 1499 main.cc:92] Flatcar Update Engine starting Jul 9 23:50:22.165046 update_engine[1499]: I20250709 23:50:22.161768 1499 update_check_scheduler.cc:74] Next update check in 6m20s Jul 9 23:50:22.178993 systemd[1]: Started update-engine.service - Update Engine. Jul 9 23:50:22.186489 jq[1508]: true Jul 9 23:50:22.189929 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 9 23:50:22.210778 extend-filesystems[1504]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 9 23:50:22.210778 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 9 23:50:22.210778 extend-filesystems[1504]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 9 23:50:22.196291 (ntainerd)[1516]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 9 23:50:22.228774 tar[1506]: linux-amd64/helm Jul 9 23:50:22.229863 extend-filesystems[1483]: Resized filesystem in /dev/vda9 Jul 9 23:50:22.211508 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 9 23:50:22.211850 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 9 23:50:22.216100 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 9 23:50:22.216143 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 9 23:50:22.217695 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 9 23:50:22.217713 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 9 23:50:22.227994 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 9 23:50:22.236262 systemd-logind[1494]: Watching system buttons on /dev/input/event1 (Power Button) Jul 9 23:50:22.236293 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 9 23:50:22.239631 systemd-logind[1494]: New seat seat0. Jul 9 23:50:22.247779 systemd[1]: Started systemd-logind.service - User Login Management. Jul 9 23:50:22.285749 bash[1537]: Updated "/home/core/.ssh/authorized_keys" Jul 9 23:50:22.292248 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 23:50:22.466548 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 9 23:50:22.479464 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 23:50:22.480994 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 9 23:50:22.851263 containerd[1516]: time="2025-07-09T23:50:22.851085356Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 9 23:50:22.878631 containerd[1516]: time="2025-07-09T23:50:22.878407265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 9 23:50:22.883456 containerd[1516]: time="2025-07-09T23:50:22.882007857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 9 23:50:22.883456 containerd[1516]: time="2025-07-09T23:50:22.882040408Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 9 23:50:22.883456 containerd[1516]: time="2025-07-09T23:50:22.882056178Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 9 23:50:22.883456 containerd[1516]: time="2025-07-09T23:50:22.882246134Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 9 23:50:22.883456 containerd[1516]: time="2025-07-09T23:50:22.882264779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 9 23:50:22.883456 containerd[1516]: time="2025-07-09T23:50:22.882339228Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 9 23:50:22.883456 containerd[1516]: time="2025-07-09T23:50:22.882352984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 9 23:50:22.883456 containerd[1516]: time="2025-07-09T23:50:22.882634041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 9 23:50:22.883456 containerd[1516]: time="2025-07-09T23:50:22.882648428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 9 23:50:22.883456 containerd[1516]: time="2025-07-09T23:50:22.882661463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 9 23:50:22.883456 containerd[1516]: time="2025-07-09T23:50:22.882670339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 9 23:50:22.883729 containerd[1516]: time="2025-07-09T23:50:22.882776528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 9 23:50:22.883729 containerd[1516]: time="2025-07-09T23:50:22.883056103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 9 23:50:22.883729 containerd[1516]: time="2025-07-09T23:50:22.883220451Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 9 23:50:22.883729 containerd[1516]: time="2025-07-09T23:50:22.883232453Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 9 23:50:22.883729 containerd[1516]: time="2025-07-09T23:50:22.883331479Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 9 23:50:22.883729 containerd[1516]: time="2025-07-09T23:50:22.883386873Z" level=info msg="metadata content store policy set" policy=shared Jul 9 23:50:22.889914 containerd[1516]: time="2025-07-09T23:50:22.889892694Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 9 23:50:22.890022 containerd[1516]: time="2025-07-09T23:50:22.890007599Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 9 23:50:22.890132 containerd[1516]: time="2025-07-09T23:50:22.890117646Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 9 23:50:22.890212 containerd[1516]: time="2025-07-09T23:50:22.890198277Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 9 23:50:22.890304 containerd[1516]: time="2025-07-09T23:50:22.890288787Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 9 23:50:22.890512 containerd[1516]: time="2025-07-09T23:50:22.890491447Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 9 23:50:22.890949 containerd[1516]: time="2025-07-09T23:50:22.890912176Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 9 23:50:22.891223 containerd[1516]: time="2025-07-09T23:50:22.891079409Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 9 23:50:22.891223 containerd[1516]: time="2025-07-09T23:50:22.891100429Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 9 23:50:22.891223 containerd[1516]: time="2025-07-09T23:50:22.891114806Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 9 23:50:22.891223 containerd[1516]: time="2025-07-09T23:50:22.891128992Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 9 23:50:22.891223 containerd[1516]: time="2025-07-09T23:50:22.891142538Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 9 23:50:22.891223 containerd[1516]: time="2025-07-09T23:50:22.891154680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 9 23:50:22.891223 containerd[1516]: time="2025-07-09T23:50:22.891168316Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 9 23:50:22.891223 containerd[1516]: time="2025-07-09T23:50:22.891185308Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 9 23:50:22.891223 containerd[1516]: time="2025-07-09T23:50:22.891199815Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 9 23:50:22.891223 containerd[1516]: time="2025-07-09T23:50:22.891214222Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891239089Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891279795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891295745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891308319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891320201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891331953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891344567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891355287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891367660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891380163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891402104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891413125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891424797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891436970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891504 containerd[1516]: time="2025-07-09T23:50:22.891456016Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 9 23:50:22.891837 containerd[1516]: time="2025-07-09T23:50:22.891485952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891837 containerd[1516]: time="2025-07-09T23:50:22.891500960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.891837 containerd[1516]: time="2025-07-09T23:50:22.891515026Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 9 23:50:22.893519 containerd[1516]: time="2025-07-09T23:50:22.893437602Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 9 23:50:22.893519 containerd[1516]: time="2025-07-09T23:50:22.893469451Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 9 23:50:22.893519 containerd[1516]: time="2025-07-09T23:50:22.893486944Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 9 23:50:22.893519 containerd[1516]: time="2025-07-09T23:50:22.893500680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 9 23:50:22.893519 containerd[1516]: time="2025-07-09T23:50:22.893510579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.893519 containerd[1516]: time="2025-07-09T23:50:22.893528172Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 9 23:50:22.893519 containerd[1516]: time="2025-07-09T23:50:22.893539864Z" level=info msg="NRI interface is disabled by configuration." Jul 9 23:50:22.893819 containerd[1516]: time="2025-07-09T23:50:22.893551325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 9 23:50:22.894838 containerd[1516]: time="2025-07-09T23:50:22.894382454Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 9 23:50:22.894838 containerd[1516]: time="2025-07-09T23:50:22.894625760Z" level=info msg="Connect containerd service" Jul 9 23:50:22.894838 containerd[1516]: time="2025-07-09T23:50:22.894689069Z" level=info msg="using legacy CRI server" Jul 9 23:50:22.894838 containerd[1516]: time="2025-07-09T23:50:22.894700390Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 23:50:22.895226 containerd[1516]: time="2025-07-09T23:50:22.895149933Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 9 23:50:22.896482 containerd[1516]: time="2025-07-09T23:50:22.896446585Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:50:22.896825 containerd[1516]: time="2025-07-09T23:50:22.896752729Z" level=info msg="Start subscribing containerd event" Jul 9 23:50:22.896860 containerd[1516]: time="2025-07-09T23:50:22.896833550Z" level=info msg="Start recovering state" Jul 9 23:50:22.896971 containerd[1516]: time="2025-07-09T23:50:22.896903081Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 23:50:22.897028 containerd[1516]: time="2025-07-09T23:50:22.896914001Z" level=info msg="Start event monitor" Jul 9 23:50:22.897054 containerd[1516]: time="2025-07-09T23:50:22.897032023Z" level=info msg="Start snapshots syncer" Jul 9 23:50:22.897054 containerd[1516]: time="2025-07-09T23:50:22.897043825Z" level=info msg="Start cni network conf syncer for default" Jul 9 23:50:22.897054 containerd[1516]: time="2025-07-09T23:50:22.897051319Z" level=info msg="Start streaming server" Jul 9 23:50:22.897349 containerd[1516]: time="2025-07-09T23:50:22.897323068Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 23:50:22.899903 containerd[1516]: time="2025-07-09T23:50:22.899878651Z" level=info msg="containerd successfully booted in 0.055668s" Jul 9 23:50:22.899977 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 23:50:22.920580 tar[1506]: linux-amd64/LICENSE Jul 9 23:50:22.920580 tar[1506]: linux-amd64/README.md Jul 9 23:50:22.937537 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 9 23:50:23.132391 sshd_keygen[1500]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 9 23:50:23.164705 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 9 23:50:23.175143 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 9 23:50:23.177132 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:41558.service - OpenSSH per-connection server daemon (10.0.0.1:41558). Jul 9 23:50:23.184248 systemd[1]: issuegen.service: Deactivated successfully. Jul 9 23:50:23.184575 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 9 23:50:23.187850 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 9 23:50:23.204613 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 9 23:50:23.219245 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 23:50:23.221784 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
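The one error in the containerd startup above, "no network config found in /etc/cni/net.d", is the normal state of a node before any CNI plugin has installed its configuration; the CRI plugin keeps watching that directory (and loads binaries from the /opt/cni/bin path shown in the config dump). Purely to illustrate the file format it is waiting for, a hypothetical minimal conflist; the network name, subnet, and file name here are invented for the example:

```python
import json
import pathlib

conflist = {
    "cniVersion": "0.4.0",
    "name": "examplenet",  # hypothetical network name
    "plugins": [
        {
            "type": "bridge",  # assumes the reference bridge plugin exists in /opt/cni/bin
            "bridge": "cni0",
            "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
        }
    ],
}

pathlib.Path("/etc/cni/net.d/10-examplenet.conflist").write_text(
    json.dumps(conflist, indent=2)
)
```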
Jul 9 23:50:23.223178 systemd[1]: Reached target getty.target - Login Prompts. Jul 9 23:50:23.238195 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 41558 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:50:23.263922 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:23.275708 systemd-logind[1494]: New session 1 of user core. Jul 9 23:50:23.277042 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 23:50:23.291053 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 23:50:23.304286 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 23:50:23.324036 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 23:50:23.328281 (systemd)[1577]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 23:50:23.330747 systemd-logind[1494]: New session c1 of user core. Jul 9 23:50:23.502302 systemd[1577]: Queued start job for default target default.target. Jul 9 23:50:23.511472 systemd[1577]: Created slice app.slice - User Application Slice. Jul 9 23:50:23.511501 systemd[1577]: Reached target paths.target - Paths. Jul 9 23:50:23.511546 systemd[1577]: Reached target timers.target - Timers. Jul 9 23:50:23.513477 systemd[1577]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 23:50:23.526015 systemd[1577]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 23:50:23.526195 systemd[1577]: Reached target sockets.target - Sockets. Jul 9 23:50:23.526253 systemd[1577]: Reached target basic.target - Basic System. Jul 9 23:50:23.526314 systemd[1577]: Reached target default.target - Main User Target. Jul 9 23:50:23.526359 systemd[1577]: Startup finished in 188ms. Jul 9 23:50:23.526960 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 23:50:23.530486 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 23:50:23.611312 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:41560.service - OpenSSH per-connection server daemon (10.0.0.1:41560). Jul 9 23:50:23.650854 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 41560 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:50:23.652641 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:23.657220 systemd-logind[1494]: New session 2 of user core. Jul 9 23:50:23.666990 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 9 23:50:23.724430 sshd[1590]: Connection closed by 10.0.0.1 port 41560 Jul 9 23:50:23.724904 sshd-session[1588]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:23.737834 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:41560.service: Deactivated successfully. Jul 9 23:50:23.740044 systemd[1]: session-2.scope: Deactivated successfully. Jul 9 23:50:23.742086 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit. Jul 9 23:50:23.752129 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:41568.service - OpenSSH per-connection server daemon (10.0.0.1:41568). Jul 9 23:50:23.754514 systemd-logind[1494]: Removed session 2. Jul 9 23:50:23.759971 systemd-networkd[1428]: eth0: Gained IPv6LL Jul 9 23:50:23.764870 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 23:50:23.766919 systemd[1]: Reached target network-online.target - Network is Online. 
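The repeated "Accepted publickey ... SHA256:7rMa..." lines above use OpenSSH's base64-encoded SHA-256 key fingerprint. Given the corresponding public key line from authorized_keys, the same fingerprint can be reproduced with the standard encoding (unpadded base64 of the SHA-256 of the raw key blob), sketched here:

```python
import base64
import hashlib

def openssh_sha256_fingerprint(pubkey_line: str) -> str:
    # pubkey_line looks like: "ssh-rsa AAAAB3NzaC1yc2E... comment"
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # OpenSSH strips the base64 padding from fingerprints
    return "SHA256:" + base64.b64encode(digest).rstrip(b"=").decode()
```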
Jul 9 23:50:23.786289 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 9 23:50:23.789404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:50:23.792071 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 23:50:23.814868 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 41568 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:50:23.817474 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:23.820236 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 23:50:23.821197 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 9 23:50:23.823248 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 23:50:23.826039 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 23:50:23.831493 systemd-logind[1494]: New session 3 of user core. Jul 9 23:50:23.843147 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 23:50:23.900453 sshd[1616]: Connection closed by 10.0.0.1 port 41568 Jul 9 23:50:23.900819 sshd-session[1595]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:23.906322 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:41568.service: Deactivated successfully. Jul 9 23:50:23.908960 systemd[1]: session-3.scope: Deactivated successfully. Jul 9 23:50:23.909924 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit. Jul 9 23:50:23.911125 systemd-logind[1494]: Removed session 3. Jul 9 23:50:25.229501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:50:25.231324 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 23:50:25.232620 systemd[1]: Startup finished in 935ms (kernel) + 9.313s (initrd) + 6.578s (userspace) = 16.827s. Jul 9 23:50:25.267366 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:50:25.855691 kubelet[1626]: E0709 23:50:25.855599 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:50:25.860537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:50:25.860783 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:50:25.861292 systemd[1]: kubelet.service: Consumed 1.885s CPU time, 266.2M memory peak. Jul 9 23:50:33.917382 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:40090.service - OpenSSH per-connection server daemon (10.0.0.1:40090). Jul 9 23:50:33.961841 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 40090 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:50:33.963736 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:33.968996 systemd-logind[1494]: New session 4 of user core. Jul 9 23:50:33.981958 systemd[1]: Started session-4.scope - Session 4 of User core. 
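The kubelet crash above is the expected pre-bootstrap state: the unit starts before anything has written /var/lib/kubelet/config.yaml, exits, and systemd keeps restarting it until the file appears (kubeadm normally writes it during init or join). As a sketch only, a hypothetical minimal KubeletConfiguration of the kind the kubelet is looking for; the field values are illustrative, not taken from this machine:

```python
import pathlib
import textwrap

CONFIG = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # matches the SystemdCgroup:true runc option containerd logs below
""")

pathlib.Path("/var/lib/kubelet/config.yaml").write_text(CONFIG)
```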
Jul 9 23:50:34.036831 sshd[1641]: Connection closed by 10.0.0.1 port 40090 Jul 9 23:50:34.037214 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:34.051049 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:40090.service: Deactivated successfully. Jul 9 23:50:34.054275 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 23:50:34.056286 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit. Jul 9 23:50:34.066143 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:40096.service - OpenSSH per-connection server daemon (10.0.0.1:40096). Jul 9 23:50:34.067327 systemd-logind[1494]: Removed session 4. Jul 9 23:50:34.106333 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 40096 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:50:34.108544 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:34.114645 systemd-logind[1494]: New session 5 of user core. Jul 9 23:50:34.135991 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 9 23:50:34.187794 sshd[1649]: Connection closed by 10.0.0.1 port 40096 Jul 9 23:50:34.188117 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:34.198706 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:40096.service: Deactivated successfully. Jul 9 23:50:34.200941 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 23:50:34.202635 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit. Jul 9 23:50:34.212290 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:40102.service - OpenSSH per-connection server daemon (10.0.0.1:40102). Jul 9 23:50:34.213558 systemd-logind[1494]: Removed session 5. Jul 9 23:50:34.249942 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 40102 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:50:34.251585 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:34.256163 systemd-logind[1494]: New session 6 of user core. Jul 9 23:50:34.265956 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 9 23:50:34.320124 sshd[1657]: Connection closed by 10.0.0.1 port 40102 Jul 9 23:50:34.320603 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:34.333438 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:40102.service: Deactivated successfully. Jul 9 23:50:34.335457 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 23:50:34.337266 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit. Jul 9 23:50:34.350075 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:40116.service - OpenSSH per-connection server daemon (10.0.0.1:40116). Jul 9 23:50:34.351135 systemd-logind[1494]: Removed session 6. Jul 9 23:50:34.388643 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 40116 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:50:34.390337 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:34.395265 systemd-logind[1494]: New session 7 of user core. Jul 9 23:50:34.411983 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 9 23:50:34.472653 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 23:50:34.473133 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:50:34.496777 sudo[1666]: pam_unix(sudo:session): session closed for user root Jul 9 23:50:34.498424 sshd[1665]: Connection closed by 10.0.0.1 port 40116 Jul 9 23:50:34.498829 sshd-session[1662]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:34.514035 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:40116.service: Deactivated successfully. Jul 9 23:50:34.516176 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 23:50:34.517938 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit. Jul 9 23:50:34.542157 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:40120.service - OpenSSH per-connection server daemon (10.0.0.1:40120). Jul 9 23:50:34.543422 systemd-logind[1494]: Removed session 7. Jul 9 23:50:34.580927 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 40120 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:50:34.582562 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:34.587063 systemd-logind[1494]: New session 8 of user core. Jul 9 23:50:34.604939 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 23:50:34.660003 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 23:50:34.660465 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:50:34.664494 sudo[1676]: pam_unix(sudo:session): session closed for user root Jul 9 23:50:34.671444 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 23:50:34.671778 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:50:34.690104 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 23:50:34.723184 augenrules[1698]: No rules Jul 9 23:50:34.724925 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:50:34.725224 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:50:34.726407 sudo[1675]: pam_unix(sudo:session): session closed for user root Jul 9 23:50:34.727970 sshd[1674]: Connection closed by 10.0.0.1 port 40120 Jul 9 23:50:34.728556 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Jul 9 23:50:34.743872 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:40120.service: Deactivated successfully. Jul 9 23:50:34.746026 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 23:50:34.747912 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit. Jul 9 23:50:34.758056 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:40124.service - OpenSSH per-connection server daemon (10.0.0.1:40124). Jul 9 23:50:34.759049 systemd-logind[1494]: Removed session 8. Jul 9 23:50:34.802874 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 40124 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:50:34.804542 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:50:34.809476 systemd-logind[1494]: New session 9 of user core. Jul 9 23:50:34.823989 systemd[1]: Started session-9.scope - Session 9 of User core. 
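The sudo entries above follow a fixed "user : PWD=... ; USER=... ; COMMAND=..." layout, which makes them easy to audit mechanically; a small parsing sketch using one of the logged lines:

```python
import re

SUDO_RE = re.compile(
    r"^(?P<user>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)$"
)

line = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
print(SUDO_RE.match(line).groupdict())
# {'user': 'core', 'pwd': '/home/core', 'runas': 'root', 'cmd': '/usr/sbin/setenforce 1'}
```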
Jul 9 23:50:34.878590 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 23:50:34.878954 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:50:35.359051 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 9 23:50:35.359223 (dockerd)[1731]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 9 23:50:35.833434 dockerd[1731]: time="2025-07-09T23:50:35.833251301Z" level=info msg="Starting up" Jul 9 23:50:35.971914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 9 23:50:35.993080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:50:36.012192 dockerd[1731]: time="2025-07-09T23:50:36.012147305Z" level=info msg="Loading containers: start." Jul 9 23:50:36.250042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:50:36.256006 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:50:36.395387 kubelet[1826]: E0709 23:50:36.395315 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:50:36.402293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:50:36.402588 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:50:36.403123 systemd[1]: kubelet.service: Consumed 326ms CPU time, 111.1M memory peak. Jul 9 23:50:36.431821 kernel: Initializing XFRM netlink socket Jul 9 23:50:36.521698 systemd-networkd[1428]: docker0: Link UP Jul 9 23:50:36.560678 dockerd[1731]: time="2025-07-09T23:50:36.560624125Z" level=info msg="Loading containers: done." Jul 9 23:50:36.597547 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4231200466-merged.mount: Deactivated successfully. Jul 9 23:50:36.600573 dockerd[1731]: time="2025-07-09T23:50:36.600527458Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 9 23:50:36.600653 dockerd[1731]: time="2025-07-09T23:50:36.600642283Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 9 23:50:36.600789 dockerd[1731]: time="2025-07-09T23:50:36.600767959Z" level=info msg="Daemon has completed initialization" Jul 9 23:50:36.644361 dockerd[1731]: time="2025-07-09T23:50:36.644273487Z" level=info msg="API listen on /run/docker.sock" Jul 9 23:50:36.644479 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 9 23:50:37.753997 containerd[1516]: time="2025-07-09T23:50:37.753900113Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 9 23:50:38.712074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055394853.mount: Deactivated successfully. 
Jul 9 23:50:40.044087 containerd[1516]: time="2025-07-09T23:50:40.044005839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:40.044551 containerd[1516]: time="2025-07-09T23:50:40.044190636Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 9 23:50:40.045464 containerd[1516]: time="2025-07-09T23:50:40.045421885Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:40.049950 containerd[1516]: time="2025-07-09T23:50:40.049914290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:40.051234 containerd[1516]: time="2025-07-09T23:50:40.051195432Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.297212062s" Jul 9 23:50:40.051287 containerd[1516]: time="2025-07-09T23:50:40.051246568Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 9 23:50:40.054135 containerd[1516]: time="2025-07-09T23:50:40.053594491Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 9 23:50:42.667297 containerd[1516]: time="2025-07-09T23:50:42.665346438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:42.670835 containerd[1516]: time="2025-07-09T23:50:42.670755242Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 9 23:50:42.684581 containerd[1516]: time="2025-07-09T23:50:42.684466576Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:42.694203 containerd[1516]: time="2025-07-09T23:50:42.694121782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:42.695509 containerd[1516]: time="2025-07-09T23:50:42.695439614Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.641802854s" Jul 9 23:50:42.695509 containerd[1516]: time="2025-07-09T23:50:42.695498434Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 9 23:50:42.696650 
containerd[1516]: time="2025-07-09T23:50:42.696582758Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 9 23:50:45.241064 containerd[1516]: time="2025-07-09T23:50:45.240961506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:45.241880 containerd[1516]: time="2025-07-09T23:50:45.241733474Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 9 23:50:45.243125 containerd[1516]: time="2025-07-09T23:50:45.243095087Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:45.246668 containerd[1516]: time="2025-07-09T23:50:45.246618355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:45.248207 containerd[1516]: time="2025-07-09T23:50:45.248160597Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.55154104s" Jul 9 23:50:45.248293 containerd[1516]: time="2025-07-09T23:50:45.248206052Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 9 23:50:45.249302 containerd[1516]: time="2025-07-09T23:50:45.249279986Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 9 23:50:46.533000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount802232489.mount: Deactivated successfully. Jul 9 23:50:46.534471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 9 23:50:46.545092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:50:46.940131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:50:46.959333 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:50:47.039136 kubelet[2025]: E0709 23:50:47.039053 2025 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:50:47.042365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:50:47.042557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:50:47.042974 systemd[1]: kubelet.service: Consumed 441ms CPU time, 110.7M memory peak. 
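The kubelet restart cadence is visible in the timestamps: each "Scheduled restart job" arrives about ten seconds after the preceding failure, consistent with (though not proof of) a RestartSec=10 setting in the unit. Checking the two gaps logged so far:

```python
from datetime import datetime

FMT = "%H:%M:%S.%f"
gaps = [
    ("23:50:25.860783", "23:50:35.971914"),  # failure 1 -> restart counter 1
    ("23:50:36.402588", "23:50:46.534471"),  # failure 2 -> restart counter 2
]
for failed, rescheduled in gaps:
    delta = datetime.strptime(rescheduled, FMT) - datetime.strptime(failed, FMT)
    print(f"{delta.total_seconds():.2f}s")  # ~10.11s and ~10.13s
```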
Jul 9 23:50:47.585788 containerd[1516]: time="2025-07-09T23:50:47.585697633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:47.586567 containerd[1516]: time="2025-07-09T23:50:47.586519004Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 9 23:50:47.587626 containerd[1516]: time="2025-07-09T23:50:47.587592066Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:47.590082 containerd[1516]: time="2025-07-09T23:50:47.590024147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:47.590580 containerd[1516]: time="2025-07-09T23:50:47.590533863Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.341223389s" Jul 9 23:50:47.590580 containerd[1516]: time="2025-07-09T23:50:47.590562296Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 9 23:50:47.591338 containerd[1516]: time="2025-07-09T23:50:47.591123579Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 23:50:48.112873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1290261975.mount: Deactivated successfully. 
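The "Pulled image ... in Ns" messages above also imply a rough pull throughput; the sizes are the compressed sizes containerd reports, so treat the numbers as approximate:

```python
# (compressed bytes, seconds) from the containerd messages above
pulls = {
    "kube-apiserver:v1.31.10": (28_074_544, 2.297),
    "kube-controller-manager:v1.31.10": (26_315_128, 2.642),
    "kube-scheduler:v1.31.10": (20_385_523, 2.552),
    "kube-proxy:v1.31.10": (30_382_962, 2.341),
}
for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s")
# prints roughly 7-12 MiB/s per image
```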
Jul 9 23:50:49.773923 containerd[1516]: time="2025-07-09T23:50:49.773849359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:49.774557 containerd[1516]: time="2025-07-09T23:50:49.774511170Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 9 23:50:49.775702 containerd[1516]: time="2025-07-09T23:50:49.775659904Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:49.778586 containerd[1516]: time="2025-07-09T23:50:49.778549844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:49.780005 containerd[1516]: time="2025-07-09T23:50:49.779971139Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.188811573s" Jul 9 23:50:49.780095 containerd[1516]: time="2025-07-09T23:50:49.780006105Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 9 23:50:49.780553 containerd[1516]: time="2025-07-09T23:50:49.780524918Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 9 23:50:50.294047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766045665.mount: Deactivated successfully. 
Jul 9 23:50:50.300470 containerd[1516]: time="2025-07-09T23:50:50.300422942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:50.301162 containerd[1516]: time="2025-07-09T23:50:50.301096876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 9 23:50:50.302239 containerd[1516]: time="2025-07-09T23:50:50.302199263Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:50.304608 containerd[1516]: time="2025-07-09T23:50:50.304578746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:50.305447 containerd[1516]: time="2025-07-09T23:50:50.305409273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 524.855191ms" Jul 9 23:50:50.305447 containerd[1516]: time="2025-07-09T23:50:50.305440422Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 9 23:50:50.305977 containerd[1516]: time="2025-07-09T23:50:50.305941702Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 9 23:50:50.863926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622265881.mount: Deactivated successfully. Jul 9 23:50:53.429550 containerd[1516]: time="2025-07-09T23:50:53.429461436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:53.828571 containerd[1516]: time="2025-07-09T23:50:53.828475600Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 9 23:50:53.830149 containerd[1516]: time="2025-07-09T23:50:53.830088675Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:53.833712 containerd[1516]: time="2025-07-09T23:50:53.833662888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:53.835220 containerd[1516]: time="2025-07-09T23:50:53.835176317Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.529201884s" Jul 9 23:50:53.835282 containerd[1516]: time="2025-07-09T23:50:53.835219788Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 9 23:50:56.281089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 23:50:56.281265 systemd[1]: kubelet.service: Consumed 441ms CPU time, 110.7M memory peak. Jul 9 23:50:56.292023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:50:56.319229 systemd[1]: Reload requested from client PID 2173 ('systemctl') (unit session-9.scope)... Jul 9 23:50:56.319260 systemd[1]: Reloading... Jul 9 23:50:56.432880 zram_generator::config[2220]: No configuration found. Jul 9 23:50:56.776021 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:50:56.885519 systemd[1]: Reloading finished in 565 ms. Jul 9 23:50:56.947916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:50:56.952062 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:50:56.956052 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:50:56.960976 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 23:50:56.961368 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:50:56.961431 systemd[1]: kubelet.service: Consumed 162ms CPU time, 99.4M memory peak. Jul 9 23:50:56.973079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:50:57.139664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:50:57.145054 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:50:57.190226 kubelet[2268]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:50:57.190226 kubelet[2268]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 9 23:50:57.190226 kubelet[2268]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 23:50:57.190923 kubelet[2268]: I0709 23:50:57.190339 2268 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:50:57.579469 kubelet[2268]: I0709 23:50:57.579413 2268 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 9 23:50:57.579469 kubelet[2268]: I0709 23:50:57.579456 2268 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:50:57.579743 kubelet[2268]: I0709 23:50:57.579727 2268 server.go:934] "Client rotation is on, will bootstrap in background" Jul 9 23:50:57.598761 kubelet[2268]: E0709 23:50:57.598717 2268 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:50:57.600073 kubelet[2268]: I0709 23:50:57.600037 2268 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:50:57.605205 kubelet[2268]: E0709 23:50:57.605161 2268 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 9 23:50:57.605205 kubelet[2268]: I0709 23:50:57.605204 2268 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 9 23:50:57.611784 kubelet[2268]: I0709 23:50:57.611752 2268 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 23:50:57.612547 kubelet[2268]: I0709 23:50:57.612510 2268 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 9 23:50:57.612766 kubelet[2268]: I0709 23:50:57.612707 2268 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:50:57.613020 kubelet[2268]: I0709 23:50:57.612750 2268 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:50:57.613144 kubelet[2268]: I0709 23:50:57.613033 2268 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 23:50:57.613144 kubelet[2268]: I0709 23:50:57.613044 2268 container_manager_linux.go:300] "Creating device plugin manager" Jul 9 23:50:57.613198 kubelet[2268]: I0709 23:50:57.613174 2268 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:50:57.617415 kubelet[2268]: I0709 23:50:57.617380 2268 kubelet.go:408] "Attempting to sync node with API server" Jul 9 23:50:57.617415 kubelet[2268]: I0709 23:50:57.617410 2268 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:50:57.617500 kubelet[2268]: I0709 23:50:57.617460 2268 kubelet.go:314] "Adding apiserver pod source" Jul 9 23:50:57.617500 kubelet[2268]: I0709 23:50:57.617496 2268 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:50:57.624139 kubelet[2268]: W0709 23:50:57.623877 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 9 23:50:57.624139 kubelet[2268]: E0709 23:50:57.623969 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:50:57.624266 kubelet[2268]: I0709 23:50:57.624237 2268 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 9 23:50:57.624729 kubelet[2268]: W0709 23:50:57.624670 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 9 23:50:57.624782 kubelet[2268]: E0709 23:50:57.624738 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:50:57.624782 kubelet[2268]: I0709 23:50:57.624705 2268 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 23:50:57.625647 kubelet[2268]: W0709 23:50:57.625430 2268 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 9 23:50:57.628032 kubelet[2268]: I0709 23:50:57.628011 2268 server.go:1274] "Started kubelet" Jul 9 23:50:57.629765 kubelet[2268]: I0709 23:50:57.629722 2268 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 23:50:57.631832 kubelet[2268]: I0709 23:50:57.631754 2268 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:50:57.632823 kubelet[2268]: I0709 23:50:57.632777 2268 server.go:449] "Adding debug handlers to kubelet server" Jul 9 23:50:57.633534 kubelet[2268]: E0709 23:50:57.632433 2268 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850ba445114a954 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 23:50:57.627973972 +0000 UTC m=+0.476569232,LastTimestamp:2025-07-09 23:50:57.627973972 +0000 UTC m=+0.476569232,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 9 23:50:57.633728 kubelet[2268]: I0709 23:50:57.633683 2268 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 9 23:50:57.634761 kubelet[2268]: I0709 23:50:57.633848 2268 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 9 23:50:57.634761 kubelet[2268]: I0709 23:50:57.633967 2268 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:50:57.634761 kubelet[2268]: W0709 23:50:57.634318 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 9 23:50:57.634761 kubelet[2268]: E0709 23:50:57.634365 2268 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:50:57.634761 kubelet[2268]: I0709 23:50:57.634461 2268 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:50:57.634945 kubelet[2268]: I0709 23:50:57.634777 2268 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:50:57.636188 kubelet[2268]: E0709 23:50:57.635663 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:50:57.636188 kubelet[2268]: I0709 23:50:57.635860 2268 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:50:57.636312 kubelet[2268]: E0709 23:50:57.636254 2268 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 23:50:57.636456 kubelet[2268]: E0709 23:50:57.636417 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Jul 9 23:50:57.639488 kubelet[2268]: I0709 23:50:57.637796 2268 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:50:57.639488 kubelet[2268]: I0709 23:50:57.637884 2268 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:50:57.639488 kubelet[2268]: I0709 23:50:57.637958 2268 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:50:57.653577 kubelet[2268]: I0709 23:50:57.653534 2268 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 9 23:50:57.653577 kubelet[2268]: I0709 23:50:57.653553 2268 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 9 23:50:57.653577 kubelet[2268]: I0709 23:50:57.653584 2268 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:50:57.654206 kubelet[2268]: I0709 23:50:57.654178 2268 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 23:50:57.655985 kubelet[2268]: I0709 23:50:57.655965 2268 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 9 23:50:57.656724 kubelet[2268]: I0709 23:50:57.656368 2268 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 9 23:50:57.656724 kubelet[2268]: I0709 23:50:57.656406 2268 kubelet.go:2321] "Starting kubelet main sync loop" Jul 9 23:50:57.656724 kubelet[2268]: E0709 23:50:57.656451 2268 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:50:57.736569 kubelet[2268]: E0709 23:50:57.736540 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:50:57.756821 kubelet[2268]: E0709 23:50:57.756786 2268 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 9 23:50:57.837233 kubelet[2268]: E0709 23:50:57.837154 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:50:57.837557 kubelet[2268]: E0709 23:50:57.837525 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" Jul 9 23:50:57.937871 kubelet[2268]: E0709 23:50:57.937849 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:50:57.957028 kubelet[2268]: E0709 23:50:57.957006 2268 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 9 23:50:58.016703 kubelet[2268]: I0709 23:50:58.016665 2268 policy_none.go:49] "None policy: Start" Jul 9 23:50:58.016951 kubelet[2268]: W0709 23:50:58.016888 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 9 23:50:58.017041 kubelet[2268]: E0709 23:50:58.016968 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:50:58.017284 kubelet[2268]: I0709 23:50:58.017253 2268 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 9 23:50:58.017284 kubelet[2268]: I0709 23:50:58.017287 2268 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:50:58.026482 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 23:50:58.038743 kubelet[2268]: E0709 23:50:58.038713 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 23:50:58.041222 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 23:50:58.044706 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
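
The "Created slice kubepods-*.slice" entries above come from the kubelet's systemd cgroup driver ("CgroupDriver":"systemd" in the Node Config dump): each QoS class gets a slice under kubepods.slice, and each pod later gets a slice named after its QoS class and UID with the UID's dashes escaped to underscores, as in kubepods-burstable-pod7be0ce850690f55b6b068e31d26cfa86.slice just below. A minimal Go sketch of that naming pattern (the helper name podSliceName is hypothetical, not kubelet code):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName derives the systemd slice name for a pod. qos is
    // "burstable" or "besteffort"; guaranteed pods sit directly under
    // kubepods.slice, so their QoS segment is empty.
    func podSliceName(qos, uid string) string {
        escaped := strings.ReplaceAll(uid, "-", "_") // systemd escaping of "-" in the UID
        if qos == "" {
            return fmt.Sprintf("kubepods-pod%s.slice", escaped)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
        // Matches kubepods-burstable-pod7c8d4173_032a_4c40_bcef_27f445bbf0eb.slice,
        // the cilium pod slice created near the end of this log.
        fmt.Println(podSliceName("burstable", "7c8d4173-032a-4c40-bcef-27f445bbf0eb"))
    }
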
Jul 9 23:50:58.057755 kubelet[2268]: I0709 23:50:58.057714 2268 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:50:58.057985 kubelet[2268]: I0709 23:50:58.057961 2268 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:50:58.058062 kubelet[2268]: I0709 23:50:58.057983 2268 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:50:58.058241 kubelet[2268]: I0709 23:50:58.058213 2268 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:50:58.059367 kubelet[2268]: E0709 23:50:58.059347 2268 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 9 23:50:58.159766 kubelet[2268]: I0709 23:50:58.159666 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 9 23:50:58.160059 kubelet[2268]: E0709 23:50:58.160018 2268 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jul 9 23:50:58.238816 kubelet[2268]: E0709 23:50:58.238763 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Jul 9 23:50:58.361561 kubelet[2268]: I0709 23:50:58.361015 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 9 23:50:58.361561 kubelet[2268]: E0709 23:50:58.361412 2268 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jul 9 23:50:58.365599 systemd[1]: Created slice kubepods-burstable-pod7be0ce850690f55b6b068e31d26cfa86.slice - libcontainer container kubepods-burstable-pod7be0ce850690f55b6b068e31d26cfa86.slice. Jul 9 23:50:58.388858 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 9 23:50:58.392351 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
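
The "Failed to ensure lease exists, will retry" entries show the lease controller doubling its retry interval while the API server at 10.0.0.15:6443 refuses connections: interval="200ms", then "400ms", then "800ms" above, and "1.6s" and "3.2s" later in this log. A minimal Go sketch of that doubling backoff (the 7s cap is an assumption for illustration; this log never reaches it):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond  // first retry interval seen in the log
        const maxInterval = 7 * time.Second // assumed cap, not shown in this log

        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d: will retry, interval=%v\n", attempt, interval)
            interval *= 2 // double after each failed attempt...
            if interval > maxInterval {
                interval = maxInterval // ...but never beyond the cap
            }
        }
    }
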
Jul 9 23:50:58.439406 kubelet[2268]: I0709 23:50:58.439236 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:50:58.439406 kubelet[2268]: I0709 23:50:58.439300 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:50:58.439406 kubelet[2268]: I0709 23:50:58.439333 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:50:58.439406 kubelet[2268]: I0709 23:50:58.439358 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:50:58.439406 kubelet[2268]: I0709 23:50:58.439385 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7be0ce850690f55b6b068e31d26cfa86-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7be0ce850690f55b6b068e31d26cfa86\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:50:58.439879 kubelet[2268]: I0709 23:50:58.439446 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:50:58.439879 kubelet[2268]: I0709 23:50:58.439532 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 9 23:50:58.439879 kubelet[2268]: I0709 23:50:58.439554 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7be0ce850690f55b6b068e31d26cfa86-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7be0ce850690f55b6b068e31d26cfa86\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:50:58.439879 kubelet[2268]: I0709 23:50:58.439581 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7be0ce850690f55b6b068e31d26cfa86-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7be0ce850690f55b6b068e31d26cfa86\") " 
pod="kube-system/kube-apiserver-localhost" Jul 9 23:50:58.685945 kubelet[2268]: E0709 23:50:58.685889 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:58.686763 containerd[1516]: time="2025-07-09T23:50:58.686729980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7be0ce850690f55b6b068e31d26cfa86,Namespace:kube-system,Attempt:0,}" Jul 9 23:50:58.691936 kubelet[2268]: E0709 23:50:58.691844 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:58.692244 containerd[1516]: time="2025-07-09T23:50:58.692147850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 9 23:50:58.694504 kubelet[2268]: E0709 23:50:58.694468 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:58.694789 containerd[1516]: time="2025-07-09T23:50:58.694751109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 9 23:50:58.763585 kubelet[2268]: I0709 23:50:58.763547 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 9 23:50:58.763966 kubelet[2268]: E0709 23:50:58.763927 2268 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jul 9 23:50:58.842718 kubelet[2268]: W0709 23:50:58.842628 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 9 23:50:58.842718 kubelet[2268]: E0709 23:50:58.842711 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:50:58.918456 kubelet[2268]: W0709 23:50:58.918391 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 9 23:50:58.918593 kubelet[2268]: E0709 23:50:58.918463 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:50:59.039983 kubelet[2268]: E0709 23:50:59.039909 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: 
connect: connection refused" interval="1.6s" Jul 9 23:50:59.171720 kubelet[2268]: W0709 23:50:59.171627 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 9 23:50:59.171879 kubelet[2268]: E0709 23:50:59.171723 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:50:59.224240 kubelet[2268]: W0709 23:50:59.224216 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 9 23:50:59.224329 kubelet[2268]: E0709 23:50:59.224247 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:50:59.305987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1528177032.mount: Deactivated successfully. Jul 9 23:50:59.350896 containerd[1516]: time="2025-07-09T23:50:59.350830156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:50:59.411422 containerd[1516]: time="2025-07-09T23:50:59.411377081Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 9 23:50:59.413754 containerd[1516]: time="2025-07-09T23:50:59.413712393Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:50:59.416691 containerd[1516]: time="2025-07-09T23:50:59.416651015Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:50:59.417945 containerd[1516]: time="2025-07-09T23:50:59.417869613Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 9 23:50:59.420915 containerd[1516]: time="2025-07-09T23:50:59.420854143Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:50:59.422517 containerd[1516]: time="2025-07-09T23:50:59.422455201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:50:59.423598 containerd[1516]: time="2025-07-09T23:50:59.423543710Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 9 23:50:59.423750 containerd[1516]: 
time="2025-07-09T23:50:59.423710609Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 736.853657ms" Jul 9 23:50:59.427371 containerd[1516]: time="2025-07-09T23:50:59.427320303Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 732.470256ms" Jul 9 23:50:59.431667 containerd[1516]: time="2025-07-09T23:50:59.431605709Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 739.380731ms" Jul 9 23:50:59.565629 kubelet[2268]: I0709 23:50:59.565501 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 9 23:50:59.566043 kubelet[2268]: E0709 23:50:59.565904 2268 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jul 9 23:50:59.768274 kubelet[2268]: E0709 23:50:59.768147 2268 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850ba445114a954 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 23:50:57.627973972 +0000 UTC m=+0.476569232,LastTimestamp:2025-07-09 23:50:57.627973972 +0000 UTC m=+0.476569232,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 9 23:50:59.775795 containerd[1516]: time="2025-07-09T23:50:59.774398359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:50:59.775795 containerd[1516]: time="2025-07-09T23:50:59.775762634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:50:59.776269 containerd[1516]: time="2025-07-09T23:50:59.775780409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:50:59.776783 containerd[1516]: time="2025-07-09T23:50:59.775892873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:50:59.776869 containerd[1516]: time="2025-07-09T23:50:59.776672843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:50:59.776869 containerd[1516]: time="2025-07-09T23:50:59.776750611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:50:59.776869 containerd[1516]: time="2025-07-09T23:50:59.776763867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:50:59.777569 containerd[1516]: time="2025-07-09T23:50:59.776871864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:50:59.782571 containerd[1516]: time="2025-07-09T23:50:59.781145776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:50:59.782571 containerd[1516]: time="2025-07-09T23:50:59.781271085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:50:59.782571 containerd[1516]: time="2025-07-09T23:50:59.781286656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:50:59.782571 containerd[1516]: time="2025-07-09T23:50:59.781442242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:50:59.783920 kubelet[2268]: E0709 23:50:59.783878 2268 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:50:59.846348 systemd[1]: Started cri-containerd-d9749bc760507046748a43ef92cb04e4c00fc0f67a00dc4098acec33ad411532.scope - libcontainer container d9749bc760507046748a43ef92cb04e4c00fc0f67a00dc4098acec33ad411532. Jul 9 23:50:59.853932 systemd[1]: Started cri-containerd-ae17a305ae06d23f2738f865fa390e44d2d000974bedb6e73d47a87b87f9d68f.scope - libcontainer container ae17a305ae06d23f2738f865fa390e44d2d000974bedb6e73d47a87b87f9d68f. Jul 9 23:50:59.858946 systemd[1]: Started cri-containerd-6bfa1cd70b170c4d422fa2426ca715e563f3f667e2025e3759b460ed88bad060.scope - libcontainer container 6bfa1cd70b170c4d422fa2426ca715e563f3f667e2025e3759b460ed88bad060. 
Jul 9 23:50:59.906848 containerd[1516]: time="2025-07-09T23:50:59.905765311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9749bc760507046748a43ef92cb04e4c00fc0f67a00dc4098acec33ad411532\"" Jul 9 23:50:59.908049 kubelet[2268]: E0709 23:50:59.908013 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:59.912093 containerd[1516]: time="2025-07-09T23:50:59.911433067Z" level=info msg="CreateContainer within sandbox \"d9749bc760507046748a43ef92cb04e4c00fc0f67a00dc4098acec33ad411532\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 23:50:59.914153 containerd[1516]: time="2025-07-09T23:50:59.914127263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bfa1cd70b170c4d422fa2426ca715e563f3f667e2025e3759b460ed88bad060\"" Jul 9 23:50:59.916438 kubelet[2268]: E0709 23:50:59.916233 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:59.918028 containerd[1516]: time="2025-07-09T23:50:59.918002394Z" level=info msg="CreateContainer within sandbox \"6bfa1cd70b170c4d422fa2426ca715e563f3f667e2025e3759b460ed88bad060\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 23:50:59.920260 containerd[1516]: time="2025-07-09T23:50:59.920207206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7be0ce850690f55b6b068e31d26cfa86,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae17a305ae06d23f2738f865fa390e44d2d000974bedb6e73d47a87b87f9d68f\"" Jul 9 23:50:59.921262 kubelet[2268]: E0709 23:50:59.921241 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:59.922886 containerd[1516]: time="2025-07-09T23:50:59.922848211Z" level=info msg="CreateContainer within sandbox \"ae17a305ae06d23f2738f865fa390e44d2d000974bedb6e73d47a87b87f9d68f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 23:50:59.935605 containerd[1516]: time="2025-07-09T23:50:59.935529322Z" level=info msg="CreateContainer within sandbox \"d9749bc760507046748a43ef92cb04e4c00fc0f67a00dc4098acec33ad411532\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d90cfea77150890ef59d33fd87d3e47e7659e302d661e3e34b64b91d16ae6171\"" Jul 9 23:50:59.936155 containerd[1516]: time="2025-07-09T23:50:59.936123296Z" level=info msg="StartContainer for \"d90cfea77150890ef59d33fd87d3e47e7659e302d661e3e34b64b91d16ae6171\"" Jul 9 23:50:59.940772 containerd[1516]: time="2025-07-09T23:50:59.940714034Z" level=info msg="CreateContainer within sandbox \"6bfa1cd70b170c4d422fa2426ca715e563f3f667e2025e3759b460ed88bad060\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed41ac27ce7181440785e902272e3125fa81377babab5e845da9e3c9d8b7595b\"" Jul 9 23:50:59.941267 containerd[1516]: time="2025-07-09T23:50:59.941234448Z" level=info msg="StartContainer for \"ed41ac27ce7181440785e902272e3125fa81377babab5e845da9e3c9d8b7595b\"" Jul 9 23:50:59.953217 
containerd[1516]: time="2025-07-09T23:50:59.953168974Z" level=info msg="CreateContainer within sandbox \"ae17a305ae06d23f2738f865fa390e44d2d000974bedb6e73d47a87b87f9d68f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"80294e40842cfa66989e92ad01075cefe8e86332bd2d2f0d9ee994bd6622bf78\"" Jul 9 23:50:59.953571 containerd[1516]: time="2025-07-09T23:50:59.953542216Z" level=info msg="StartContainer for \"80294e40842cfa66989e92ad01075cefe8e86332bd2d2f0d9ee994bd6622bf78\"" Jul 9 23:50:59.973086 systemd[1]: Started cri-containerd-d90cfea77150890ef59d33fd87d3e47e7659e302d661e3e34b64b91d16ae6171.scope - libcontainer container d90cfea77150890ef59d33fd87d3e47e7659e302d661e3e34b64b91d16ae6171. Jul 9 23:50:59.981963 systemd[1]: Started cri-containerd-ed41ac27ce7181440785e902272e3125fa81377babab5e845da9e3c9d8b7595b.scope - libcontainer container ed41ac27ce7181440785e902272e3125fa81377babab5e845da9e3c9d8b7595b. Jul 9 23:50:59.986857 systemd[1]: Started cri-containerd-80294e40842cfa66989e92ad01075cefe8e86332bd2d2f0d9ee994bd6622bf78.scope - libcontainer container 80294e40842cfa66989e92ad01075cefe8e86332bd2d2f0d9ee994bd6622bf78. Jul 9 23:51:00.206347 containerd[1516]: time="2025-07-09T23:51:00.205652298Z" level=info msg="StartContainer for \"80294e40842cfa66989e92ad01075cefe8e86332bd2d2f0d9ee994bd6622bf78\" returns successfully" Jul 9 23:51:00.210744 containerd[1516]: time="2025-07-09T23:51:00.209470873Z" level=info msg="StartContainer for \"d90cfea77150890ef59d33fd87d3e47e7659e302d661e3e34b64b91d16ae6171\" returns successfully" Jul 9 23:51:00.216999 containerd[1516]: time="2025-07-09T23:51:00.216936787Z" level=info msg="StartContainer for \"ed41ac27ce7181440785e902272e3125fa81377babab5e845da9e3c9d8b7595b\" returns successfully" Jul 9 23:51:00.774982 kubelet[2268]: E0709 23:51:00.774935 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:00.777864 kubelet[2268]: E0709 23:51:00.777829 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:00.780253 kubelet[2268]: E0709 23:51:00.780215 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:01.168232 kubelet[2268]: I0709 23:51:01.168091 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 9 23:51:01.690250 kubelet[2268]: I0709 23:51:01.690198 2268 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 9 23:51:01.690250 kubelet[2268]: E0709 23:51:01.690260 2268 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 9 23:51:01.748971 kubelet[2268]: E0709 23:51:01.748866 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jul 9 23:51:01.767185 kubelet[2268]: I0709 23:51:01.767122 2268 apiserver.go:52] "Watching apiserver" Jul 9 23:51:01.788001 kubelet[2268]: E0709 23:51:01.787611 2268 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 9 23:51:01.788001 
kubelet[2268]: E0709 23:51:01.787893 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:01.834837 kubelet[2268]: I0709 23:51:01.834768 2268 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 9 23:51:03.507025 kubelet[2268]: E0709 23:51:03.506943 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:03.785798 kubelet[2268]: E0709 23:51:03.785756 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:04.091092 systemd[1]: Reload requested from client PID 2546 ('systemctl') (unit session-9.scope)... Jul 9 23:51:04.091109 systemd[1]: Reloading... Jul 9 23:51:04.214861 zram_generator::config[2596]: No configuration found. Jul 9 23:51:04.325752 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:51:04.463161 systemd[1]: Reloading finished in 371 ms. Jul 9 23:51:04.488455 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:51:04.515458 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 23:51:04.515833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:51:04.515896 systemd[1]: kubelet.service: Consumed 1.080s CPU time, 131.4M memory peak. Jul 9 23:51:04.522010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:51:04.705912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:51:04.718148 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:51:04.761166 kubelet[2635]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:51:04.761166 kubelet[2635]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 9 23:51:04.761166 kubelet[2635]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
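
The recurring dns.go:153 "Nameserver limits exceeded" warnings mean the node's resolv.conf lists more nameservers than the resolver limit of three, so only the first three are applied (1.1.1.1 1.0.0.1 8.8.8.8). A minimal Go sketch of that truncation, using a hypothetical four-entry resolv.conf; the log shows only the three entries that survived:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // resolver limit that triggers the warning

    // applyNameserverLimit keeps the first maxNameservers nameserver entries.
    func applyNameserverLimit(resolvConf string) []string {
        var kept []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) == 2 && fields[0] == "nameserver" && len(kept) < maxNameservers {
                kept = append(kept, fields[1])
            }
        }
        return kept
    }

    func main() {
        // Hypothetical node resolv.conf with four entries; the fourth is omitted.
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        fmt.Println("applied nameserver line:", strings.Join(applyNameserverLimit(conf), " "))
    }
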
Jul 9 23:51:04.761649 kubelet[2635]: I0709 23:51:04.761226 2635 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:51:04.769981 kubelet[2635]: I0709 23:51:04.769945 2635 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 9 23:51:04.769981 kubelet[2635]: I0709 23:51:04.769968 2635 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:51:04.770352 kubelet[2635]: I0709 23:51:04.770180 2635 server.go:934] "Client rotation is on, will bootstrap in background" Jul 9 23:51:04.771448 kubelet[2635]: I0709 23:51:04.771427 2635 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 9 23:51:04.773461 kubelet[2635]: I0709 23:51:04.773259 2635 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:51:04.777140 kubelet[2635]: E0709 23:51:04.777106 2635 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 9 23:51:04.777140 kubelet[2635]: I0709 23:51:04.777138 2635 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 9 23:51:04.782713 kubelet[2635]: I0709 23:51:04.782671 2635 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 9 23:51:04.782874 kubelet[2635]: I0709 23:51:04.782856 2635 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 9 23:51:04.783053 kubelet[2635]: I0709 23:51:04.783019 2635 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:51:04.783221 kubelet[2635]: I0709 23:51:04.783052 2635 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:51:04.783310 kubelet[2635]: I0709 23:51:04.783234 2635 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 23:51:04.783310 kubelet[2635]: I0709 23:51:04.783242 2635 container_manager_linux.go:300] "Creating device plugin manager" Jul 9 23:51:04.783310 kubelet[2635]: I0709 23:51:04.783286 2635 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:51:04.783430 kubelet[2635]: I0709 23:51:04.783415 2635 kubelet.go:408] "Attempting to sync node with API server" Jul 9 23:51:04.783458 kubelet[2635]: I0709 23:51:04.783432 2635 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:51:04.783484 kubelet[2635]: I0709 23:51:04.783469 2635 kubelet.go:314] "Adding apiserver pod source" Jul 9 23:51:04.783484 kubelet[2635]: I0709 23:51:04.783482 2635 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:51:04.784439 kubelet[2635]: I0709 23:51:04.784396 2635 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 9 23:51:04.785200 kubelet[2635]: I0709 23:51:04.785174 2635 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 23:51:04.786109 kubelet[2635]: I0709 23:51:04.786091 2635 server.go:1274] "Started kubelet" Jul 9 23:51:04.788223 kubelet[2635]: I0709 23:51:04.788083 2635 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:51:04.788919 kubelet[2635]: I0709 23:51:04.788887 2635 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:51:04.789068 kubelet[2635]: I0709 23:51:04.789037 2635 server.go:449] "Adding debug handlers to kubelet server" Jul 9 23:51:04.789277 kubelet[2635]: I0709 23:51:04.789259 2635 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:51:04.799038 kubelet[2635]: I0709 23:51:04.798989 2635 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Jul 9 23:51:04.800285 kubelet[2635]: I0709 23:51:04.800259 2635 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:51:04.804053 kubelet[2635]: I0709 23:51:04.803864 2635 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 9 23:51:04.804393 kubelet[2635]: I0709 23:51:04.804338 2635 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:51:04.804433 kubelet[2635]: I0709 23:51:04.804417 2635 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 9 23:51:04.804996 kubelet[2635]: I0709 23:51:04.804967 2635 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:51:04.806115 kubelet[2635]: I0709 23:51:04.806089 2635 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:51:04.806115 kubelet[2635]: I0709 23:51:04.806108 2635 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:51:04.809353 kubelet[2635]: E0709 23:51:04.809328 2635 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 23:51:04.817853 kubelet[2635]: I0709 23:51:04.817798 2635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 23:51:04.819466 kubelet[2635]: I0709 23:51:04.819439 2635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 23:51:04.819466 kubelet[2635]: I0709 23:51:04.819463 2635 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 9 23:51:04.819548 kubelet[2635]: I0709 23:51:04.819483 2635 kubelet.go:2321] "Starting kubelet main sync loop" Jul 9 23:51:04.819548 kubelet[2635]: E0709 23:51:04.819532 2635 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:51:04.848550 kubelet[2635]: I0709 23:51:04.848521 2635 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 9 23:51:04.848550 kubelet[2635]: I0709 23:51:04.848541 2635 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 9 23:51:04.848687 kubelet[2635]: I0709 23:51:04.848564 2635 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:51:04.848738 kubelet[2635]: I0709 23:51:04.848709 2635 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 23:51:04.848767 kubelet[2635]: I0709 23:51:04.848734 2635 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 23:51:04.848767 kubelet[2635]: I0709 23:51:04.848755 2635 policy_none.go:49] "None policy: Start" Jul 9 23:51:04.849476 kubelet[2635]: I0709 23:51:04.849446 2635 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 9 23:51:04.849476 kubelet[2635]: I0709 23:51:04.849477 2635 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:51:04.849622 kubelet[2635]: I0709 23:51:04.849606 2635 state_mem.go:75] "Updated machine memory state" Jul 9 23:51:04.854515 kubelet[2635]: I0709 23:51:04.854487 2635 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:51:04.854696 kubelet[2635]: I0709 23:51:04.854673 2635 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:51:04.854742 kubelet[2635]: I0709 
23:51:04.854690 2635 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:51:04.855099 kubelet[2635]: I0709 23:51:04.855080 2635 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:51:04.925987 kubelet[2635]: E0709 23:51:04.925921 2635 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 23:51:04.964481 kubelet[2635]: I0709 23:51:04.964422 2635 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 9 23:51:04.973911 kubelet[2635]: I0709 23:51:04.971895 2635 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 9 23:51:04.973911 kubelet[2635]: I0709 23:51:04.971979 2635 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 9 23:51:05.007038 kubelet[2635]: I0709 23:51:05.006982 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:51:05.007038 kubelet[2635]: I0709 23:51:05.007018 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:51:05.007038 kubelet[2635]: I0709 23:51:05.007038 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:51:05.007038 kubelet[2635]: I0709 23:51:05.007058 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7be0ce850690f55b6b068e31d26cfa86-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7be0ce850690f55b6b068e31d26cfa86\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:51:05.007361 kubelet[2635]: I0709 23:51:05.007118 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7be0ce850690f55b6b068e31d26cfa86-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7be0ce850690f55b6b068e31d26cfa86\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:51:05.007361 kubelet[2635]: I0709 23:51:05.007182 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:51:05.007361 kubelet[2635]: I0709 23:51:05.007264 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 9 23:51:05.007361 kubelet[2635]: I0709 23:51:05.007306 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7be0ce850690f55b6b068e31d26cfa86-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7be0ce850690f55b6b068e31d26cfa86\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:51:05.007361 kubelet[2635]: I0709 23:51:05.007333 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:51:05.086571 sudo[2672]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 9 23:51:05.086999 sudo[2672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 9 23:51:05.225465 kubelet[2635]: E0709 23:51:05.225318 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:05.225967 kubelet[2635]: E0709 23:51:05.225868 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:05.226489 kubelet[2635]: E0709 23:51:05.226430 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:05.710441 sudo[2672]: pam_unix(sudo:session): session closed for user root Jul 9 23:51:05.784536 kubelet[2635]: I0709 23:51:05.784466 2635 apiserver.go:52] "Watching apiserver" Jul 9 23:51:05.806787 kubelet[2635]: I0709 23:51:05.806733 2635 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 9 23:51:05.830190 kubelet[2635]: E0709 23:51:05.830130 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:05.831859 kubelet[2635]: E0709 23:51:05.831650 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:05.831859 kubelet[2635]: E0709 23:51:05.831708 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:05.842878 kubelet[2635]: I0709 23:51:05.842234 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.842203063 podStartE2EDuration="1.842203063s" podCreationTimestamp="2025-07-09 23:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:51:05.831437502 +0000 UTC m=+1.108466613" watchObservedRunningTime="2025-07-09 23:51:05.842203063 +0000 UTC m=+1.119232164" Jul 9 23:51:05.853249 
kubelet[2635]: I0709 23:51:05.853161 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.852272905 podStartE2EDuration="2.852272905s" podCreationTimestamp="2025-07-09 23:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:51:05.84291228 +0000 UTC m=+1.119941391" watchObservedRunningTime="2025-07-09 23:51:05.852272905 +0000 UTC m=+1.129302017" Jul 9 23:51:05.863915 kubelet[2635]: I0709 23:51:05.863830 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.863780306 podStartE2EDuration="1.863780306s" podCreationTimestamp="2025-07-09 23:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:51:05.853301959 +0000 UTC m=+1.130331080" watchObservedRunningTime="2025-07-09 23:51:05.863780306 +0000 UTC m=+1.140809417" Jul 9 23:51:06.832251 kubelet[2635]: E0709 23:51:06.832203 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:07.406024 update_engine[1499]: I20250709 23:51:07.405870 1499 update_attempter.cc:509] Updating boot flags... Jul 9 23:51:07.486864 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2716) Jul 9 23:51:07.490084 sudo[1710]: pam_unix(sudo:session): session closed for user root Jul 9 23:51:07.498841 sshd[1709]: Connection closed by 10.0.0.1 port 40124 Jul 9 23:51:07.498345 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Jul 9 23:51:07.515532 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:40124.service: Deactivated successfully. Jul 9 23:51:07.521798 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 23:51:07.522621 systemd[1]: session-9.scope: Consumed 5.170s CPU time, 251.2M memory peak. Jul 9 23:51:07.526259 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit. Jul 9 23:51:07.553072 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2713) Jul 9 23:51:07.550008 systemd-logind[1494]: Removed session 9. Jul 9 23:51:08.009748 kubelet[2635]: E0709 23:51:08.009209 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:09.440669 kubelet[2635]: I0709 23:51:09.440614 2635 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 23:51:09.441207 kubelet[2635]: I0709 23:51:09.441153 2635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 23:51:09.441266 containerd[1516]: time="2025-07-09T23:51:09.440992307Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 23:51:10.390358 systemd[1]: Created slice kubepods-burstable-pod7c8d4173_032a_4c40_bcef_27f445bbf0eb.slice - libcontainer container kubepods-burstable-pod7c8d4173_032a_4c40_bcef_27f445bbf0eb.slice. 
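
The pod_startup_latency_tracker entries above tie together arithmetically: podStartSLOduration is the watchObservedRunningTime minus the podCreationTimestamp. A minimal Go sketch checking that against the kube-apiserver-localhost entry, with both timestamps copied verbatim from the log (parse error handling elided in this sketch):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Go's default time.Time formatting, which these log fields use.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

        created, _ := time.Parse(layout, "2025-07-09 23:51:03 +0000 UTC")
        observed, _ := time.Parse(layout, "2025-07-09 23:51:05.852272905 +0000 UTC")

        // Prints 2.852272905, matching podStartSLOduration in the log entry.
        fmt.Println("podStartSLOduration:", observed.Sub(created).Seconds())
    }
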
Jul 9 23:51:10.396060 systemd[1]: Created slice kubepods-besteffort-pod36207527_b5be_4361_baa1_8db779a002f9.slice - libcontainer container kubepods-besteffort-pod36207527_b5be_4361_baa1_8db779a002f9.slice. Jul 9 23:51:10.444747 kubelet[2635]: I0709 23:51:10.444702 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-lib-modules\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.444747 kubelet[2635]: I0709 23:51:10.444744 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c8d4173-032a-4c40-bcef-27f445bbf0eb-clustermesh-secrets\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.445270 kubelet[2635]: I0709 23:51:10.444768 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36207527-b5be-4361-baa1-8db779a002f9-kube-proxy\") pod \"kube-proxy-lpdqc\" (UID: \"36207527-b5be-4361-baa1-8db779a002f9\") " pod="kube-system/kube-proxy-lpdqc" Jul 9 23:51:10.445270 kubelet[2635]: I0709 23:51:10.444783 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cilium-run\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.445270 kubelet[2635]: I0709 23:51:10.444845 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cni-path\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.445270 kubelet[2635]: I0709 23:51:10.444918 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-etc-cni-netd\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.445270 kubelet[2635]: I0709 23:51:10.444949 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-hostproc\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.445270 kubelet[2635]: I0709 23:51:10.445030 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cilium-cgroup\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.445429 kubelet[2635]: I0709 23:51:10.445077 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tqjh\" (UniqueName: \"kubernetes.io/projected/7c8d4173-032a-4c40-bcef-27f445bbf0eb-kube-api-access-6tqjh\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 
23:51:10.445429 kubelet[2635]: I0709 23:51:10.445120 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-host-proc-sys-net\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.445429 kubelet[2635]: I0709 23:51:10.445147 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36207527-b5be-4361-baa1-8db779a002f9-lib-modules\") pod \"kube-proxy-lpdqc\" (UID: \"36207527-b5be-4361-baa1-8db779a002f9\") " pod="kube-system/kube-proxy-lpdqc" Jul 9 23:51:10.445429 kubelet[2635]: I0709 23:51:10.445166 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfvlg\" (UniqueName: \"kubernetes.io/projected/36207527-b5be-4361-baa1-8db779a002f9-kube-api-access-rfvlg\") pod \"kube-proxy-lpdqc\" (UID: \"36207527-b5be-4361-baa1-8db779a002f9\") " pod="kube-system/kube-proxy-lpdqc" Jul 9 23:51:10.445429 kubelet[2635]: I0709 23:51:10.445181 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-bpf-maps\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.445543 kubelet[2635]: I0709 23:51:10.445200 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-xtables-lock\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.445543 kubelet[2635]: I0709 23:51:10.445215 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-host-proc-sys-kernel\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.445543 kubelet[2635]: I0709 23:51:10.445236 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c8d4173-032a-4c40-bcef-27f445bbf0eb-hubble-tls\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.445543 kubelet[2635]: I0709 23:51:10.445270 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36207527-b5be-4361-baa1-8db779a002f9-xtables-lock\") pod \"kube-proxy-lpdqc\" (UID: \"36207527-b5be-4361-baa1-8db779a002f9\") " pod="kube-system/kube-proxy-lpdqc" Jul 9 23:51:10.445543 kubelet[2635]: I0709 23:51:10.445295 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cilium-config-path\") pod \"cilium-4rvsh\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") " pod="kube-system/cilium-4rvsh" Jul 9 23:51:10.660195 kubelet[2635]: E0709 23:51:10.659465 2635 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" 
not found Jul 9 23:51:10.660195 kubelet[2635]: E0709 23:51:10.659507 2635 projected.go:194] Error preparing data for projected volume kube-api-access-6tqjh for pod kube-system/cilium-4rvsh: configmap "kube-root-ca.crt" not found Jul 9 23:51:10.660195 kubelet[2635]: E0709 23:51:10.659582 2635 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7c8d4173-032a-4c40-bcef-27f445bbf0eb-kube-api-access-6tqjh podName:7c8d4173-032a-4c40-bcef-27f445bbf0eb nodeName:}" failed. No retries permitted until 2025-07-09 23:51:11.159557045 +0000 UTC m=+6.436586156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6tqjh" (UniqueName: "kubernetes.io/projected/7c8d4173-032a-4c40-bcef-27f445bbf0eb-kube-api-access-6tqjh") pod "cilium-4rvsh" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb") : configmap "kube-root-ca.crt" not found Jul 9 23:51:10.662490 kubelet[2635]: E0709 23:51:10.662062 2635 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 9 23:51:10.662490 kubelet[2635]: E0709 23:51:10.662086 2635 projected.go:194] Error preparing data for projected volume kube-api-access-rfvlg for pod kube-system/kube-proxy-lpdqc: configmap "kube-root-ca.crt" not found Jul 9 23:51:10.662490 kubelet[2635]: E0709 23:51:10.662136 2635 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36207527-b5be-4361-baa1-8db779a002f9-kube-api-access-rfvlg podName:36207527-b5be-4361-baa1-8db779a002f9 nodeName:}" failed. No retries permitted until 2025-07-09 23:51:11.162115045 +0000 UTC m=+6.439144156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rfvlg" (UniqueName: "kubernetes.io/projected/36207527-b5be-4361-baa1-8db779a002f9-kube-api-access-rfvlg") pod "kube-proxy-lpdqc" (UID: "36207527-b5be-4361-baa1-8db779a002f9") : configmap "kube-root-ca.crt" not found Jul 9 23:51:10.814298 kubelet[2635]: E0709 23:51:10.814246 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:10.839515 kubelet[2635]: E0709 23:51:10.839478 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:10.960077 systemd[1]: Created slice kubepods-besteffort-pod2a760908_a258_45da_b084_2f437acba1af.slice - libcontainer container kubepods-besteffort-pod2a760908_a258_45da_b084_2f437acba1af.slice. 
Jul 9 23:51:11.048875 kubelet[2635]: I0709 23:51:11.048821 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a760908-a258-45da-b084-2f437acba1af-cilium-config-path\") pod \"cilium-operator-5d85765b45-lhqh9\" (UID: \"2a760908-a258-45da-b084-2f437acba1af\") " pod="kube-system/cilium-operator-5d85765b45-lhqh9" Jul 9 23:51:11.048974 kubelet[2635]: I0709 23:51:11.048882 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h58dj\" (UniqueName: \"kubernetes.io/projected/2a760908-a258-45da-b084-2f437acba1af-kube-api-access-h58dj\") pod \"cilium-operator-5d85765b45-lhqh9\" (UID: \"2a760908-a258-45da-b084-2f437acba1af\") " pod="kube-system/cilium-operator-5d85765b45-lhqh9" Jul 9 23:51:11.298167 kubelet[2635]: E0709 23:51:11.298128 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:11.298764 containerd[1516]: time="2025-07-09T23:51:11.298716986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4rvsh,Uid:7c8d4173-032a-4c40-bcef-27f445bbf0eb,Namespace:kube-system,Attempt:0,}" Jul 9 23:51:11.306012 kubelet[2635]: E0709 23:51:11.305939 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:11.306469 containerd[1516]: time="2025-07-09T23:51:11.306422810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lpdqc,Uid:36207527-b5be-4361-baa1-8db779a002f9,Namespace:kube-system,Attempt:0,}" Jul 9 23:51:11.563336 kubelet[2635]: E0709 23:51:11.563184 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:11.563844 containerd[1516]: time="2025-07-09T23:51:11.563706419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lhqh9,Uid:2a760908-a258-45da-b084-2f437acba1af,Namespace:kube-system,Attempt:0,}" Jul 9 23:51:11.651318 containerd[1516]: time="2025-07-09T23:51:11.651123062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:51:11.651318 containerd[1516]: time="2025-07-09T23:51:11.651277555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:51:11.652096 containerd[1516]: time="2025-07-09T23:51:11.651292523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:51:11.652167 containerd[1516]: time="2025-07-09T23:51:11.652078731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:51:11.657522 containerd[1516]: time="2025-07-09T23:51:11.654766033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:51:11.657522 containerd[1516]: time="2025-07-09T23:51:11.654834793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:51:11.657522 containerd[1516]: time="2025-07-09T23:51:11.654850613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:51:11.657522 containerd[1516]: time="2025-07-09T23:51:11.654930624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:51:11.664870 containerd[1516]: time="2025-07-09T23:51:11.664727643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:51:11.664870 containerd[1516]: time="2025-07-09T23:51:11.664834115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:51:11.665048 containerd[1516]: time="2025-07-09T23:51:11.664852550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:51:11.665527 containerd[1516]: time="2025-07-09T23:51:11.665450411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:51:11.679021 systemd[1]: Started cri-containerd-61b4db82fd6e4e8998ec3d7e64e46a9ee4407f5cd74792d22e18aeadfa67f91e.scope - libcontainer container 61b4db82fd6e4e8998ec3d7e64e46a9ee4407f5cd74792d22e18aeadfa67f91e. Jul 9 23:51:11.683744 systemd[1]: Started cri-containerd-13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec.scope - libcontainer container 13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec. Jul 9 23:51:11.687633 systemd[1]: Started cri-containerd-856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6.scope - libcontainer container 856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6. 
Jul 9 23:51:11.716793 containerd[1516]: time="2025-07-09T23:51:11.716740728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lpdqc,Uid:36207527-b5be-4361-baa1-8db779a002f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"61b4db82fd6e4e8998ec3d7e64e46a9ee4407f5cd74792d22e18aeadfa67f91e\"" Jul 9 23:51:11.719498 kubelet[2635]: E0709 23:51:11.719454 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:11.726537 containerd[1516]: time="2025-07-09T23:51:11.726370110Z" level=info msg="CreateContainer within sandbox \"61b4db82fd6e4e8998ec3d7e64e46a9ee4407f5cd74792d22e18aeadfa67f91e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 23:51:11.733920 containerd[1516]: time="2025-07-09T23:51:11.733562233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4rvsh,Uid:7c8d4173-032a-4c40-bcef-27f445bbf0eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\"" Jul 9 23:51:11.735635 kubelet[2635]: E0709 23:51:11.735613 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:11.737575 containerd[1516]: time="2025-07-09T23:51:11.737511121Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 9 23:51:11.744854 containerd[1516]: time="2025-07-09T23:51:11.744827389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lhqh9,Uid:2a760908-a258-45da-b084-2f437acba1af,Namespace:kube-system,Attempt:0,} returns sandbox id \"856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6\"" Jul 9 23:51:11.745587 kubelet[2635]: E0709 23:51:11.745561 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:11.785895 containerd[1516]: time="2025-07-09T23:51:11.785849015Z" level=info msg="CreateContainer within sandbox \"61b4db82fd6e4e8998ec3d7e64e46a9ee4407f5cd74792d22e18aeadfa67f91e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"929768af590ef29663a25e5ad026a68eb273c50bbf1788c5f8fddbbcd929aa11\"" Jul 9 23:51:11.786449 containerd[1516]: time="2025-07-09T23:51:11.786405397Z" level=info msg="StartContainer for \"929768af590ef29663a25e5ad026a68eb273c50bbf1788c5f8fddbbcd929aa11\"" Jul 9 23:51:11.821938 systemd[1]: Started cri-containerd-929768af590ef29663a25e5ad026a68eb273c50bbf1788c5f8fddbbcd929aa11.scope - libcontainer container 929768af590ef29663a25e5ad026a68eb273c50bbf1788c5f8fddbbcd929aa11. 
Jul 9 23:51:11.860759 containerd[1516]: time="2025-07-09T23:51:11.860710442Z" level=info msg="StartContainer for \"929768af590ef29663a25e5ad026a68eb273c50bbf1788c5f8fddbbcd929aa11\" returns successfully" Jul 9 23:51:12.849074 kubelet[2635]: E0709 23:51:12.849028 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:12.923630 kubelet[2635]: I0709 23:51:12.923577 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lpdqc" podStartSLOduration=3.923559503 podStartE2EDuration="3.923559503s" podCreationTimestamp="2025-07-09 23:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:51:12.923415752 +0000 UTC m=+8.200444863" watchObservedRunningTime="2025-07-09 23:51:12.923559503 +0000 UTC m=+8.200588615" Jul 9 23:51:13.671359 kubelet[2635]: E0709 23:51:13.671319 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:13.850317 kubelet[2635]: E0709 23:51:13.850284 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:13.850317 kubelet[2635]: E0709 23:51:13.850329 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:18.013788 kubelet[2635]: E0709 23:51:18.013751 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:22.200507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773965658.mount: Deactivated successfully. Jul 9 23:51:30.099562 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:55054.service - OpenSSH per-connection server daemon (10.0.0.1:55054). Jul 9 23:51:30.166129 sshd[3043]: Accepted publickey for core from 10.0.0.1 port 55054 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:51:30.167795 sshd-session[3043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:51:30.172277 systemd-logind[1494]: New session 10 of user core. Jul 9 23:51:30.182963 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 9 23:51:30.321789 sshd[3045]: Connection closed by 10.0.0.1 port 55054 Jul 9 23:51:30.322160 sshd-session[3043]: pam_unix(sshd:session): session closed for user core Jul 9 23:51:30.326412 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:55054.service: Deactivated successfully. Jul 9 23:51:30.328722 systemd[1]: session-10.scope: Deactivated successfully. Jul 9 23:51:30.329566 systemd-logind[1494]: Session 10 logged out. Waiting for processes to exit. Jul 9 23:51:30.330635 systemd-logind[1494]: Removed session 10. Jul 9 23:51:35.337603 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:55058.service - OpenSSH per-connection server daemon (10.0.0.1:55058). 
Jul 9 23:51:35.979242 containerd[1516]: time="2025-07-09T23:51:35.979181522Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:51:35.980429 containerd[1516]: time="2025-07-09T23:51:35.980383981Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 9 23:51:35.981457 containerd[1516]: time="2025-07-09T23:51:35.981422122Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:51:35.983251 containerd[1516]: time="2025-07-09T23:51:35.983183792Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 24.245615592s" Jul 9 23:51:35.983332 containerd[1516]: time="2025-07-09T23:51:35.983260045Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 9 23:51:35.988830 containerd[1516]: time="2025-07-09T23:51:35.986374366Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 9 23:51:35.988830 containerd[1516]: time="2025-07-09T23:51:35.986553422Z" level=info msg="CreateContainer within sandbox \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 23:51:36.038511 sshd[3064]: Accepted publickey for core from 10.0.0.1 port 55058 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:51:36.040698 sshd-session[3064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:51:36.051967 containerd[1516]: time="2025-07-09T23:51:36.051919278Z" level=info msg="CreateContainer within sandbox \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e\"" Jul 9 23:51:36.052771 containerd[1516]: time="2025-07-09T23:51:36.052737915Z" level=info msg="StartContainer for \"447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e\"" Jul 9 23:51:36.056326 systemd-logind[1494]: New session 11 of user core. Jul 9 23:51:36.061984 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 9 23:51:36.092006 systemd[1]: Started cri-containerd-447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e.scope - libcontainer container 447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e. Jul 9 23:51:36.245969 systemd[1]: cri-containerd-447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e.scope: Deactivated successfully. 
Jul 9 23:51:36.271801 containerd[1516]: time="2025-07-09T23:51:36.271733427Z" level=info msg="StartContainer for \"447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e\" returns successfully" Jul 9 23:51:36.285663 sshd[3081]: Connection closed by 10.0.0.1 port 55058 Jul 9 23:51:36.286799 sshd-session[3064]: pam_unix(sshd:session): session closed for user core Jul 9 23:51:36.290662 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:55058.service: Deactivated successfully. Jul 9 23:51:36.293030 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 23:51:36.295074 systemd-logind[1494]: Session 11 logged out. Waiting for processes to exit. Jul 9 23:51:36.296403 systemd-logind[1494]: Removed session 11. Jul 9 23:51:36.381131 containerd[1516]: time="2025-07-09T23:51:36.381018844Z" level=info msg="shim disconnected" id=447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e namespace=k8s.io Jul 9 23:51:36.381131 containerd[1516]: time="2025-07-09T23:51:36.381109233Z" level=warning msg="cleaning up after shim disconnected" id=447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e namespace=k8s.io Jul 9 23:51:36.381131 containerd[1516]: time="2025-07-09T23:51:36.381118341Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:51:36.889516 kubelet[2635]: E0709 23:51:36.889481 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:36.891242 containerd[1516]: time="2025-07-09T23:51:36.891176826Z" level=info msg="CreateContainer within sandbox \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 23:51:37.038322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e-rootfs.mount: Deactivated successfully. Jul 9 23:51:37.458553 containerd[1516]: time="2025-07-09T23:51:37.458484076Z" level=info msg="CreateContainer within sandbox \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4\"" Jul 9 23:51:37.459202 containerd[1516]: time="2025-07-09T23:51:37.459166437Z" level=info msg="StartContainer for \"b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4\"" Jul 9 23:51:37.491985 systemd[1]: Started cri-containerd-b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4.scope - libcontainer container b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4. Jul 9 23:51:37.531859 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 23:51:37.532108 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:51:37.532639 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:51:37.537127 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:51:37.537340 systemd[1]: cri-containerd-b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4.scope: Deactivated successfully. Jul 9 23:51:37.560226 containerd[1516]: time="2025-07-09T23:51:37.560169425Z" level=info msg="StartContainer for \"b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4\" returns successfully" Jul 9 23:51:37.580138 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 9 23:51:37.649935 containerd[1516]: time="2025-07-09T23:51:37.649864233Z" level=info msg="shim disconnected" id=b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4 namespace=k8s.io Jul 9 23:51:37.649935 containerd[1516]: time="2025-07-09T23:51:37.649925417Z" level=warning msg="cleaning up after shim disconnected" id=b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4 namespace=k8s.io Jul 9 23:51:37.649935 containerd[1516]: time="2025-07-09T23:51:37.649933663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:51:37.892570 kubelet[2635]: E0709 23:51:37.892512 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:37.894448 containerd[1516]: time="2025-07-09T23:51:37.894411957Z" level=info msg="CreateContainer within sandbox \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 23:51:37.925362 containerd[1516]: time="2025-07-09T23:51:37.925296899Z" level=info msg="CreateContainer within sandbox \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f\"" Jul 9 23:51:37.925906 containerd[1516]: time="2025-07-09T23:51:37.925883040Z" level=info msg="StartContainer for \"c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f\"" Jul 9 23:51:37.959966 systemd[1]: Started cri-containerd-c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f.scope - libcontainer container c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f. Jul 9 23:51:37.996171 containerd[1516]: time="2025-07-09T23:51:37.996100552Z" level=info msg="StartContainer for \"c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f\" returns successfully" Jul 9 23:51:37.997288 systemd[1]: cri-containerd-c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f.scope: Deactivated successfully. 
Jul 9 23:51:38.025023 containerd[1516]: time="2025-07-09T23:51:38.024953504Z" level=info msg="shim disconnected" id=c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f namespace=k8s.io Jul 9 23:51:38.025023 containerd[1516]: time="2025-07-09T23:51:38.025014979Z" level=warning msg="cleaning up after shim disconnected" id=c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f namespace=k8s.io Jul 9 23:51:38.025023 containerd[1516]: time="2025-07-09T23:51:38.025024537Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:51:38.896525 kubelet[2635]: E0709 23:51:38.896471 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:38.898224 containerd[1516]: time="2025-07-09T23:51:38.898173134Z" level=info msg="CreateContainer within sandbox \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 9 23:51:38.923581 containerd[1516]: time="2025-07-09T23:51:38.923510011Z" level=info msg="CreateContainer within sandbox \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463\"" Jul 9 23:51:38.925326 containerd[1516]: time="2025-07-09T23:51:38.925090048Z" level=info msg="StartContainer for \"38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463\"" Jul 9 23:51:38.964936 systemd[1]: Started cri-containerd-38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463.scope - libcontainer container 38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463. Jul 9 23:51:38.989003 systemd[1]: cri-containerd-38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463.scope: Deactivated successfully. Jul 9 23:51:38.992289 containerd[1516]: time="2025-07-09T23:51:38.992251919Z" level=info msg="StartContainer for \"38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463\" returns successfully" Jul 9 23:51:39.031446 containerd[1516]: time="2025-07-09T23:51:39.031373740Z" level=info msg="shim disconnected" id=38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463 namespace=k8s.io Jul 9 23:51:39.031446 containerd[1516]: time="2025-07-09T23:51:39.031440586Z" level=warning msg="cleaning up after shim disconnected" id=38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463 namespace=k8s.io Jul 9 23:51:39.031446 containerd[1516]: time="2025-07-09T23:51:39.031449292Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:51:39.038867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463-rootfs.mount: Deactivated successfully. Jul 9 23:51:39.899538 kubelet[2635]: E0709 23:51:39.899500 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:39.902570 containerd[1516]: time="2025-07-09T23:51:39.902521589Z" level=info msg="CreateContainer within sandbox \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 9 23:51:40.372866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314238980.mount: Deactivated successfully. 
Jul 9 23:51:40.378297 containerd[1516]: time="2025-07-09T23:51:40.378193118Z" level=info msg="CreateContainer within sandbox \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\"" Jul 9 23:51:40.379333 containerd[1516]: time="2025-07-09T23:51:40.379251154Z" level=info msg="StartContainer for \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\"" Jul 9 23:51:40.420011 systemd[1]: Started cri-containerd-b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd.scope - libcontainer container b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd. Jul 9 23:51:40.458439 containerd[1516]: time="2025-07-09T23:51:40.458378170Z" level=info msg="StartContainer for \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\" returns successfully" Jul 9 23:51:40.555506 kubelet[2635]: I0709 23:51:40.555327 2635 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 9 23:51:40.777552 systemd[1]: Created slice kubepods-burstable-pod2886e9ed_4df2_422d_be37_a1d73aa8df66.slice - libcontainer container kubepods-burstable-pod2886e9ed_4df2_422d_be37_a1d73aa8df66.slice. Jul 9 23:51:40.785418 systemd[1]: Created slice kubepods-burstable-pod64b16e81_3462_4749_b12a_e6dd2e5853b0.slice - libcontainer container kubepods-burstable-pod64b16e81_3462_4749_b12a_e6dd2e5853b0.slice. Jul 9 23:51:40.904013 kubelet[2635]: E0709 23:51:40.903975 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:40.935799 kubelet[2635]: I0709 23:51:40.935761 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64b16e81-3462-4749-b12a-e6dd2e5853b0-config-volume\") pod \"coredns-7c65d6cfc9-xr677\" (UID: \"64b16e81-3462-4749-b12a-e6dd2e5853b0\") " pod="kube-system/coredns-7c65d6cfc9-xr677" Jul 9 23:51:40.935884 kubelet[2635]: I0709 23:51:40.935824 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4xfg\" (UniqueName: \"kubernetes.io/projected/2886e9ed-4df2-422d-be37-a1d73aa8df66-kube-api-access-k4xfg\") pod \"coredns-7c65d6cfc9-4tpg8\" (UID: \"2886e9ed-4df2-422d-be37-a1d73aa8df66\") " pod="kube-system/coredns-7c65d6cfc9-4tpg8" Jul 9 23:51:40.935884 kubelet[2635]: I0709 23:51:40.935854 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvklv\" (UniqueName: \"kubernetes.io/projected/64b16e81-3462-4749-b12a-e6dd2e5853b0-kube-api-access-bvklv\") pod \"coredns-7c65d6cfc9-xr677\" (UID: \"64b16e81-3462-4749-b12a-e6dd2e5853b0\") " pod="kube-system/coredns-7c65d6cfc9-xr677" Jul 9 23:51:40.935884 kubelet[2635]: I0709 23:51:40.935880 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2886e9ed-4df2-422d-be37-a1d73aa8df66-config-volume\") pod \"coredns-7c65d6cfc9-4tpg8\" (UID: \"2886e9ed-4df2-422d-be37-a1d73aa8df66\") " pod="kube-system/coredns-7c65d6cfc9-4tpg8" Jul 9 23:51:41.025651 kubelet[2635]: I0709 23:51:41.025455 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4rvsh" podStartSLOduration=7.777285089 
podStartE2EDuration="32.025422421s" podCreationTimestamp="2025-07-09 23:51:09 +0000 UTC" firstStartedPulling="2025-07-09 23:51:11.736622701 +0000 UTC m=+7.013651812" lastFinishedPulling="2025-07-09 23:51:35.984760033 +0000 UTC m=+31.261789144" observedRunningTime="2025-07-09 23:51:41.023213303 +0000 UTC m=+36.300242414" watchObservedRunningTime="2025-07-09 23:51:41.025422421 +0000 UTC m=+36.302451532" Jul 9 23:51:41.082094 kubelet[2635]: E0709 23:51:41.081945 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:41.083025 containerd[1516]: time="2025-07-09T23:51:41.082971698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4tpg8,Uid:2886e9ed-4df2-422d-be37-a1d73aa8df66,Namespace:kube-system,Attempt:0,}" Jul 9 23:51:41.089209 kubelet[2635]: E0709 23:51:41.089173 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:41.091835 containerd[1516]: time="2025-07-09T23:51:41.089607659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xr677,Uid:64b16e81-3462-4749-b12a-e6dd2e5853b0,Namespace:kube-system,Attempt:0,}" Jul 9 23:51:41.160689 containerd[1516]: time="2025-07-09T23:51:41.160631217Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:51:41.166637 containerd[1516]: time="2025-07-09T23:51:41.166587972Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 9 23:51:41.174080 containerd[1516]: time="2025-07-09T23:51:41.174033092Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:51:41.179131 containerd[1516]: time="2025-07-09T23:51:41.177468813Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.191053469s" Jul 9 23:51:41.179131 containerd[1516]: time="2025-07-09T23:51:41.177524878Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 9 23:51:41.191387 containerd[1516]: time="2025-07-09T23:51:41.191324830Z" level=info msg="CreateContainer within sandbox \"856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 9 23:51:41.215961 containerd[1516]: time="2025-07-09T23:51:41.215901928Z" level=info msg="CreateContainer within sandbox \"856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\"" Jul 9 23:51:41.216622 containerd[1516]: time="2025-07-09T23:51:41.216492486Z" level=info msg="StartContainer for \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\"" Jul 9 23:51:41.245112 systemd[1]: Started cri-containerd-355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57.scope - libcontainer container 355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57. Jul 9 23:51:41.274183 containerd[1516]: time="2025-07-09T23:51:41.274117416Z" level=info msg="StartContainer for \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\" returns successfully" Jul 9 23:51:41.314216 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:49728.service - OpenSSH per-connection server daemon (10.0.0.1:49728). Jul 9 23:51:41.362738 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 49728 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:51:41.364088 sshd-session[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:51:41.378267 systemd[1]: run-containerd-runc-k8s.io-b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd-runc.ST0KZm.mount: Deactivated successfully. Jul 9 23:51:41.384718 systemd-logind[1494]: New session 12 of user core. Jul 9 23:51:41.395050 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 9 23:51:41.713546 sshd[3518]: Connection closed by 10.0.0.1 port 49728 Jul 9 23:51:41.715333 sshd-session[3514]: pam_unix(sshd:session): session closed for user core Jul 9 23:51:41.725379 systemd-logind[1494]: Session 12 logged out. Waiting for processes to exit. Jul 9 23:51:41.726480 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:49728.service: Deactivated successfully. Jul 9 23:51:41.730647 systemd[1]: session-12.scope: Deactivated successfully. Jul 9 23:51:41.732621 systemd-logind[1494]: Removed session 12. 
Jul 9 23:51:41.907200 kubelet[2635]: E0709 23:51:41.907116 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:41.907851 kubelet[2635]: E0709 23:51:41.907330 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:42.907988 kubelet[2635]: E0709 23:51:42.907938 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:42.907988 kubelet[2635]: E0709 23:51:42.907943 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:44.780123 systemd-networkd[1428]: cilium_host: Link UP Jul 9 23:51:44.780375 systemd-networkd[1428]: cilium_net: Link UP Jul 9 23:51:44.780653 systemd-networkd[1428]: cilium_net: Gained carrier Jul 9 23:51:44.780948 systemd-networkd[1428]: cilium_host: Gained carrier Jul 9 23:51:44.888241 systemd-networkd[1428]: cilium_net: Gained IPv6LL Jul 9 23:51:44.892707 systemd-networkd[1428]: cilium_vxlan: Link UP Jul 9 23:51:44.892718 systemd-networkd[1428]: cilium_vxlan: Gained carrier Jul 9 23:51:44.945367 kubelet[2635]: E0709 23:51:44.945320 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:45.123834 kernel: NET: Registered PF_ALG protocol family Jul 9 23:51:45.808035 systemd-networkd[1428]: cilium_host: Gained IPv6LL Jul 9 23:51:45.835014 systemd-networkd[1428]: lxc_health: Link UP Jul 9 23:51:45.835486 systemd-networkd[1428]: lxc_health: Gained carrier Jul 9 23:51:46.209240 systemd-networkd[1428]: lxca0cd4c83e021: Link UP Jul 9 23:51:46.228836 kernel: eth0: renamed from tmpda464 Jul 9 23:51:46.235385 kernel: eth0: renamed from tmp507c2 Jul 9 23:51:46.241736 systemd-networkd[1428]: lxccb6de00ec594: Link UP Jul 9 23:51:46.242513 systemd-networkd[1428]: lxca0cd4c83e021: Gained carrier Jul 9 23:51:46.242793 systemd-networkd[1428]: lxccb6de00ec594: Gained carrier Jul 9 23:51:46.730057 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:49734.service - OpenSSH per-connection server daemon (10.0.0.1:49734). Jul 9 23:51:46.777069 sshd[3919]: Accepted publickey for core from 10.0.0.1 port 49734 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:51:46.778875 sshd-session[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:51:46.783722 systemd-logind[1494]: New session 13 of user core. Jul 9 23:51:46.793951 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 9 23:51:46.831955 systemd-networkd[1428]: cilium_vxlan: Gained IPv6LL Jul 9 23:51:46.932985 sshd[3921]: Connection closed by 10.0.0.1 port 49734 Jul 9 23:51:46.933553 sshd-session[3919]: pam_unix(sshd:session): session closed for user core Jul 9 23:51:46.947463 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:49734.service: Deactivated successfully. Jul 9 23:51:46.950199 systemd[1]: session-13.scope: Deactivated successfully. Jul 9 23:51:46.950994 systemd-logind[1494]: Session 13 logged out. Waiting for processes to exit. 
Jul 9 23:51:46.960502 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:49746.service - OpenSSH per-connection server daemon (10.0.0.1:49746). Jul 9 23:51:46.961661 systemd-logind[1494]: Removed session 13. Jul 9 23:51:47.000169 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 49746 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:51:47.002066 sshd-session[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:51:47.008279 systemd-logind[1494]: New session 14 of user core. Jul 9 23:51:47.016030 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 9 23:51:47.174848 sshd[3939]: Connection closed by 10.0.0.1 port 49746 Jul 9 23:51:47.180176 sshd-session[3936]: pam_unix(sshd:session): session closed for user core Jul 9 23:51:47.200075 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:49758.service - OpenSSH per-connection server daemon (10.0.0.1:49758). Jul 9 23:51:47.200644 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:49746.service: Deactivated successfully. Jul 9 23:51:47.204217 systemd[1]: session-14.scope: Deactivated successfully. Jul 9 23:51:47.208636 systemd-logind[1494]: Session 14 logged out. Waiting for processes to exit. Jul 9 23:51:47.210722 systemd-logind[1494]: Removed session 14. Jul 9 23:51:47.246070 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 49758 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:51:47.248323 sshd-session[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:51:47.255462 systemd-logind[1494]: New session 15 of user core. Jul 9 23:51:47.265989 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 9 23:51:47.300632 kubelet[2635]: E0709 23:51:47.300479 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:47.319932 kubelet[2635]: I0709 23:51:47.319843 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-lhqh9" podStartSLOduration=7.878125927 podStartE2EDuration="37.319826712s" podCreationTimestamp="2025-07-09 23:51:10 +0000 UTC" firstStartedPulling="2025-07-09 23:51:11.746224321 +0000 UTC m=+7.023253432" lastFinishedPulling="2025-07-09 23:51:41.187925106 +0000 UTC m=+36.464954217" observedRunningTime="2025-07-09 23:51:42.16262971 +0000 UTC m=+37.439658821" watchObservedRunningTime="2025-07-09 23:51:47.319826712 +0000 UTC m=+42.596855823" Jul 9 23:51:47.392184 sshd[3953]: Connection closed by 10.0.0.1 port 49758 Jul 9 23:51:47.392527 sshd-session[3948]: pam_unix(sshd:session): session closed for user core Jul 9 23:51:47.395895 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:49758.service: Deactivated successfully. Jul 9 23:51:47.398603 systemd[1]: session-15.scope: Deactivated successfully. Jul 9 23:51:47.400404 systemd-logind[1494]: Session 15 logged out. Waiting for processes to exit. Jul 9 23:51:47.401382 systemd-logind[1494]: Removed session 15. 
Jul 9 23:51:47.472060 systemd-networkd[1428]: lxc_health: Gained IPv6LL Jul 9 23:51:47.792024 systemd-networkd[1428]: lxca0cd4c83e021: Gained IPv6LL Jul 9 23:51:47.917076 kubelet[2635]: E0709 23:51:47.917044 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:48.112060 systemd-networkd[1428]: lxccb6de00ec594: Gained IPv6LL Jul 9 23:51:49.678850 containerd[1516]: time="2025-07-09T23:51:49.678417907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:51:49.678850 containerd[1516]: time="2025-07-09T23:51:49.678494190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:51:49.678850 containerd[1516]: time="2025-07-09T23:51:49.678508557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:51:49.679635 containerd[1516]: time="2025-07-09T23:51:49.679324128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:51:49.684642 containerd[1516]: time="2025-07-09T23:51:49.684548323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 23:51:49.685134 containerd[1516]: time="2025-07-09T23:51:49.685064751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 23:51:49.685219 containerd[1516]: time="2025-07-09T23:51:49.685121308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:51:49.685418 containerd[1516]: time="2025-07-09T23:51:49.685377709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 23:51:49.716006 systemd[1]: Started cri-containerd-507c2d136cfa6618a3cf18b0420f765e1b0339788bcc7267ee0ed9fd1afde7b7.scope - libcontainer container 507c2d136cfa6618a3cf18b0420f765e1b0339788bcc7267ee0ed9fd1afde7b7. Jul 9 23:51:49.719880 systemd[1]: Started cri-containerd-da464a62a216db2817394a87e5609144c5e8699380be16d4b47f3605c3618869.scope - libcontainer container da464a62a216db2817394a87e5609144c5e8699380be16d4b47f3605c3618869. 
Jul 9 23:51:49.730250 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:51:49.733918 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 23:51:49.764538 containerd[1516]: time="2025-07-09T23:51:49.764496394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xr677,Uid:64b16e81-3462-4749-b12a-e6dd2e5853b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"da464a62a216db2817394a87e5609144c5e8699380be16d4b47f3605c3618869\"" Jul 9 23:51:49.765613 kubelet[2635]: E0709 23:51:49.765578 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:49.767889 containerd[1516]: time="2025-07-09T23:51:49.767847443Z" level=info msg="CreateContainer within sandbox \"da464a62a216db2817394a87e5609144c5e8699380be16d4b47f3605c3618869\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 23:51:49.769528 containerd[1516]: time="2025-07-09T23:51:49.769481069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4tpg8,Uid:2886e9ed-4df2-422d-be37-a1d73aa8df66,Namespace:kube-system,Attempt:0,} returns sandbox id \"507c2d136cfa6618a3cf18b0420f765e1b0339788bcc7267ee0ed9fd1afde7b7\"" Jul 9 23:51:49.770190 kubelet[2635]: E0709 23:51:49.770165 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:49.771631 containerd[1516]: time="2025-07-09T23:51:49.771604604Z" level=info msg="CreateContainer within sandbox \"507c2d136cfa6618a3cf18b0420f765e1b0339788bcc7267ee0ed9fd1afde7b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 23:51:50.130422 containerd[1516]: time="2025-07-09T23:51:50.130338429Z" level=info msg="CreateContainer within sandbox \"da464a62a216db2817394a87e5609144c5e8699380be16d4b47f3605c3618869\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f1d8c167a79a0493f6e423cc0b4160726bb5d586d4a97e1e9bc3eb61683c097\"" Jul 9 23:51:50.131132 containerd[1516]: time="2025-07-09T23:51:50.131068970Z" level=info msg="StartContainer for \"2f1d8c167a79a0493f6e423cc0b4160726bb5d586d4a97e1e9bc3eb61683c097\"" Jul 9 23:51:50.142923 containerd[1516]: time="2025-07-09T23:51:50.142850029Z" level=info msg="CreateContainer within sandbox \"507c2d136cfa6618a3cf18b0420f765e1b0339788bcc7267ee0ed9fd1afde7b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3522f9b74a8ec288c93bfd5e5cae74300fa2d94b737fa48e2aabb4e10117d3c\"" Jul 9 23:51:50.143626 containerd[1516]: time="2025-07-09T23:51:50.143574829Z" level=info msg="StartContainer for \"f3522f9b74a8ec288c93bfd5e5cae74300fa2d94b737fa48e2aabb4e10117d3c\"" Jul 9 23:51:50.161002 systemd[1]: Started cri-containerd-2f1d8c167a79a0493f6e423cc0b4160726bb5d586d4a97e1e9bc3eb61683c097.scope - libcontainer container 2f1d8c167a79a0493f6e423cc0b4160726bb5d586d4a97e1e9bc3eb61683c097. Jul 9 23:51:50.176986 systemd[1]: Started cri-containerd-f3522f9b74a8ec288c93bfd5e5cae74300fa2d94b737fa48e2aabb4e10117d3c.scope - libcontainer container f3522f9b74a8ec288c93bfd5e5cae74300fa2d94b737fa48e2aabb4e10117d3c. 
Jul 9 23:51:50.207662 containerd[1516]: time="2025-07-09T23:51:50.207606284Z" level=info msg="StartContainer for \"2f1d8c167a79a0493f6e423cc0b4160726bb5d586d4a97e1e9bc3eb61683c097\" returns successfully" Jul 9 23:51:50.214360 containerd[1516]: time="2025-07-09T23:51:50.214323390Z" level=info msg="StartContainer for \"f3522f9b74a8ec288c93bfd5e5cae74300fa2d94b737fa48e2aabb4e10117d3c\" returns successfully" Jul 9 23:51:50.685047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1646228385.mount: Deactivated successfully. Jul 9 23:51:50.925057 kubelet[2635]: E0709 23:51:50.924080 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:50.926230 kubelet[2635]: E0709 23:51:50.926037 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:50.937903 kubelet[2635]: I0709 23:51:50.937267 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xr677" podStartSLOduration=40.937246912 podStartE2EDuration="40.937246912s" podCreationTimestamp="2025-07-09 23:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:51:50.936589698 +0000 UTC m=+46.213618809" watchObservedRunningTime="2025-07-09 23:51:50.937246912 +0000 UTC m=+46.214276023" Jul 9 23:51:50.947751 kubelet[2635]: I0709 23:51:50.947678 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4tpg8" podStartSLOduration=40.947651026 podStartE2EDuration="40.947651026s" podCreationTimestamp="2025-07-09 23:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:51:50.94714187 +0000 UTC m=+46.224170981" watchObservedRunningTime="2025-07-09 23:51:50.947651026 +0000 UTC m=+46.224680147" Jul 9 23:51:51.927579 kubelet[2635]: E0709 23:51:51.927519 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:51.928347 kubelet[2635]: E0709 23:51:51.928319 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:52.425305 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:39538.service - OpenSSH per-connection server daemon (10.0.0.1:39538). Jul 9 23:51:52.466617 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 39538 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:51:52.468306 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:51:52.472814 systemd-logind[1494]: New session 16 of user core. Jul 9 23:51:52.482998 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 9 23:51:52.605213 sshd[4151]: Connection closed by 10.0.0.1 port 39538 Jul 9 23:51:52.605840 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Jul 9 23:51:52.611418 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:39538.service: Deactivated successfully. Jul 9 23:51:52.613943 systemd[1]: session-16.scope: Deactivated successfully. 
Jul 9 23:51:52.614710 systemd-logind[1494]: Session 16 logged out. Waiting for processes to exit. Jul 9 23:51:52.615744 systemd-logind[1494]: Removed session 16. Jul 9 23:51:52.929122 kubelet[2635]: E0709 23:51:52.929090 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:52.929613 kubelet[2635]: E0709 23:51:52.929165 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:51:57.623837 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:39544.service - OpenSSH per-connection server daemon (10.0.0.1:39544). Jul 9 23:51:57.668184 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 39544 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:51:57.669976 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:51:57.674621 systemd-logind[1494]: New session 17 of user core. Jul 9 23:51:57.683995 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 9 23:51:57.794677 sshd[4168]: Connection closed by 10.0.0.1 port 39544 Jul 9 23:51:57.795102 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Jul 9 23:51:57.799271 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:39544.service: Deactivated successfully. Jul 9 23:51:57.801913 systemd[1]: session-17.scope: Deactivated successfully. Jul 9 23:51:57.803041 systemd-logind[1494]: Session 17 logged out. Waiting for processes to exit. Jul 9 23:51:57.804251 systemd-logind[1494]: Removed session 17. Jul 9 23:52:02.856278 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:44346.service - OpenSSH per-connection server daemon (10.0.0.1:44346). Jul 9 23:52:02.951279 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 44346 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:52:02.955193 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:52:02.975588 systemd-logind[1494]: New session 18 of user core. Jul 9 23:52:02.986972 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 9 23:52:03.330924 sshd[4183]: Connection closed by 10.0.0.1 port 44346 Jul 9 23:52:03.332163 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Jul 9 23:52:03.338420 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:44346.service: Deactivated successfully. Jul 9 23:52:03.342003 systemd[1]: session-18.scope: Deactivated successfully. Jul 9 23:52:03.397359 systemd-logind[1494]: Session 18 logged out. Waiting for processes to exit. Jul 9 23:52:03.420363 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:44348.service - OpenSSH per-connection server daemon (10.0.0.1:44348). Jul 9 23:52:03.425961 systemd-logind[1494]: Removed session 18. Jul 9 23:52:03.499453 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 44348 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:52:03.500221 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:52:03.511128 systemd-logind[1494]: New session 19 of user core. Jul 9 23:52:03.523781 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 9 23:52:04.544506 sshd[4198]: Connection closed by 10.0.0.1 port 44348 Jul 9 23:52:04.545548 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Jul 9 23:52:04.576338 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:44348.service: Deactivated successfully. Jul 9 23:52:04.580995 systemd[1]: session-19.scope: Deactivated successfully. Jul 9 23:52:04.589054 systemd-logind[1494]: Session 19 logged out. Waiting for processes to exit. Jul 9 23:52:04.607511 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:44354.service - OpenSSH per-connection server daemon (10.0.0.1:44354). Jul 9 23:52:04.610936 systemd-logind[1494]: Removed session 19. Jul 9 23:52:04.680207 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 44354 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:52:04.687223 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:52:04.701225 systemd-logind[1494]: New session 20 of user core. Jul 9 23:52:04.710210 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 9 23:52:07.643775 sshd[4211]: Connection closed by 10.0.0.1 port 44354 Jul 9 23:52:07.644364 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Jul 9 23:52:07.657384 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:44354.service: Deactivated successfully. Jul 9 23:52:07.659790 systemd[1]: session-20.scope: Deactivated successfully. Jul 9 23:52:07.660163 systemd[1]: session-20.scope: Consumed 651ms CPU time, 64.1M memory peak. Jul 9 23:52:07.661642 systemd-logind[1494]: Session 20 logged out. Waiting for processes to exit. Jul 9 23:52:07.668329 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:44364.service - OpenSSH per-connection server daemon (10.0.0.1:44364). Jul 9 23:52:07.669928 systemd-logind[1494]: Removed session 20. Jul 9 23:52:07.709984 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 44364 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:52:07.711897 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:52:07.717461 systemd-logind[1494]: New session 21 of user core. Jul 9 23:52:07.727019 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 9 23:52:08.393680 sshd[4234]: Connection closed by 10.0.0.1 port 44364 Jul 9 23:52:08.394281 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Jul 9 23:52:08.405772 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:44364.service: Deactivated successfully. Jul 9 23:52:08.408279 systemd[1]: session-21.scope: Deactivated successfully. Jul 9 23:52:08.409975 systemd-logind[1494]: Session 21 logged out. Waiting for processes to exit. Jul 9 23:52:08.417217 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:44378.service - OpenSSH per-connection server daemon (10.0.0.1:44378). Jul 9 23:52:08.418444 systemd-logind[1494]: Removed session 21. Jul 9 23:52:08.460177 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 44378 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:52:08.462015 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:52:08.467129 systemd-logind[1494]: New session 22 of user core. Jul 9 23:52:08.481061 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 9 23:52:08.631987 sshd[4247]: Connection closed by 10.0.0.1 port 44378 Jul 9 23:52:08.632559 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Jul 9 23:52:08.639161 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:44378.service: Deactivated successfully. Jul 9 23:52:08.642524 systemd[1]: session-22.scope: Deactivated successfully. Jul 9 23:52:08.643464 systemd-logind[1494]: Session 22 logged out. Waiting for processes to exit. Jul 9 23:52:08.644638 systemd-logind[1494]: Removed session 22. Jul 9 23:52:13.649588 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:60322.service - OpenSSH per-connection server daemon (10.0.0.1:60322). Jul 9 23:52:13.713379 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 60322 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:52:13.715669 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:52:13.721042 systemd-logind[1494]: New session 23 of user core. Jul 9 23:52:13.733130 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 9 23:52:13.868851 sshd[4264]: Connection closed by 10.0.0.1 port 60322 Jul 9 23:52:13.869380 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Jul 9 23:52:13.875568 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:60322.service: Deactivated successfully. Jul 9 23:52:13.878426 systemd[1]: session-23.scope: Deactivated successfully. Jul 9 23:52:13.879688 systemd-logind[1494]: Session 23 logged out. Waiting for processes to exit. Jul 9 23:52:13.882100 systemd-logind[1494]: Removed session 23. Jul 9 23:52:15.822217 kubelet[2635]: E0709 23:52:15.822131 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:52:18.887161 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:60334.service - OpenSSH per-connection server daemon (10.0.0.1:60334). Jul 9 23:52:18.934458 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 60334 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:52:18.936720 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:52:18.941648 systemd-logind[1494]: New session 24 of user core. Jul 9 23:52:18.950172 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 9 23:52:19.074786 sshd[4283]: Connection closed by 10.0.0.1 port 60334 Jul 9 23:52:19.075290 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Jul 9 23:52:19.080234 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:60334.service: Deactivated successfully. Jul 9 23:52:19.082974 systemd[1]: session-24.scope: Deactivated successfully. Jul 9 23:52:19.084401 systemd-logind[1494]: Session 24 logged out. Waiting for processes to exit. Jul 9 23:52:19.085628 systemd-logind[1494]: Removed session 24. Jul 9 23:52:19.821099 kubelet[2635]: E0709 23:52:19.821034 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:52:24.090036 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:41758.service - OpenSSH per-connection server daemon (10.0.0.1:41758). 
Jul 9 23:52:24.136560 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 41758 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:52:24.138611 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:52:24.143722 systemd-logind[1494]: New session 25 of user core. Jul 9 23:52:24.151069 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 9 23:52:24.266355 sshd[4298]: Connection closed by 10.0.0.1 port 41758 Jul 9 23:52:24.266875 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Jul 9 23:52:24.270902 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:41758.service: Deactivated successfully. Jul 9 23:52:24.273370 systemd[1]: session-25.scope: Deactivated successfully. Jul 9 23:52:24.274087 systemd-logind[1494]: Session 25 logged out. Waiting for processes to exit. Jul 9 23:52:24.275060 systemd-logind[1494]: Removed session 25. Jul 9 23:52:29.281319 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:41768.service - OpenSSH per-connection server daemon (10.0.0.1:41768). Jul 9 23:52:29.324653 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 41768 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:52:29.326496 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:52:29.330924 systemd-logind[1494]: New session 26 of user core. Jul 9 23:52:29.346039 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 9 23:52:29.520432 sshd[4313]: Connection closed by 10.0.0.1 port 41768 Jul 9 23:52:29.520787 sshd-session[4311]: pam_unix(sshd:session): session closed for user core Jul 9 23:52:29.524910 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:41768.service: Deactivated successfully. Jul 9 23:52:29.527403 systemd[1]: session-26.scope: Deactivated successfully. Jul 9 23:52:29.528362 systemd-logind[1494]: Session 26 logged out. Waiting for processes to exit. Jul 9 23:52:29.529501 systemd-logind[1494]: Removed session 26. Jul 9 23:52:32.821188 kubelet[2635]: E0709 23:52:32.821147 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:52:33.820505 kubelet[2635]: E0709 23:52:33.820440 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:52:34.537512 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:38152.service - OpenSSH per-connection server daemon (10.0.0.1:38152). Jul 9 23:52:34.582792 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 38152 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:52:34.585685 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:52:34.593120 systemd-logind[1494]: New session 27 of user core. Jul 9 23:52:34.603553 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 9 23:52:34.794643 sshd[4329]: Connection closed by 10.0.0.1 port 38152 Jul 9 23:52:34.798255 sshd-session[4327]: pam_unix(sshd:session): session closed for user core Jul 9 23:52:34.811724 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:38152.service: Deactivated successfully. Jul 9 23:52:34.819252 systemd[1]: session-27.scope: Deactivated successfully. Jul 9 23:52:34.823116 systemd-logind[1494]: Session 27 logged out. Waiting for processes to exit. 
Jul 9 23:52:34.836054 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:38168.service - OpenSSH per-connection server daemon (10.0.0.1:38168). Jul 9 23:52:34.840385 systemd-logind[1494]: Removed session 27. Jul 9 23:52:34.894024 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 38168 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo Jul 9 23:52:34.896667 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:52:34.908048 systemd-logind[1494]: New session 28 of user core. Jul 9 23:52:34.929433 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 9 23:52:37.131877 containerd[1516]: time="2025-07-09T23:52:37.131749692Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:52:37.134612 containerd[1516]: time="2025-07-09T23:52:37.134567499Z" level=info msg="StopContainer for \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\" with timeout 2 (s)" Jul 9 23:52:37.142125 containerd[1516]: time="2025-07-09T23:52:37.141977707Z" level=info msg="Stop container \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\" with signal terminated" Jul 9 23:52:37.143243 containerd[1516]: time="2025-07-09T23:52:37.143214123Z" level=info msg="StopContainer for \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\" with timeout 30 (s)" Jul 9 23:52:37.143645 containerd[1516]: time="2025-07-09T23:52:37.143615463Z" level=info msg="Stop container \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\" with signal terminated" Jul 9 23:52:37.152115 systemd-networkd[1428]: lxc_health: Link DOWN Jul 9 23:52:37.152130 systemd-networkd[1428]: lxc_health: Lost carrier Jul 9 23:52:37.158741 systemd[1]: cri-containerd-355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57.scope: Deactivated successfully. Jul 9 23:52:37.183311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57-rootfs.mount: Deactivated successfully. Jul 9 23:52:37.368736 systemd[1]: cri-containerd-b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd.scope: Deactivated successfully. Jul 9 23:52:37.369224 systemd[1]: cri-containerd-b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd.scope: Consumed 7.156s CPU time, 124.1M memory peak, 704K read from disk, 13.3M written to disk. Jul 9 23:52:37.391918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd-rootfs.mount: Deactivated successfully. 
Jul 9 23:52:37.393305 containerd[1516]: time="2025-07-09T23:52:37.392977857Z" level=info msg="shim disconnected" id=355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57 namespace=k8s.io
Jul 9 23:52:37.393305 containerd[1516]: time="2025-07-09T23:52:37.393044293Z" level=warning msg="cleaning up after shim disconnected" id=355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57 namespace=k8s.io
Jul 9 23:52:37.393305 containerd[1516]: time="2025-07-09T23:52:37.393054613Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 23:52:37.407347 containerd[1516]: time="2025-07-09T23:52:37.407261535Z" level=info msg="shim disconnected" id=b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd namespace=k8s.io
Jul 9 23:52:37.407347 containerd[1516]: time="2025-07-09T23:52:37.407344933Z" level=warning msg="cleaning up after shim disconnected" id=b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd namespace=k8s.io
Jul 9 23:52:37.407347 containerd[1516]: time="2025-07-09T23:52:37.407353650Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 23:52:37.419013 containerd[1516]: time="2025-07-09T23:52:37.418944580Z" level=info msg="StopContainer for \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\" returns successfully"
Jul 9 23:52:37.423355 containerd[1516]: time="2025-07-09T23:52:37.423311274Z" level=info msg="StopPodSandbox for \"856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6\""
Jul 9 23:52:37.430373 containerd[1516]: time="2025-07-09T23:52:37.430319981Z" level=info msg="StopContainer for \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\" returns successfully"
Jul 9 23:52:37.431022 containerd[1516]: time="2025-07-09T23:52:37.430989902Z" level=info msg="StopPodSandbox for \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\""
Jul 9 23:52:37.431095 containerd[1516]: time="2025-07-09T23:52:37.431029528Z" level=info msg="Container to stop \"38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:52:37.431095 containerd[1516]: time="2025-07-09T23:52:37.431087758Z" level=info msg="Container to stop \"c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:52:37.431148 containerd[1516]: time="2025-07-09T23:52:37.431096214Z" level=info msg="Container to stop \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:52:37.431148 containerd[1516]: time="2025-07-09T23:52:37.431104509Z" level=info msg="Container to stop \"447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:52:37.431148 containerd[1516]: time="2025-07-09T23:52:37.431113036Z" level=info msg="Container to stop \"b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:52:37.431479 containerd[1516]: time="2025-07-09T23:52:37.423361249Z" level=info msg="Container to stop \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:52:37.433695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6-shm.mount: Deactivated successfully.
Jul 9 23:52:37.433875 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec-shm.mount: Deactivated successfully.
Jul 9 23:52:37.445318 systemd[1]: cri-containerd-856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6.scope: Deactivated successfully.
Jul 9 23:52:37.450626 systemd[1]: cri-containerd-13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec.scope: Deactivated successfully.
Jul 9 23:52:37.484074 containerd[1516]: time="2025-07-09T23:52:37.483994591Z" level=info msg="shim disconnected" id=856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6 namespace=k8s.io
Jul 9 23:52:37.484074 containerd[1516]: time="2025-07-09T23:52:37.484063031Z" level=warning msg="cleaning up after shim disconnected" id=856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6 namespace=k8s.io
Jul 9 23:52:37.484074 containerd[1516]: time="2025-07-09T23:52:37.484074152Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 23:52:37.484423 containerd[1516]: time="2025-07-09T23:52:37.484257620Z" level=info msg="shim disconnected" id=13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec namespace=k8s.io
Jul 9 23:52:37.484423 containerd[1516]: time="2025-07-09T23:52:37.484279863Z" level=warning msg="cleaning up after shim disconnected" id=13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec namespace=k8s.io
Jul 9 23:52:37.484423 containerd[1516]: time="2025-07-09T23:52:37.484288058Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 23:52:37.503518 containerd[1516]: time="2025-07-09T23:52:37.503456193Z" level=info msg="TearDown network for sandbox \"856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6\" successfully"
Jul 9 23:52:37.503518 containerd[1516]: time="2025-07-09T23:52:37.503505557Z" level=info msg="StopPodSandbox for \"856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6\" returns successfully"
Jul 9 23:52:37.503738 containerd[1516]: time="2025-07-09T23:52:37.503714833Z" level=info msg="TearDown network for sandbox \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\" successfully"
Jul 9 23:52:37.503777 containerd[1516]: time="2025-07-09T23:52:37.503735714Z" level=info msg="StopPodSandbox for \"13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec\" returns successfully"
Jul 9 23:52:37.701169 kubelet[2635]: I0709 23:52:37.700953 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cilium-cgroup\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.701169 kubelet[2635]: I0709 23:52:37.701030 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tqjh\" (UniqueName: \"kubernetes.io/projected/7c8d4173-032a-4c40-bcef-27f445bbf0eb-kube-api-access-6tqjh\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.701169 kubelet[2635]: I0709 23:52:37.701051 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h58dj\" (UniqueName: \"kubernetes.io/projected/2a760908-a258-45da-b084-2f437acba1af-kube-api-access-h58dj\") pod \"2a760908-a258-45da-b084-2f437acba1af\" (UID: \"2a760908-a258-45da-b084-2f437acba1af\") "
Jul 9 23:52:37.701169 kubelet[2635]: I0709 23:52:37.701069 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cni-path\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.701169 kubelet[2635]: I0709 23:52:37.701086 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-hostproc\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.701169 kubelet[2635]: I0709 23:52:37.701102 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-host-proc-sys-net\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.702053 kubelet[2635]: I0709 23:52:37.701118 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cilium-config-path\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.702053 kubelet[2635]: I0709 23:52:37.701136 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a760908-a258-45da-b084-2f437acba1af-cilium-config-path\") pod \"2a760908-a258-45da-b084-2f437acba1af\" (UID: \"2a760908-a258-45da-b084-2f437acba1af\") "
Jul 9 23:52:37.702053 kubelet[2635]: I0709 23:52:37.701157 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-etc-cni-netd\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.702053 kubelet[2635]: I0709 23:52:37.701174 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cilium-run\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.702053 kubelet[2635]: I0709 23:52:37.701159 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 9 23:52:37.702053 kubelet[2635]: I0709 23:52:37.701205 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-host-proc-sys-kernel\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.702272 kubelet[2635]: I0709 23:52:37.701266 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 9 23:52:37.702272 kubelet[2635]: I0709 23:52:37.701288 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c8d4173-032a-4c40-bcef-27f445bbf0eb-clustermesh-secrets\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.702272 kubelet[2635]: I0709 23:52:37.701318 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c8d4173-032a-4c40-bcef-27f445bbf0eb-hubble-tls\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.702272 kubelet[2635]: I0709 23:52:37.701337 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-xtables-lock\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.702272 kubelet[2635]: I0709 23:52:37.701354 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-bpf-maps\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.702272 kubelet[2635]: I0709 23:52:37.701371 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-lib-modules\") pod \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\" (UID: \"7c8d4173-032a-4c40-bcef-27f445bbf0eb\") "
Jul 9 23:52:37.702485 kubelet[2635]: I0709 23:52:37.701404 2635 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.702485 kubelet[2635]: I0709 23:52:37.701415 2635 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.705110 kubelet[2635]: I0709 23:52:37.701317 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cni-path" (OuterVolumeSpecName: "cni-path") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 9 23:52:37.705110 kubelet[2635]: I0709 23:52:37.701334 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-hostproc" (OuterVolumeSpecName: "hostproc") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 9 23:52:37.705110 kubelet[2635]: I0709 23:52:37.701351 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 9 23:52:37.705110 kubelet[2635]: I0709 23:52:37.701439 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 9 23:52:37.705433 kubelet[2635]: I0709 23:52:37.705410 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 9 23:52:37.705969 kubelet[2635]: I0709 23:52:37.705473 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 9 23:52:37.705969 kubelet[2635]: I0709 23:52:37.705559 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c8d4173-032a-4c40-bcef-27f445bbf0eb-kube-api-access-6tqjh" (OuterVolumeSpecName: "kube-api-access-6tqjh") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "kube-api-access-6tqjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 9 23:52:37.705969 kubelet[2635]: I0709 23:52:37.705789 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 9 23:52:37.706164 kubelet[2635]: I0709 23:52:37.706144 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8d4173-032a-4c40-bcef-27f445bbf0eb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 9 23:52:37.706311 kubelet[2635]: I0709 23:52:37.706295 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 9 23:52:37.706570 kubelet[2635]: I0709 23:52:37.706538 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a760908-a258-45da-b084-2f437acba1af-kube-api-access-h58dj" (OuterVolumeSpecName: "kube-api-access-h58dj") pod "2a760908-a258-45da-b084-2f437acba1af" (UID: "2a760908-a258-45da-b084-2f437acba1af"). InnerVolumeSpecName "kube-api-access-h58dj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 9 23:52:37.706653 kubelet[2635]: I0709 23:52:37.706605 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 9 23:52:37.708507 kubelet[2635]: I0709 23:52:37.708480 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c8d4173-032a-4c40-bcef-27f445bbf0eb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7c8d4173-032a-4c40-bcef-27f445bbf0eb" (UID: "7c8d4173-032a-4c40-bcef-27f445bbf0eb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 9 23:52:37.710127 kubelet[2635]: I0709 23:52:37.710092 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a760908-a258-45da-b084-2f437acba1af-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a760908-a258-45da-b084-2f437acba1af" (UID: "2a760908-a258-45da-b084-2f437acba1af"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 9 23:52:37.801797 kubelet[2635]: I0709 23:52:37.801737 2635 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tqjh\" (UniqueName: \"kubernetes.io/projected/7c8d4173-032a-4c40-bcef-27f445bbf0eb-kube-api-access-6tqjh\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.801797 kubelet[2635]: I0709 23:52:37.801780 2635 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h58dj\" (UniqueName: \"kubernetes.io/projected/2a760908-a258-45da-b084-2f437acba1af-kube-api-access-h58dj\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.801797 kubelet[2635]: I0709 23:52:37.801794 2635 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.801797 kubelet[2635]: I0709 23:52:37.801833 2635 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.802130 kubelet[2635]: I0709 23:52:37.801847 2635 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.802130 kubelet[2635]: I0709 23:52:37.801860 2635 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.802130 kubelet[2635]: I0709 23:52:37.801872 2635 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a760908-a258-45da-b084-2f437acba1af-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.802130 kubelet[2635]: I0709 23:52:37.801883 2635 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.802130 kubelet[2635]: I0709 23:52:37.801894 2635 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.802130 kubelet[2635]: I0709 23:52:37.801906 2635 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c8d4173-032a-4c40-bcef-27f445bbf0eb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.802130 kubelet[2635]: I0709 23:52:37.801917 2635 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c8d4173-032a-4c40-bcef-27f445bbf0eb-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.802130 kubelet[2635]: I0709 23:52:37.801928 2635 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.802397 kubelet[2635]: I0709 23:52:37.801939 2635 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.802397 kubelet[2635]: I0709 23:52:37.801952 2635 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c8d4173-032a-4c40-bcef-27f445bbf0eb-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 9 23:52:37.980949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-856e99feb5f9aeee1c3362f92e011698540ed6429b454b4d2a9b35d272c33dd6-rootfs.mount: Deactivated successfully.
Jul 9 23:52:37.981096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13706cefb8217a46f0f6be1a26ce9f79890c22c693c153edf976fc728f930aec-rootfs.mount: Deactivated successfully.
Jul 9 23:52:37.981177 systemd[1]: var-lib-kubelet-pods-2a760908\x2da258\x2d45da\x2db084\x2d2f437acba1af-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh58dj.mount: Deactivated successfully.
Jul 9 23:52:37.981287 systemd[1]: var-lib-kubelet-pods-7c8d4173\x2d032a\x2d4c40\x2dbcef\x2d27f445bbf0eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6tqjh.mount: Deactivated successfully.
Jul 9 23:52:37.981391 systemd[1]: var-lib-kubelet-pods-7c8d4173\x2d032a\x2d4c40\x2dbcef\x2d27f445bbf0eb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 9 23:52:37.981476 systemd[1]: var-lib-kubelet-pods-7c8d4173\x2d032a\x2d4c40\x2dbcef\x2d27f445bbf0eb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 9 23:52:38.059078 kubelet[2635]: I0709 23:52:38.059027 2635 scope.go:117] "RemoveContainer" containerID="355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57"
Jul 9 23:52:38.065536 systemd[1]: Removed slice kubepods-besteffort-pod2a760908_a258_45da_b084_2f437acba1af.slice - libcontainer container kubepods-besteffort-pod2a760908_a258_45da_b084_2f437acba1af.slice.
Jul 9 23:52:38.067074 containerd[1516]: time="2025-07-09T23:52:38.065602890Z" level=info msg="RemoveContainer for \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\""
Jul 9 23:52:38.069701 systemd[1]: Removed slice kubepods-burstable-pod7c8d4173_032a_4c40_bcef_27f445bbf0eb.slice - libcontainer container kubepods-burstable-pod7c8d4173_032a_4c40_bcef_27f445bbf0eb.slice.
Jul 9 23:52:38.069820 systemd[1]: kubepods-burstable-pod7c8d4173_032a_4c40_bcef_27f445bbf0eb.slice: Consumed 7.266s CPU time, 124.4M memory peak, 728K read from disk, 13.3M written to disk.
Jul 9 23:52:38.075511 containerd[1516]: time="2025-07-09T23:52:38.075448854Z" level=info msg="RemoveContainer for \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\" returns successfully"
Jul 9 23:52:38.075960 kubelet[2635]: I0709 23:52:38.075925 2635 scope.go:117] "RemoveContainer" containerID="355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57"
Jul 9 23:52:38.076996 containerd[1516]: time="2025-07-09T23:52:38.076199147Z" level=error msg="ContainerStatus for \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\": not found"
Jul 9 23:52:38.084930 kubelet[2635]: E0709 23:52:38.084876 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\": not found" containerID="355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57"
Jul 9 23:52:38.085105 kubelet[2635]: I0709 23:52:38.084922 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57"} err="failed to get container status \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\": rpc error: code = NotFound desc = an error occurred when try to find container \"355d285e4418eb2506ff0ddebce112b5d44c5f198794f7089a4a8267eb536a57\": not found"
Jul 9 23:52:38.085105 kubelet[2635]: I0709 23:52:38.085000 2635 scope.go:117] "RemoveContainer" containerID="b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd"
Jul 9 23:52:38.086388 containerd[1516]: time="2025-07-09T23:52:38.086327518Z" level=info msg="RemoveContainer for \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\""
Jul 9 23:52:38.091774 containerd[1516]: time="2025-07-09T23:52:38.091715708Z" level=info msg="RemoveContainer for \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\" returns successfully"
Jul 9 23:52:38.092245 kubelet[2635]: I0709 23:52:38.092098 2635 scope.go:117] "RemoveContainer" containerID="38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463"
Jul 9 23:52:38.093482 containerd[1516]: time="2025-07-09T23:52:38.093127125Z" level=info msg="RemoveContainer for \"38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463\""
Jul 9 23:52:38.097056 containerd[1516]: time="2025-07-09T23:52:38.097019848Z" level=info msg="RemoveContainer for \"38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463\" returns successfully"
Jul 9 23:52:38.097293 kubelet[2635]: I0709 23:52:38.097258 2635 scope.go:117] "RemoveContainer" containerID="c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f"
Jul 9 23:52:38.098727 containerd[1516]: time="2025-07-09T23:52:38.098679306Z" level=info msg="RemoveContainer for \"c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f\""
Jul 9 23:52:38.103390 containerd[1516]: time="2025-07-09T23:52:38.103353952Z" level=info msg="RemoveContainer for \"c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f\" returns successfully"
Jul 9 23:52:38.103602 kubelet[2635]: I0709 23:52:38.103572 2635 scope.go:117] "RemoveContainer" containerID="b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4"
Jul 9 23:52:38.105043 containerd[1516]: time="2025-07-09T23:52:38.105017086Z" level=info msg="RemoveContainer for \"b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4\""
Jul 9 23:52:38.109303 containerd[1516]: time="2025-07-09T23:52:38.109264703Z" level=info msg="RemoveContainer for \"b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4\" returns successfully"
Jul 9 23:52:38.109532 kubelet[2635]: I0709 23:52:38.109501 2635 scope.go:117] "RemoveContainer" containerID="447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e"
Jul 9 23:52:38.119955 containerd[1516]: time="2025-07-09T23:52:38.119903381Z" level=info msg="RemoveContainer for \"447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e\""
Jul 9 23:52:38.123709 containerd[1516]: time="2025-07-09T23:52:38.123648915Z" level=info msg="RemoveContainer for \"447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e\" returns successfully"
Jul 9 23:52:38.123902 kubelet[2635]: I0709 23:52:38.123862 2635 scope.go:117] "RemoveContainer" containerID="b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd"
Jul 9 23:52:38.124121 containerd[1516]: time="2025-07-09T23:52:38.124069984Z" level=error msg="ContainerStatus for \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\": not found"
Jul 9 23:52:38.124314 kubelet[2635]: E0709 23:52:38.124282 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\": not found" containerID="b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd"
Jul 9 23:52:38.124373 kubelet[2635]: I0709 23:52:38.124323 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd"} err="failed to get container status \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\": rpc error: code = NotFound desc = an error occurred when try to find container \"b780f5a977d87083cfba247353af7b763ee7dcf6db2b6df4800e843292937edd\": not found"
Jul 9 23:52:38.124373 kubelet[2635]: I0709 23:52:38.124352 2635 scope.go:117] "RemoveContainer" containerID="38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463"
Jul 9 23:52:38.124548 containerd[1516]: time="2025-07-09T23:52:38.124521140Z" level=error msg="ContainerStatus for \"38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463\": not found"
Jul 9 23:52:38.124667 kubelet[2635]: E0709 23:52:38.124642 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463\": not found" containerID="38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463"
Jul 9 23:52:38.124733 kubelet[2635]: I0709 23:52:38.124669 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463"} err="failed to get container status \"38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463\": rpc error: code = NotFound desc = an error occurred when try to find container \"38c9c54f429eb2a50d125e5d80aa9aa576346423ed36288759ddf038aed3c463\": not found"
Jul 9 23:52:38.124733 kubelet[2635]: I0709 23:52:38.124690 2635 scope.go:117] "RemoveContainer" containerID="c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f"
Jul 9 23:52:38.124955 kubelet[2635]: E0709 23:52:38.124919 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f\": not found" containerID="c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f"
Jul 9 23:52:38.124955 kubelet[2635]: I0709 23:52:38.124943 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f"} err="failed to get container status \"c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f\": not found"
Jul 9 23:52:38.125024 containerd[1516]: time="2025-07-09T23:52:38.124842149Z" level=error msg="ContainerStatus for \"c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c05c3b040aa5b8f5d61684be721ddd050fbaaa9ed941f57b5baa5c0ceaedd97f\": not found"
Jul 9 23:52:38.125063 kubelet[2635]: I0709 23:52:38.124962 2635 scope.go:117] "RemoveContainer" containerID="b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4"
Jul 9 23:52:38.125137 containerd[1516]: time="2025-07-09T23:52:38.125104396Z" level=error msg="ContainerStatus for \"b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4\": not found"
Jul 9 23:52:38.125265 kubelet[2635]: E0709 23:52:38.125240 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4\": not found" containerID="b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4"
Jul 9 23:52:38.125344 kubelet[2635]: I0709 23:52:38.125269 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4"} err="failed to get container status \"b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b105c180bce9a34fee01e10ceff27e462d3cc5128ddd900c460e75e4a69219c4\": not found"
Jul 9 23:52:38.125344 kubelet[2635]: I0709 23:52:38.125288 2635 scope.go:117] "RemoveContainer" containerID="447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e"
Jul 9 23:52:38.125478 containerd[1516]: time="2025-07-09T23:52:38.125448148Z" level=error msg="ContainerStatus for \"447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e\": not found"
Jul 9 23:52:38.125640 kubelet[2635]: E0709 23:52:38.125603 2635 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e\": not found" containerID="447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e"
Jul 9 23:52:38.125640 kubelet[2635]: I0709 23:52:38.125634 2635 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e"} err="failed to get container status \"447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e\": rpc error: code = NotFound desc = an error occurred when try to find container \"447f1d0e24b00af0b73dde9fe5c427d10dc7d3de0b019c5333f3e23f6daaa15e\": not found"
Jul 9 23:52:38.323783 sshd[4345]: Connection closed by 10.0.0.1 port 38168
Jul 9 23:52:38.324390 sshd-session[4341]: pam_unix(sshd:session): session closed for user core
Jul 9 23:52:38.344496 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:38168.service: Deactivated successfully.
Jul 9 23:52:38.346908 systemd[1]: session-28.scope: Deactivated successfully.
Jul 9 23:52:38.347824 systemd-logind[1494]: Session 28 logged out. Waiting for processes to exit.
Jul 9 23:52:38.362252 systemd[1]: Started sshd@28-10.0.0.15:22-10.0.0.1:38176.service - OpenSSH per-connection server daemon (10.0.0.1:38176).
Jul 9 23:52:38.363290 systemd-logind[1494]: Removed session 28.
Jul 9 23:52:38.405255 sshd[4508]: Accepted publickey for core from 10.0.0.1 port 38176 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo
Jul 9 23:52:38.407590 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:52:38.413252 systemd-logind[1494]: New session 29 of user core.
Jul 9 23:52:38.429075 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 9 23:52:38.823626 kubelet[2635]: I0709 23:52:38.823572 2635 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a760908-a258-45da-b084-2f437acba1af" path="/var/lib/kubelet/pods/2a760908-a258-45da-b084-2f437acba1af/volumes"
Jul 9 23:52:38.824545 kubelet[2635]: I0709 23:52:38.824368 2635 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c8d4173-032a-4c40-bcef-27f445bbf0eb" path="/var/lib/kubelet/pods/7c8d4173-032a-4c40-bcef-27f445bbf0eb/volumes"
Jul 9 23:52:38.896545 sshd[4512]: Connection closed by 10.0.0.1 port 38176
Jul 9 23:52:38.897024 sshd-session[4508]: pam_unix(sshd:session): session closed for user core
Jul 9 23:52:38.912528 systemd[1]: sshd@28-10.0.0.15:22-10.0.0.1:38176.service: Deactivated successfully.
Jul 9 23:52:38.915256 systemd[1]: session-29.scope: Deactivated successfully.
Jul 9 23:52:38.919021 systemd-logind[1494]: Session 29 logged out. Waiting for processes to exit.
Jul 9 23:52:38.922712 kubelet[2635]: E0709 23:52:38.922657 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c8d4173-032a-4c40-bcef-27f445bbf0eb" containerName="mount-cgroup" Jul 9 23:52:38.922712 kubelet[2635]: E0709 23:52:38.922703 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c8d4173-032a-4c40-bcef-27f445bbf0eb" containerName="apply-sysctl-overwrites" Jul 9 23:52:38.922712 kubelet[2635]: E0709 23:52:38.922714 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2a760908-a258-45da-b084-2f437acba1af" containerName="cilium-operator" Jul 9 23:52:38.922897 kubelet[2635]: E0709 23:52:38.922724 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c8d4173-032a-4c40-bcef-27f445bbf0eb" containerName="mount-bpf-fs" Jul 9 23:52:38.922897 kubelet[2635]: E0709 23:52:38.922731 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c8d4173-032a-4c40-bcef-27f445bbf0eb" containerName="clean-cilium-state" Jul 9 23:52:38.922897 kubelet[2635]: E0709 23:52:38.922738 2635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c8d4173-032a-4c40-bcef-27f445bbf0eb" containerName="cilium-agent" Jul 9 23:52:38.922897 kubelet[2635]: I0709 23:52:38.922765 2635 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8d4173-032a-4c40-bcef-27f445bbf0eb" containerName="cilium-agent" Jul 9 23:52:38.922897 kubelet[2635]: I0709 23:52:38.922772 2635 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a760908-a258-45da-b084-2f437acba1af" containerName="cilium-operator" Jul 9 23:52:38.929265 systemd[1]: Started sshd@29-10.0.0.15:22-10.0.0.1:38188.service - OpenSSH per-connection server daemon (10.0.0.1:38188). Jul 9 23:52:38.933815 kubelet[2635]: W0709 23:52:38.933758 2635 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 9 23:52:38.933929 kubelet[2635]: E0709 23:52:38.933835 2635 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 9 23:52:38.933929 kubelet[2635]: W0709 23:52:38.933887 2635 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 9 23:52:38.933929 kubelet[2635]: E0709 23:52:38.933902 2635 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 9 23:52:38.934231 systemd-logind[1494]: Removed session 29. 
Jul 9 23:52:38.934507 kubelet[2635]: W0709 23:52:38.934273 2635 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jul 9 23:52:38.934507 kubelet[2635]: E0709 23:52:38.934289 2635 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jul 9 23:52:38.952683 systemd[1]: Created slice kubepods-burstable-podf62e4542_6589_40fd_8305_cd3f1dc49e59.slice - libcontainer container kubepods-burstable-podf62e4542_6589_40fd_8305_cd3f1dc49e59.slice.
Jul 9 23:52:39.001771 sshd[4524]: Accepted publickey for core from 10.0.0.1 port 38188 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo
Jul 9 23:52:39.003555 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:52:39.017105 systemd-logind[1494]: New session 30 of user core.
Jul 9 23:52:39.030096 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 9 23:52:39.086358 sshd[4527]: Connection closed by 10.0.0.1 port 38188
Jul 9 23:52:39.086689 sshd-session[4524]: pam_unix(sshd:session): session closed for user core
Jul 9 23:52:39.104208 systemd[1]: sshd@29-10.0.0.15:22-10.0.0.1:38188.service: Deactivated successfully.
Jul 9 23:52:39.106446 systemd[1]: session-30.scope: Deactivated successfully.
Jul 9 23:52:39.108268 systemd-logind[1494]: Session 30 logged out. Waiting for processes to exit.
Jul 9 23:52:39.108675 kubelet[2635]: I0709 23:52:39.108622 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f62e4542-6589-40fd-8305-cd3f1dc49e59-etc-cni-netd\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.108675 kubelet[2635]: I0709 23:52:39.108662 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f62e4542-6589-40fd-8305-cd3f1dc49e59-xtables-lock\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.108838 kubelet[2635]: I0709 23:52:39.108681 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f62e4542-6589-40fd-8305-cd3f1dc49e59-hubble-tls\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.108838 kubelet[2635]: I0709 23:52:39.108699 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgg8g\" (UniqueName: \"kubernetes.io/projected/f62e4542-6589-40fd-8305-cd3f1dc49e59-kube-api-access-sgg8g\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.108838 kubelet[2635]: I0709 23:52:39.108716 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f62e4542-6589-40fd-8305-cd3f1dc49e59-hostproc\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.108838 kubelet[2635]: I0709 23:52:39.108729 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f62e4542-6589-40fd-8305-cd3f1dc49e59-bpf-maps\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.108838 kubelet[2635]: I0709 23:52:39.108778 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f62e4542-6589-40fd-8305-cd3f1dc49e59-host-proc-sys-net\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.108838 kubelet[2635]: I0709 23:52:39.108795 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f62e4542-6589-40fd-8305-cd3f1dc49e59-cilium-config-path\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.109018 kubelet[2635]: I0709 23:52:39.108829 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f62e4542-6589-40fd-8305-cd3f1dc49e59-cni-path\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.109018 kubelet[2635]: I0709 23:52:39.108844 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f62e4542-6589-40fd-8305-cd3f1dc49e59-cilium-run\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.109018 kubelet[2635]: I0709 23:52:39.108858 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f62e4542-6589-40fd-8305-cd3f1dc49e59-cilium-cgroup\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.109018 kubelet[2635]: I0709 23:52:39.108872 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f62e4542-6589-40fd-8305-cd3f1dc49e59-clustermesh-secrets\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.109018 kubelet[2635]: I0709 23:52:39.108887 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f62e4542-6589-40fd-8305-cd3f1dc49e59-cilium-ipsec-secrets\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.109018 kubelet[2635]: I0709 23:52:39.108902 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f62e4542-6589-40fd-8305-cd3f1dc49e59-lib-modules\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.109192 kubelet[2635]: I0709 23:52:39.108916 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f62e4542-6589-40fd-8305-cd3f1dc49e59-host-proc-sys-kernel\") pod \"cilium-kx5cw\" (UID: \"f62e4542-6589-40fd-8305-cd3f1dc49e59\") " pod="kube-system/cilium-kx5cw"
Jul 9 23:52:39.117249 systemd[1]: Started sshd@30-10.0.0.15:22-10.0.0.1:38194.service - OpenSSH per-connection server daemon (10.0.0.1:38194).
Jul 9 23:52:39.118882 systemd-logind[1494]: Removed session 30.
Jul 9 23:52:39.158044 sshd[4533]: Accepted publickey for core from 10.0.0.1 port 38194 ssh2: RSA SHA256:7rMaG8pss/c64M22OW8iyhGUoJ1lUgBHmBtpuxeqljo
Jul 9 23:52:39.160001 sshd-session[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:52:39.165962 systemd-logind[1494]: New session 31 of user core.
Jul 9 23:52:39.174040 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 9 23:52:39.889388 kubelet[2635]: E0709 23:52:39.889303 2635 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 9 23:52:40.210790 kubelet[2635]: E0709 23:52:40.210556 2635 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Jul 9 23:52:40.211000 kubelet[2635]: E0709 23:52:40.210844 2635 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f62e4542-6589-40fd-8305-cd3f1dc49e59-cilium-config-path podName:f62e4542-6589-40fd-8305-cd3f1dc49e59 nodeName:}" failed. No retries permitted until 2025-07-09 23:52:40.710770459 +0000 UTC m=+95.987799570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/f62e4542-6589-40fd-8305-cd3f1dc49e59-cilium-config-path") pod "cilium-kx5cw" (UID: "f62e4542-6589-40fd-8305-cd3f1dc49e59") : failed to sync configmap cache: timed out waiting for the condition
Jul 9 23:52:40.758180 kubelet[2635]: E0709 23:52:40.758023 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:52:40.758864 containerd[1516]: time="2025-07-09T23:52:40.758710980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kx5cw,Uid:f62e4542-6589-40fd-8305-cd3f1dc49e59,Namespace:kube-system,Attempt:0,}"
Jul 9 23:52:40.791452 containerd[1516]: time="2025-07-09T23:52:40.791189552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 9 23:52:40.791452 containerd[1516]: time="2025-07-09T23:52:40.791254795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 9 23:52:40.791452 containerd[1516]: time="2025-07-09T23:52:40.791266558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 9 23:52:40.791452 containerd[1516]: time="2025-07-09T23:52:40.791352260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 9 23:52:40.815991 systemd[1]: Started cri-containerd-ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a.scope - libcontainer container ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a.
Jul 9 23:52:40.841036 containerd[1516]: time="2025-07-09T23:52:40.840919909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kx5cw,Uid:f62e4542-6589-40fd-8305-cd3f1dc49e59,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a\""
Jul 9 23:52:40.841749 kubelet[2635]: E0709 23:52:40.841707 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:52:40.843829 containerd[1516]: time="2025-07-09T23:52:40.843628805Z" level=info msg="CreateContainer within sandbox \"ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 9 23:52:40.883253 containerd[1516]: time="2025-07-09T23:52:40.883177551Z" level=info msg="CreateContainer within sandbox \"ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ffe7e67980eb8008599cee939713cf6a28b7d63ae9a70a8f220f5fe0ce8bdb8e\""
Jul 9 23:52:40.883841 containerd[1516]: time="2025-07-09T23:52:40.883737552Z" level=info msg="StartContainer for \"ffe7e67980eb8008599cee939713cf6a28b7d63ae9a70a8f220f5fe0ce8bdb8e\""
Jul 9 23:52:40.915231 systemd[1]: Started cri-containerd-ffe7e67980eb8008599cee939713cf6a28b7d63ae9a70a8f220f5fe0ce8bdb8e.scope - libcontainer container ffe7e67980eb8008599cee939713cf6a28b7d63ae9a70a8f220f5fe0ce8bdb8e.
Jul 9 23:52:40.954562 containerd[1516]: time="2025-07-09T23:52:40.954476697Z" level=info msg="StartContainer for \"ffe7e67980eb8008599cee939713cf6a28b7d63ae9a70a8f220f5fe0ce8bdb8e\" returns successfully"
Jul 9 23:52:40.972729 systemd[1]: cri-containerd-ffe7e67980eb8008599cee939713cf6a28b7d63ae9a70a8f220f5fe0ce8bdb8e.scope: Deactivated successfully.
Jul 9 23:52:41.019560 containerd[1516]: time="2025-07-09T23:52:41.019470122Z" level=info msg="shim disconnected" id=ffe7e67980eb8008599cee939713cf6a28b7d63ae9a70a8f220f5fe0ce8bdb8e namespace=k8s.io
Jul 9 23:52:41.019560 containerd[1516]: time="2025-07-09T23:52:41.019544704Z" level=warning msg="cleaning up after shim disconnected" id=ffe7e67980eb8008599cee939713cf6a28b7d63ae9a70a8f220f5fe0ce8bdb8e namespace=k8s.io
Jul 9 23:52:41.019560 containerd[1516]: time="2025-07-09T23:52:41.019557818Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 23:52:41.074427 kubelet[2635]: E0709 23:52:41.074379 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:52:41.076644 containerd[1516]: time="2025-07-09T23:52:41.076584503Z" level=info msg="CreateContainer within sandbox \"ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 9 23:52:41.100981 containerd[1516]: time="2025-07-09T23:52:41.100787987Z" level=info msg="CreateContainer within sandbox \"ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d1e0705583d49a637f9d124f5c4cd9a1295550449f03356479b179d105b3b12f\""
Jul 9 23:52:41.101802 containerd[1516]: time="2025-07-09T23:52:41.101703011Z" level=info msg="StartContainer for \"d1e0705583d49a637f9d124f5c4cd9a1295550449f03356479b179d105b3b12f\""
Jul 9 23:52:41.143093 systemd[1]: Started cri-containerd-d1e0705583d49a637f9d124f5c4cd9a1295550449f03356479b179d105b3b12f.scope - libcontainer container d1e0705583d49a637f9d124f5c4cd9a1295550449f03356479b179d105b3b12f.
Jul 9 23:52:41.183387 containerd[1516]: time="2025-07-09T23:52:41.183304824Z" level=info msg="StartContainer for \"d1e0705583d49a637f9d124f5c4cd9a1295550449f03356479b179d105b3b12f\" returns successfully"
Jul 9 23:52:41.192352 systemd[1]: cri-containerd-d1e0705583d49a637f9d124f5c4cd9a1295550449f03356479b179d105b3b12f.scope: Deactivated successfully.
Jul 9 23:52:41.232531 containerd[1516]: time="2025-07-09T23:52:41.232384536Z" level=info msg="shim disconnected" id=d1e0705583d49a637f9d124f5c4cd9a1295550449f03356479b179d105b3b12f namespace=k8s.io
Jul 9 23:52:41.232531 containerd[1516]: time="2025-07-09T23:52:41.232488183Z" level=warning msg="cleaning up after shim disconnected" id=d1e0705583d49a637f9d124f5c4cd9a1295550449f03356479b179d105b3b12f namespace=k8s.io
Jul 9 23:52:41.232531 containerd[1516]: time="2025-07-09T23:52:41.232500446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 23:52:42.080224 kubelet[2635]: E0709 23:52:42.080156 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:52:42.082565 containerd[1516]: time="2025-07-09T23:52:42.082496571Z" level=info msg="CreateContainer within sandbox \"ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 9 23:52:42.338700 containerd[1516]: time="2025-07-09T23:52:42.338270600Z" level=info msg="CreateContainer within sandbox \"ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b03c6031d5cbf9c3faf1c016f661abe89ba868caf0be7912f7a6b3ec44dd96a8\""
Jul 9 23:52:42.339391 containerd[1516]: time="2025-07-09T23:52:42.339284602Z" level=info msg="StartContainer for \"b03c6031d5cbf9c3faf1c016f661abe89ba868caf0be7912f7a6b3ec44dd96a8\""
Jul 9 23:52:42.380949 systemd[1]: Started cri-containerd-b03c6031d5cbf9c3faf1c016f661abe89ba868caf0be7912f7a6b3ec44dd96a8.scope - libcontainer container b03c6031d5cbf9c3faf1c016f661abe89ba868caf0be7912f7a6b3ec44dd96a8.
Jul 9 23:52:42.420464 containerd[1516]: time="2025-07-09T23:52:42.420400990Z" level=info msg="StartContainer for \"b03c6031d5cbf9c3faf1c016f661abe89ba868caf0be7912f7a6b3ec44dd96a8\" returns successfully"
Jul 9 23:52:42.426580 systemd[1]: cri-containerd-b03c6031d5cbf9c3faf1c016f661abe89ba868caf0be7912f7a6b3ec44dd96a8.scope: Deactivated successfully.
Jul 9 23:52:42.463995 containerd[1516]: time="2025-07-09T23:52:42.463913132Z" level=info msg="shim disconnected" id=b03c6031d5cbf9c3faf1c016f661abe89ba868caf0be7912f7a6b3ec44dd96a8 namespace=k8s.io
Jul 9 23:52:42.463995 containerd[1516]: time="2025-07-09T23:52:42.463986010Z" level=warning msg="cleaning up after shim disconnected" id=b03c6031d5cbf9c3faf1c016f661abe89ba868caf0be7912f7a6b3ec44dd96a8 namespace=k8s.io
Jul 9 23:52:42.463995 containerd[1516]: time="2025-07-09T23:52:42.463996600Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 23:52:42.779509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b03c6031d5cbf9c3faf1c016f661abe89ba868caf0be7912f7a6b3ec44dd96a8-rootfs.mount: Deactivated successfully.
Jul 9 23:52:43.084967 kubelet[2635]: E0709 23:52:43.084782 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:52:43.086606 containerd[1516]: time="2025-07-09T23:52:43.086555719Z" level=info msg="CreateContainer within sandbox \"ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 23:52:43.680834 containerd[1516]: time="2025-07-09T23:52:43.680748950Z" level=info msg="CreateContainer within sandbox \"ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c21e39a255aca767e60a9b169668548c23c9cf171d737c0ba4b2a15c48f000b7\""
Jul 9 23:52:43.681666 containerd[1516]: time="2025-07-09T23:52:43.681593800Z" level=info msg="StartContainer for \"c21e39a255aca767e60a9b169668548c23c9cf171d737c0ba4b2a15c48f000b7\""
Jul 9 23:52:43.721035 systemd[1]: Started cri-containerd-c21e39a255aca767e60a9b169668548c23c9cf171d737c0ba4b2a15c48f000b7.scope - libcontainer container c21e39a255aca767e60a9b169668548c23c9cf171d737c0ba4b2a15c48f000b7.
Jul 9 23:52:43.749707 systemd[1]: cri-containerd-c21e39a255aca767e60a9b169668548c23c9cf171d737c0ba4b2a15c48f000b7.scope: Deactivated successfully.
Jul 9 23:52:43.914526 containerd[1516]: time="2025-07-09T23:52:43.914039740Z" level=info msg="StartContainer for \"c21e39a255aca767e60a9b169668548c23c9cf171d737c0ba4b2a15c48f000b7\" returns successfully"
Jul 9 23:52:43.941313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c21e39a255aca767e60a9b169668548c23c9cf171d737c0ba4b2a15c48f000b7-rootfs.mount: Deactivated successfully.
Jul 9 23:52:44.089627 kubelet[2635]: E0709 23:52:44.089568 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:52:44.105471 containerd[1516]: time="2025-07-09T23:52:44.105400103Z" level=info msg="shim disconnected" id=c21e39a255aca767e60a9b169668548c23c9cf171d737c0ba4b2a15c48f000b7 namespace=k8s.io
Jul 9 23:52:44.105471 containerd[1516]: time="2025-07-09T23:52:44.105465898Z" level=warning msg="cleaning up after shim disconnected" id=c21e39a255aca767e60a9b169668548c23c9cf171d737c0ba4b2a15c48f000b7 namespace=k8s.io
Jul 9 23:52:44.105471 containerd[1516]: time="2025-07-09T23:52:44.105474965Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 23:52:44.890242 kubelet[2635]: E0709 23:52:44.890193 2635 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 9 23:52:45.096028 kubelet[2635]: E0709 23:52:45.095971 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:52:45.098911 containerd[1516]: time="2025-07-09T23:52:45.098875412Z" level=info msg="CreateContainer within sandbox \"ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 23:52:45.658971 containerd[1516]: time="2025-07-09T23:52:45.658911582Z" level=info msg="CreateContainer within sandbox \"ce49f2d61c17b787a145f64e9d71ef452971b319a2ddd3d65780618d1ff30d4a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"de380ba80c6796a08bb1bec1603f842a0f5fca886b58551c6ca635d19f93e531\""
Jul 9 23:52:45.659479 containerd[1516]: time="2025-07-09T23:52:45.659439723Z" level=info msg="StartContainer for \"de380ba80c6796a08bb1bec1603f842a0f5fca886b58551c6ca635d19f93e531\""
Jul 9 23:52:45.699061 systemd[1]: Started cri-containerd-de380ba80c6796a08bb1bec1603f842a0f5fca886b58551c6ca635d19f93e531.scope - libcontainer container de380ba80c6796a08bb1bec1603f842a0f5fca886b58551c6ca635d19f93e531.
Jul 9 23:52:45.815579 containerd[1516]: time="2025-07-09T23:52:45.815508944Z" level=info msg="StartContainer for \"de380ba80c6796a08bb1bec1603f842a0f5fca886b58551c6ca635d19f93e531\" returns successfully"
Jul 9 23:52:46.101725 kubelet[2635]: E0709 23:52:46.101493 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:52:46.303843 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 9 23:52:47.104096 kubelet[2635]: E0709 23:52:47.104053 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:52:47.603582 kubelet[2635]: I0709 23:52:47.603485 2635 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-09T23:52:47Z","lastTransitionTime":"2025-07-09T23:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 9 23:52:49.770751 systemd-networkd[1428]: lxc_health: Link UP
Jul 9 23:52:49.776205 systemd-networkd[1428]: lxc_health: Gained carrier
Jul 9 23:52:50.761282 kubelet[2635]: E0709 23:52:50.761227 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:52:50.789554 kubelet[2635]: I0709 23:52:50.789451 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kx5cw" podStartSLOduration=12.789433607 podStartE2EDuration="12.789433607s" podCreationTimestamp="2025-07-09 23:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:52:46.124165497 +0000 UTC m=+101.401194608" watchObservedRunningTime="2025-07-09 23:52:50.789433607 +0000 UTC m=+106.066462718"
Jul 9 23:52:51.113033 kubelet[2635]: E0709 23:52:51.112962 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:52:51.157058 systemd-networkd[1428]: lxc_health: Gained IPv6LL
Jul 9 23:52:55.820974 kubelet[2635]: E0709 23:52:55.820875 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:52:56.780263 sshd[4536]: Connection closed by 10.0.0.1 port 38194
Jul 9 23:52:56.780779 sshd-session[4533]: pam_unix(sshd:session): session closed for user core
Jul 9 23:52:56.785522 systemd[1]: sshd@30-10.0.0.15:22-10.0.0.1:38194.service: Deactivated successfully.
Jul 9 23:52:56.787869 systemd[1]: session-31.scope: Deactivated successfully.
Jul 9 23:52:56.788620 systemd-logind[1494]: Session 31 logged out. Waiting for processes to exit.
Jul 9 23:52:56.789648 systemd-logind[1494]: Removed session 31.