Jul 14 22:20:01.925272 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 20:23:49 -00 2025
Jul 14 22:20:01.925292 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:20:01.925303 kernel: BIOS-provided physical RAM map:
Jul 14 22:20:01.925310 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 14 22:20:01.925316 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 14 22:20:01.925322 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 14 22:20:01.925329 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 14 22:20:01.925348 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 14 22:20:01.925355 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 14 22:20:01.925364 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 14 22:20:01.925370 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 14 22:20:01.925376 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 14 22:20:01.925382 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 14 22:20:01.925388 kernel: NX (Execute Disable) protection: active
Jul 14 22:20:01.925396 kernel: APIC: Static calls initialized
Jul 14 22:20:01.925405 kernel: SMBIOS 2.8 present.
Jul 14 22:20:01.925412 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 14 22:20:01.925418 kernel: Hypervisor detected: KVM
Jul 14 22:20:01.925425 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 14 22:20:01.925431 kernel: kvm-clock: using sched offset of 2202315714 cycles
Jul 14 22:20:01.925438 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 14 22:20:01.925445 kernel: tsc: Detected 2794.748 MHz processor
Jul 14 22:20:01.925453 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 14 22:20:01.925460 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 14 22:20:01.925467 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 14 22:20:01.925476 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 14 22:20:01.925483 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 14 22:20:01.925489 kernel: Using GB pages for direct mapping
Jul 14 22:20:01.925496 kernel: ACPI: Early table checksum verification disabled
Jul 14 22:20:01.925504 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 14 22:20:01.925513 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:20:01.925521 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:20:01.925530 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:20:01.925541 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 14 22:20:01.925549 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:20:01.925558 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:20:01.925567 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:20:01.925575 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:20:01.925584 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 14 22:20:01.925602 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 14 22:20:01.925615 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 14 22:20:01.925627 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 14 22:20:01.925636 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 14 22:20:01.925645 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 14 22:20:01.925654 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 14 22:20:01.925663 kernel: No NUMA configuration found
Jul 14 22:20:01.925672 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 14 22:20:01.925681 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 14 22:20:01.925693 kernel: Zone ranges:
Jul 14 22:20:01.925702 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 22:20:01.925711 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 14 22:20:01.925720 kernel: Normal empty
Jul 14 22:20:01.925729 kernel: Movable zone start for each node
Jul 14 22:20:01.925736 kernel: Early memory node ranges
Jul 14 22:20:01.925743 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 14 22:20:01.925750 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 14 22:20:01.925757 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 14 22:20:01.925767 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 22:20:01.925774 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 14 22:20:01.925781 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 14 22:20:01.925788 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 14 22:20:01.925795 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 14 22:20:01.925802 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 14 22:20:01.925809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 14 22:20:01.925816 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 14 22:20:01.925824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 14 22:20:01.925833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 14 22:20:01.925840 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 14 22:20:01.925847 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 22:20:01.925854 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 14 22:20:01.925861 kernel: TSC deadline timer available
Jul 14 22:20:01.925868 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 14 22:20:01.925876 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 14 22:20:01.925883 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 14 22:20:01.925890 kernel: kvm-guest: setup PV sched yield
Jul 14 22:20:01.925899 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 14 22:20:01.925906 kernel: Booting paravirtualized kernel on KVM
Jul 14 22:20:01.925914 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 22:20:01.925921 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 14 22:20:01.925929 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 14 22:20:01.925936 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 14 22:20:01.925943 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 14 22:20:01.925950 kernel: kvm-guest: PV spinlocks enabled
Jul 14 22:20:01.925958 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 14 22:20:01.925966 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:20:01.925976 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 22:20:01.925983 kernel: random: crng init done
Jul 14 22:20:01.925990 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 22:20:01.925997 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 22:20:01.926004 kernel: Fallback order for Node 0: 0
Jul 14 22:20:01.926012 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 14 22:20:01.926019 kernel: Policy zone: DMA32
Jul 14 22:20:01.926026 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 22:20:01.926035 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 136900K reserved, 0K cma-reserved)
Jul 14 22:20:01.926043 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 22:20:01.926050 kernel: ftrace: allocating 37970 entries in 149 pages
Jul 14 22:20:01.926057 kernel: ftrace: allocated 149 pages with 4 groups
Jul 14 22:20:01.926064 kernel: Dynamic Preempt: voluntary
Jul 14 22:20:01.926071 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 22:20:01.926079 kernel: rcu: RCU event tracing is enabled.
Jul 14 22:20:01.926087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 22:20:01.926094 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 22:20:01.926103 kernel: Rude variant of Tasks RCU enabled.
Jul 14 22:20:01.926111 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 22:20:01.926118 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 22:20:01.926125 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 22:20:01.926132 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 14 22:20:01.926139 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 22:20:01.926146 kernel: Console: colour VGA+ 80x25
Jul 14 22:20:01.926154 kernel: printk: console [ttyS0] enabled
Jul 14 22:20:01.926161 kernel: ACPI: Core revision 20230628
Jul 14 22:20:01.926170 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 14 22:20:01.926177 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 22:20:01.926185 kernel: x2apic enabled
Jul 14 22:20:01.926192 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 14 22:20:01.926199 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 14 22:20:01.926206 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 14 22:20:01.926214 kernel: kvm-guest: setup PV IPIs
Jul 14 22:20:01.926230 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 22:20:01.926238 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 14 22:20:01.926246 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 14 22:20:01.926253 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 14 22:20:01.926260 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 14 22:20:01.926270 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 14 22:20:01.926278 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 22:20:01.926285 kernel: Spectre V2 : Mitigation: Retpolines
Jul 14 22:20:01.926293 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 14 22:20:01.926303 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 14 22:20:01.926310 kernel: RETBleed: Mitigation: untrained return thunk
Jul 14 22:20:01.926318 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 14 22:20:01.926326 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 14 22:20:01.926333 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 14 22:20:01.926355 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 14 22:20:01.926362 kernel: x86/bugs: return thunk changed
Jul 14 22:20:01.926370 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 14 22:20:01.926377 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 14 22:20:01.926387 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 14 22:20:01.926395 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 14 22:20:01.926402 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 14 22:20:01.926410 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 14 22:20:01.926418 kernel: Freeing SMP alternatives memory: 32K
Jul 14 22:20:01.926425 kernel: pid_max: default: 32768 minimum: 301
Jul 14 22:20:01.926434 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 22:20:01.926443 kernel: landlock: Up and running.
Jul 14 22:20:01.926452 kernel: SELinux: Initializing.
Jul 14 22:20:01.926465 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:20:01.926474 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:20:01.926484 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 14 22:20:01.926493 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:20:01.926503 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:20:01.926512 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:20:01.926522 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 14 22:20:01.926531 kernel: ... version: 0
Jul 14 22:20:01.926543 kernel: ... bit width: 48
Jul 14 22:20:01.926552 kernel: ... generic registers: 6
Jul 14 22:20:01.926561 kernel: ... value mask: 0000ffffffffffff
Jul 14 22:20:01.926570 kernel: ... max period: 00007fffffffffff
Jul 14 22:20:01.926580 kernel: ... fixed-purpose events: 0
Jul 14 22:20:01.926588 kernel: ... event mask: 000000000000003f
Jul 14 22:20:01.926602 kernel: signal: max sigframe size: 1776
Jul 14 22:20:01.926609 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 22:20:01.926617 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 22:20:01.926625 kernel: smp: Bringing up secondary CPUs ...
Jul 14 22:20:01.926634 kernel: smpboot: x86: Booting SMP configuration:
Jul 14 22:20:01.926642 kernel: .... node #0, CPUs: #1 #2 #3
Jul 14 22:20:01.926649 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 22:20:01.926657 kernel: smpboot: Max logical packages: 1
Jul 14 22:20:01.926664 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 14 22:20:01.926671 kernel: devtmpfs: initialized
Jul 14 22:20:01.926679 kernel: x86/mm: Memory block size: 128MB
Jul 14 22:20:01.926686 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 22:20:01.926694 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 22:20:01.926703 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 22:20:01.926711 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 22:20:01.926718 kernel: audit: initializing netlink subsys (disabled)
Jul 14 22:20:01.926726 kernel: audit: type=2000 audit(1752531601.517:1): state=initialized audit_enabled=0 res=1
Jul 14 22:20:01.926733 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 22:20:01.926740 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 14 22:20:01.926748 kernel: cpuidle: using governor menu
Jul 14 22:20:01.926756 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 22:20:01.926763 kernel: dca service started, version 1.12.1
Jul 14 22:20:01.926773 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 14 22:20:01.926781 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 14 22:20:01.926788 kernel: PCI: Using configuration type 1 for base access
Jul 14 22:20:01.926796 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 14 22:20:01.926803 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 22:20:01.926811 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 22:20:01.926818 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 22:20:01.926826 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 22:20:01.926833 kernel: ACPI: Added _OSI(Module Device)
Jul 14 22:20:01.926842 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 22:20:01.926850 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 22:20:01.926857 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 22:20:01.926865 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 14 22:20:01.926872 kernel: ACPI: Interpreter enabled
Jul 14 22:20:01.926879 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 14 22:20:01.926887 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 14 22:20:01.926894 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 14 22:20:01.926902 kernel: PCI: Using E820 reservations for host bridge windows
Jul 14 22:20:01.926911 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 14 22:20:01.926919 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 22:20:01.927100 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 22:20:01.927233 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 14 22:20:01.927386 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 14 22:20:01.927407 kernel: PCI host bridge to bus 0000:00
Jul 14 22:20:01.927540 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 14 22:20:01.927698 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 14 22:20:01.927819 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 14 22:20:01.927932 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 14 22:20:01.928043 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 14 22:20:01.928155 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 14 22:20:01.928265 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 22:20:01.928420 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 14 22:20:01.928561 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 14 22:20:01.928694 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 14 22:20:01.928816 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 14 22:20:01.928938 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 14 22:20:01.929059 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 14 22:20:01.929189 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 22:20:01.929315 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 14 22:20:01.929453 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 14 22:20:01.929576 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 14 22:20:01.929713 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 14 22:20:01.929836 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 14 22:20:01.929959 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 14 22:20:01.930080 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 14 22:20:01.930214 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 14 22:20:01.930349 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 14 22:20:01.930476 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 14 22:20:01.930616 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 14 22:20:01.930747 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 14 22:20:01.930900 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 14 22:20:01.931033 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 14 22:20:01.931174 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 14 22:20:01.931296 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 14 22:20:01.931439 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 14 22:20:01.931573 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 14 22:20:01.931705 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 14 22:20:01.931715 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 14 22:20:01.931723 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 14 22:20:01.931735 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 14 22:20:01.931743 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 14 22:20:01.931751 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 14 22:20:01.931758 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 14 22:20:01.931766 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 14 22:20:01.931773 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 14 22:20:01.931781 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 14 22:20:01.931788 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 14 22:20:01.931796 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 14 22:20:01.931806 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 14 22:20:01.931813 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 14 22:20:01.931821 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 14 22:20:01.931828 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 14 22:20:01.931836 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 14 22:20:01.931843 kernel: iommu: Default domain type: Translated
Jul 14 22:20:01.931851 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 14 22:20:01.931858 kernel: PCI: Using ACPI for IRQ routing
Jul 14 22:20:01.931866 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 14 22:20:01.931875 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 14 22:20:01.931883 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 14 22:20:01.932011 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 14 22:20:01.932134 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 14 22:20:01.932255 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 14 22:20:01.932266 kernel: vgaarb: loaded
Jul 14 22:20:01.932273 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 14 22:20:01.932281 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 14 22:20:01.932292 kernel: clocksource: Switched to clocksource kvm-clock
Jul 14 22:20:01.932299 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 22:20:01.932307 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 22:20:01.932314 kernel: pnp: PnP ACPI init
Jul 14 22:20:01.932496 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 14 22:20:01.932508 kernel: pnp: PnP ACPI: found 6 devices
Jul 14 22:20:01.932516 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 14 22:20:01.932524 kernel: NET: Registered PF_INET protocol family
Jul 14 22:20:01.932535 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 22:20:01.932542 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 22:20:01.932550 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 22:20:01.932558 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 22:20:01.932567 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 22:20:01.932575 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 22:20:01.932583 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:20:01.932592 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:20:01.932610 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 22:20:01.932621 kernel: NET: Registered PF_XDP protocol family
Jul 14 22:20:01.932735 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 14 22:20:01.932847 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 14 22:20:01.932957 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 14 22:20:01.933068 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 14 22:20:01.933178 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 14 22:20:01.933290 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 14 22:20:01.933300 kernel: PCI: CLS 0 bytes, default 64
Jul 14 22:20:01.933311 kernel: Initialise system trusted keyrings
Jul 14 22:20:01.933319 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 22:20:01.933326 kernel: Key type asymmetric registered
Jul 14 22:20:01.933334 kernel: Asymmetric key parser 'x509' registered
Jul 14 22:20:01.933354 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 14 22:20:01.933362 kernel: io scheduler mq-deadline registered
Jul 14 22:20:01.933369 kernel: io scheduler kyber registered
Jul 14 22:20:01.933377 kernel: io scheduler bfq registered
Jul 14 22:20:01.933384 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 14 22:20:01.933395 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 14 22:20:01.933403 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 14 22:20:01.933411 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 14 22:20:01.933418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 22:20:01.933426 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 14 22:20:01.933434 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 14 22:20:01.933441 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 14 22:20:01.933449 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 14 22:20:01.933579 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 14 22:20:01.933601 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 14 22:20:01.933716 kernel: rtc_cmos 00:04: registered as rtc0
Jul 14 22:20:01.933831 kernel: rtc_cmos 00:04: setting system clock to 2025-07-14T22:20:01 UTC (1752531601)
Jul 14 22:20:01.933945 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 14 22:20:01.933955 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 14 22:20:01.933963 kernel: NET: Registered PF_INET6 protocol family
Jul 14 22:20:01.933970 kernel: Segment Routing with IPv6
Jul 14 22:20:01.933978 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 22:20:01.933989 kernel: NET: Registered PF_PACKET protocol family
Jul 14 22:20:01.933996 kernel: Key type dns_resolver registered
Jul 14 22:20:01.934003 kernel: IPI shorthand broadcast: enabled
Jul 14 22:20:01.934011 kernel: sched_clock: Marking stable (580002711, 105661585)->(698479568, -12815272)
Jul 14 22:20:01.934019 kernel: registered taskstats version 1
Jul 14 22:20:01.934026 kernel: Loading compiled-in X.509 certificates
Jul 14 22:20:01.934034 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: ff10e110ca3923b510cf0133f4e9f48dd636b870'
Jul 14 22:20:01.934041 kernel: Key type .fscrypt registered
Jul 14 22:20:01.934048 kernel: Key type fscrypt-provisioning registered
Jul 14 22:20:01.934058 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 22:20:01.934066 kernel: ima: Allocated hash algorithm: sha1
Jul 14 22:20:01.934073 kernel: ima: No architecture policies found
Jul 14 22:20:01.934080 kernel: clk: Disabling unused clocks
Jul 14 22:20:01.934088 kernel: Freeing unused kernel image (initmem) memory: 42876K
Jul 14 22:20:01.934095 kernel: Write protecting the kernel read-only data: 36864k
Jul 14 22:20:01.934103 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 14 22:20:01.934110 kernel: Run /init as init process
Jul 14 22:20:01.934118 kernel: with arguments:
Jul 14 22:20:01.934128 kernel: /init
Jul 14 22:20:01.934135 kernel: with environment:
Jul 14 22:20:01.934142 kernel: HOME=/
Jul 14 22:20:01.934149 kernel: TERM=linux
Jul 14 22:20:01.934157 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 22:20:01.934166 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 22:20:01.934176 systemd[1]: Detected virtualization kvm.
Jul 14 22:20:01.934184 systemd[1]: Detected architecture x86-64.
Jul 14 22:20:01.934194 systemd[1]: Running in initrd.
Jul 14 22:20:01.934201 systemd[1]: No hostname configured, using default hostname.
Jul 14 22:20:01.934209 systemd[1]: Hostname set to .
Jul 14 22:20:01.934217 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 22:20:01.934225 systemd[1]: Queued start job for default target initrd.target.
Jul 14 22:20:01.934233 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:20:01.934241 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:20:01.934250 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 22:20:01.934260 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 22:20:01.934281 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 22:20:01.934292 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 22:20:01.934302 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 22:20:01.934313 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 22:20:01.934321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:20:01.934329 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:20:01.934351 systemd[1]: Reached target paths.target - Path Units.
Jul 14 22:20:01.934359 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 22:20:01.934367 systemd[1]: Reached target swap.target - Swaps.
Jul 14 22:20:01.934376 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 22:20:01.934384 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 22:20:01.934392 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 22:20:01.934404 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 22:20:01.934412 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 22:20:01.934420 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:20:01.934428 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:20:01.934437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:20:01.934445 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 22:20:01.934453 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 22:20:01.934462 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 22:20:01.934470 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 22:20:01.934483 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 22:20:01.934491 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 22:20:01.934499 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 22:20:01.934507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:20:01.934516 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 22:20:01.934524 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:20:01.934532 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 22:20:01.934561 systemd-journald[190]: Collecting audit messages is disabled.
Jul 14 22:20:01.934584 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 22:20:01.934603 systemd-journald[190]: Journal started
Jul 14 22:20:01.934626 systemd-journald[190]: Runtime Journal (/run/log/journal/4de90a77db0449de86f5ea81782ff1c9) is 6.0M, max 48.4M, 42.3M free.
Jul 14 22:20:01.919473 systemd-modules-load[193]: Inserted module 'overlay'
Jul 14 22:20:01.954898 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 22:20:01.954914 kernel: Bridge firewalling registered
Jul 14 22:20:01.946909 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jul 14 22:20:01.956588 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 22:20:01.957967 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:20:01.960280 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:20:01.962655 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 22:20:01.979537 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:20:01.982643 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:20:01.985211 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 22:20:01.989655 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 22:20:01.997015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:20:02.000129 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:20:02.003511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:20:02.004989 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:20:02.013501 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 22:20:02.015958 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 22:20:02.027367 dracut-cmdline[227]: dracut-dracut-053
Jul 14 22:20:02.030444 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:20:02.049044 systemd-resolved[231]: Positive Trust Anchors:
Jul 14 22:20:02.049063 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 22:20:02.049094 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 22:20:02.051568 systemd-resolved[231]: Defaulting to hostname 'linux'.
Jul 14 22:20:02.052698 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 22:20:02.058783 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:20:02.116373 kernel: SCSI subsystem initialized
Jul 14 22:20:02.125369 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 22:20:02.136373 kernel: iscsi: registered transport (tcp)
Jul 14 22:20:02.157410 kernel: iscsi: registered transport (qla4xxx)
Jul 14 22:20:02.157478 kernel: QLogic iSCSI HBA Driver
Jul 14 22:20:02.209447 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 22:20:02.218462 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 22:20:02.245328 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 22:20:02.245408 kernel: device-mapper: uevent: version 1.0.3
Jul 14 22:20:02.245421 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 14 22:20:02.287369 kernel: raid6: avx2x4 gen() 29998 MB/s
Jul 14 22:20:02.304363 kernel: raid6: avx2x2 gen() 30634 MB/s
Jul 14 22:20:02.321410 kernel: raid6: avx2x1 gen() 25942 MB/s
Jul 14 22:20:02.321442 kernel: raid6: using algorithm avx2x2 gen() 30634 MB/s
Jul 14 22:20:02.339403 kernel: raid6: .... xor() 19843 MB/s, rmw enabled
Jul 14 22:20:02.339433 kernel: raid6: using avx2x2 recovery algorithm
Jul 14 22:20:02.359364 kernel: xor: automatically using best checksumming function avx
Jul 14 22:20:02.514370 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 22:20:02.527056 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 22:20:02.539466 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:20:02.550615 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jul 14 22:20:02.555152 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:20:02.568522 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 22:20:02.582872 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jul 14 22:20:02.614464 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 22:20:02.626455 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 22:20:02.691525 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:20:02.702563 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 22:20:02.714268 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 22:20:02.717601 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 22:20:02.720117 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:20:02.722684 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 22:20:02.731972 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 22:20:02.735528 kernel: cryptd: max_cpu_qlen set to 1000
Jul 14 22:20:02.735549 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 14 22:20:02.737564 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 22:20:02.743122 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 22:20:02.743164 kernel: GPT:9289727 != 19775487
Jul 14 22:20:02.743176 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 22:20:02.743187 kernel: GPT:9289727 != 19775487
Jul 14 22:20:02.743196 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 22:20:02.743207 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:20:02.743055 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 22:20:02.754363 kernel: libata version 3.00 loaded.
Jul 14 22:20:02.758962 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 14 22:20:02.758984 kernel: AES CTR mode by8 optimization enabled
Jul 14 22:20:02.761091 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 22:20:02.761213 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:20:02.764783 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:20:02.768411 kernel: ahci 0000:00:1f.2: version 3.0
Jul 14 22:20:02.768595 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 14 22:20:02.768390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:20:02.774581 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 14 22:20:02.774780 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 14 22:20:02.774922 kernel: scsi host0: ahci
Jul 14 22:20:02.775080 kernel: scsi host1: ahci
Jul 14 22:20:02.775224 kernel: scsi host2: ahci
Jul 14 22:20:02.768521 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:20:02.771385 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:20:02.779565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:20:02.782662 kernel: scsi host3: ahci
Jul 14 22:20:02.784525 kernel: scsi host4: ahci
Jul 14 22:20:02.789197 kernel: scsi host5: ahci
Jul 14 22:20:02.789402 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jul 14 22:20:02.789415 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jul 14 22:20:02.789425 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jul 14 22:20:02.789440 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jul 14 22:20:02.789450 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jul 14 22:20:02.789460 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jul 14 22:20:02.793292 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 22:20:02.800832 kernel: BTRFS: device fsid d23b6972-ad36-4741-bf36-4d440b923127 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (475)
Jul 14 22:20:02.800856 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472)
Jul 14 22:20:02.813593 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 22:20:02.843745 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 22:20:02.845150 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:20:02.851400 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 22:20:02.852649 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 22:20:02.864541 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 22:20:02.866552 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:20:02.874372 disk-uuid[556]: Primary Header is updated.
Jul 14 22:20:02.874372 disk-uuid[556]: Secondary Entries is updated.
Jul 14 22:20:02.874372 disk-uuid[556]: Secondary Header is updated.
Jul 14 22:20:02.879377 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:20:02.883368 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:20:02.884007 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:20:03.100374 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 14 22:20:03.100448 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 14 22:20:03.101365 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 14 22:20:03.102373 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 14 22:20:03.102465 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 14 22:20:03.103370 kernel: ata3.00: applying bridge limits
Jul 14 22:20:03.103385 kernel: ata3.00: configured for UDMA/100
Jul 14 22:20:03.104371 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 14 22:20:03.108364 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 14 22:20:03.108391 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 14 22:20:03.149368 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 14 22:20:03.149668 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 14 22:20:03.163380 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 14 22:20:03.884373 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:20:03.884426 disk-uuid[559]: The operation has completed successfully.
Jul 14 22:20:03.907411 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 22:20:03.907529 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 22:20:03.943520 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 22:20:03.946843 sh[592]: Success
Jul 14 22:20:03.958354 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 14 22:20:03.991429 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 22:20:04.003780 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 22:20:04.008143 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 22:20:04.019063 kernel: BTRFS info (device dm-0): first mount of filesystem d23b6972-ad36-4741-bf36-4d440b923127
Jul 14 22:20:04.019097 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:20:04.019108 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 14 22:20:04.020137 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 14 22:20:04.020928 kernel: BTRFS info (device dm-0): using free space tree
Jul 14 22:20:04.026711 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 22:20:04.029033 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 22:20:04.043477 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 22:20:04.046047 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 22:20:04.053932 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:20:04.053966 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:20:04.053977 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:20:04.057447 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:20:04.066731 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 14 22:20:04.068575 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:20:04.161376 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 22:20:04.176465 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 22:20:04.229981 systemd-networkd[770]: lo: Link UP
Jul 14 22:20:04.229993 systemd-networkd[770]: lo: Gained carrier
Jul 14 22:20:04.231922 systemd-networkd[770]: Enumeration completed
Jul 14 22:20:04.232408 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:20:04.232413 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 22:20:04.232683 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 22:20:04.236769 systemd-networkd[770]: eth0: Link UP
Jul 14 22:20:04.236780 systemd-networkd[770]: eth0: Gained carrier
Jul 14 22:20:04.236793 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:20:04.241389 systemd[1]: Reached target network.target - Network.
Jul 14 22:20:04.265383 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 22:20:04.506664 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 22:20:04.542481 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 22:20:04.594860 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.139
Jul 14 22:20:04.594869 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Jul 14 22:20:04.643501 ignition[775]: Ignition 2.19.0
Jul 14 22:20:04.643524 ignition[775]: Stage: fetch-offline
Jul 14 22:20:04.643572 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:20:04.643582 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:20:04.643686 ignition[775]: parsed url from cmdline: ""
Jul 14 22:20:04.643692 ignition[775]: no config URL provided
Jul 14 22:20:04.643698 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 22:20:04.643712 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Jul 14 22:20:04.643747 ignition[775]: op(1): [started] loading QEMU firmware config module
Jul 14 22:20:04.643754 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 22:20:04.655756 ignition[775]: op(1): [finished] loading QEMU firmware config module
Jul 14 22:20:04.692936 ignition[775]: parsing config with SHA512: b19d0c2b14180dd2c9c99097d2c422b18338df1b455cd112805895d904c12d8fc4fcd4127f9984020b4768cd05483910b8eda89e7453c60fce133a57cb7760e6
Jul 14 22:20:04.697958 unknown[775]: fetched base config from "system"
Jul 14 22:20:04.697971 unknown[775]: fetched user config from "qemu"
Jul 14 22:20:04.698361 ignition[775]: fetch-offline: fetch-offline passed
Jul 14 22:20:04.698425 ignition[775]: Ignition finished successfully
Jul 14 22:20:04.701586 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 22:20:04.704013 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 22:20:04.711610 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 22:20:04.724378 ignition[784]: Ignition 2.19.0
Jul 14 22:20:04.724388 ignition[784]: Stage: kargs
Jul 14 22:20:04.724573 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:20:04.724584 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:20:04.725447 ignition[784]: kargs: kargs passed
Jul 14 22:20:04.725496 ignition[784]: Ignition finished successfully
Jul 14 22:20:04.775631 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 22:20:04.791524 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 22:20:04.803962 ignition[792]: Ignition 2.19.0
Jul 14 22:20:04.803973 ignition[792]: Stage: disks
Jul 14 22:20:04.804149 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:20:04.804160 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:20:04.807846 ignition[792]: disks: disks passed
Jul 14 22:20:04.807896 ignition[792]: Ignition finished successfully
Jul 14 22:20:04.811041 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 22:20:04.813159 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 22:20:04.813611 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 22:20:04.813924 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 22:20:04.814250 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 22:20:04.814740 systemd[1]: Reached target basic.target - Basic System.
Jul 14 22:20:04.832481 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 22:20:04.928605 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 14 22:20:04.982315 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 22:20:04.995578 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 22:20:05.081375 kernel: EXT4-fs (vda9): mounted filesystem dda007d3-640b-4d11-976f-3b761ca7aabd r/w with ordered data mode. Quota mode: none.
Jul 14 22:20:05.082217 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 22:20:05.083235 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 22:20:05.092529 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 22:20:05.094747 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 22:20:05.096159 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 22:20:05.096207 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 22:20:05.103581 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Jul 14 22:20:05.096236 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 22:20:05.107551 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:20:05.107565 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:20:05.107576 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:20:05.109365 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:20:05.110706 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 22:20:05.140596 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 22:20:05.142249 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 14 22:20:05.178616 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 22:20:05.183651 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Jul 14 22:20:05.188420 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 22:20:05.192316 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 22:20:05.275892 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 14 22:20:05.282542 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 14 22:20:05.285798 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 14 22:20:05.289590 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 14 22:20:05.290946 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:20:05.311443 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 22:20:05.451928 ignition[930]: INFO : Ignition 2.19.0
Jul 14 22:20:05.451928 ignition[930]: INFO : Stage: mount
Jul 14 22:20:05.453590 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:20:05.453590 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:20:05.453590 ignition[930]: INFO : mount: mount passed
Jul 14 22:20:05.453590 ignition[930]: INFO : Ignition finished successfully
Jul 14 22:20:05.459418 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 14 22:20:05.471525 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 14 22:20:05.605629 systemd-networkd[770]: eth0: Gained IPv6LL
Jul 14 22:20:06.091521 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 22:20:06.098611 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Jul 14 22:20:06.098634 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:20:06.098646 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:20:06.099457 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:20:06.102363 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:20:06.103882 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 22:20:06.123235 ignition[956]: INFO : Ignition 2.19.0
Jul 14 22:20:06.123235 ignition[956]: INFO : Stage: files
Jul 14 22:20:06.124868 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:20:06.124868 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:20:06.124868 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 22:20:06.128398 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 22:20:06.128398 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 22:20:06.133157 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 22:20:06.134634 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 22:20:06.136272 unknown[956]: wrote ssh authorized keys file for user: core
Jul 14 22:20:06.137426 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 22:20:06.139030 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 14 22:20:06.141060 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 14 22:20:16.209191 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 14 22:20:16.343915 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 14 22:20:16.345969 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 14 22:20:16.345969 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 14 22:20:19.618451 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 14 22:20:19.689492 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 14 22:20:19.689492 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 14 22:20:19.693246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 14 22:20:30.366230 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 14 22:20:30.905613 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 14 22:20:30.905613 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 14 22:20:30.909088 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:20:30.909088 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:20:30.909088 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 14 22:20:30.909088 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 14 22:20:30.909088 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 22:20:30.909088 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 22:20:30.909088 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 14 22:20:30.909088 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 22:20:30.934301 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 22:20:30.938822 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 22:20:30.940476 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 22:20:30.940476 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 22:20:30.943201 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 22:20:30.944572 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 22:20:30.946273 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 22:20:30.947855 ignition[956]: INFO : files: files passed
Jul 14 22:20:30.948544 ignition[956]: INFO : Ignition finished successfully
Jul 14 22:20:30.951657 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 22:20:30.961514 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 14 22:20:30.964216 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 14 22:20:30.966687 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 22:20:30.967635 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 22:20:30.973366 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 14 22:20:30.977145 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:20:30.977145 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:20:30.980085 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:20:30.983660 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 22:20:30.985035 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 14 22:20:31.001473 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 14 22:20:31.024493 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 22:20:31.024641 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 14 22:20:31.025689 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 22:20:31.028079 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 22:20:31.028586 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 14 22:20:31.029458 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 22:20:31.046917 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 22:20:31.064546 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 22:20:31.073655 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:20:31.074843 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:20:31.076897 systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 22:20:31.078736 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 22:20:31.078840 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 22:20:31.080834 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 22:20:31.082439 systemd[1]: Stopped target basic.target - Basic System.
Jul 14 22:20:31.084418 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 14 22:20:31.086329 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 22:20:31.088198 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 14 22:20:31.090185 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 14 22:20:31.092143 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 22:20:31.094259 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 14 22:20:31.096169 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 14 22:20:31.098320 systemd[1]: Stopped target swap.target - Swaps.
Jul 14 22:20:31.100015 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 22:20:31.100122 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 22:20:31.102191 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:20:31.103743 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:20:31.105753 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 14 22:20:31.105869 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:20:31.163631 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 14 22:20:31.163827 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 14 22:20:31.165826 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 14 22:20:31.165968 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 22:20:31.167825 systemd[1]: Stopped target paths.target - Path Units.
Jul 14 22:20:31.169519 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 14 22:20:31.173433 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:20:31.175053 systemd[1]: Stopped target slices.target - Slice Units.
Jul 14 22:20:31.177109 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 14 22:20:31.178973 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 14 22:20:31.179102 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 22:20:31.181120 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 14 22:20:31.181242 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 22:20:31.183732 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 14 22:20:31.183889 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 22:20:31.185966 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 14 22:20:31.186109 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 14 22:20:31.195516 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 14 22:20:31.196599 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 14 22:20:31.196764 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:20:31.199889 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 14 22:20:31.201029 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 14 22:20:31.201159 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:20:31.208561 ignition[1010]: INFO : Ignition 2.19.0
Jul 14 22:20:31.208561 ignition[1010]: INFO : Stage: umount
Jul 14 22:20:31.203969 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 14 22:20:31.211457 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:20:31.211457 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:20:31.211457 ignition[1010]: INFO : umount: umount passed
Jul 14 22:20:31.211457 ignition[1010]: INFO : Ignition finished successfully
Jul 14 22:20:31.204078 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 22:20:31.211773 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 14 22:20:31.211886 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 14 22:20:31.214148 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 14 22:20:31.214271 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 14 22:20:31.217883 systemd[1]: Stopped target network.target - Network.
Jul 14 22:20:31.219411 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 14 22:20:31.219472 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 14 22:20:31.221287 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 14 22:20:31.221354 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 14 22:20:31.223476 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 14 22:20:31.223525 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 14 22:20:31.225355 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 14 22:20:31.225407 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 14 22:20:31.227394 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 14 22:20:31.229323 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 14 22:20:31.232265 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 14 22:20:31.232377 systemd-networkd[770]: eth0: DHCPv6 lease lost
Jul 14 22:20:31.234443 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 14 22:20:31.234563 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 14 22:20:31.236751 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 14 22:20:31.236852 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 14 22:20:31.238493 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 14 22:20:31.238606 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 14 22:20:31.242353 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 14 22:20:31.242507 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:20:31.244609 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 14 22:20:31.244664 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 14 22:20:31.252481 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 14 22:20:31.253937 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 14 22:20:31.254011 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 22:20:31.256398 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 22:20:31.256461 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:20:31.258420 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 14 22:20:31.258482 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:20:31.260602 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 14 22:20:31.260662 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:20:31.262844 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:20:31.272572 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 14 22:20:31.272755 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 14 22:20:31.274589 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 14 22:20:31.274812 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:20:31.278262 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 14 22:20:31.278333 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:20:31.280457 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 14 22:20:31.280509 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:20:31.282505 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 14 22:20:31.282567 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 22:20:31.284864 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 14 22:20:31.284925 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 14 22:20:31.287050 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 22:20:31.287110 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:20:31.303470 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 14 22:20:31.304590 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 14 22:20:31.304657 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:20:31.307028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:20:31.307088 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:20:31.310599 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 14 22:20:31.310742 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 14 22:20:31.312640 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 14 22:20:31.314921 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 14 22:20:31.325471 systemd[1]: Switching root.
Jul 14 22:20:31.398768 systemd-journald[190]: Journal stopped
Jul 14 22:20:33.438357 systemd-journald[190]: Received SIGTERM from PID 1 (systemd).
Jul 14 22:20:33.438441 kernel: SELinux: policy capability network_peer_controls=1
Jul 14 22:20:33.438470 kernel: SELinux: policy capability open_perms=1
Jul 14 22:20:33.438485 kernel: SELinux: policy capability extended_socket_class=1
Jul 14 22:20:33.438500 kernel: SELinux: policy capability always_check_network=0
Jul 14 22:20:33.438514 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 14 22:20:33.438529 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 14 22:20:33.438549 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 14 22:20:33.438563 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 14 22:20:33.438584 kernel: audit: type=1403 audit(1752531632.665:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 14 22:20:33.438600 systemd[1]: Successfully loaded SELinux policy in 40.070ms.
Jul 14 22:20:33.438633 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.574ms.
Jul 14 22:20:33.438653 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 22:20:33.438669 systemd[1]: Detected virtualization kvm.
Jul 14 22:20:33.438685 systemd[1]: Detected architecture x86-64.
Jul 14 22:20:33.438701 systemd[1]: Detected first boot.
Jul 14 22:20:33.438721 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 22:20:33.438736 zram_generator::config[1053]: No configuration found.
Jul 14 22:20:33.438754 systemd[1]: Populated /etc with preset unit settings.
Jul 14 22:20:33.438769 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 14 22:20:33.438785 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 14 22:20:33.438801 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 14 22:20:33.438818 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 14 22:20:33.438834 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 14 22:20:33.438854 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 14 22:20:33.438870 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 14 22:20:33.438886 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 14 22:20:33.438901 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 14 22:20:33.438917 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 14 22:20:33.438933 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 14 22:20:33.438948 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:20:33.438964 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:20:33.438980 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 14 22:20:33.439002 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 14 22:20:33.439019 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 14 22:20:33.439035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 22:20:33.439050 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 14 22:20:33.439066 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:20:33.439082 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 14 22:20:33.439097 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 14 22:20:33.439119 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 14 22:20:33.439138 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 14 22:20:33.439154 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:20:33.439169 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 22:20:33.439184 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 22:20:33.439199 systemd[1]: Reached target swap.target - Swaps.
Jul 14 22:20:33.439216 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 14 22:20:33.439232 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 14 22:20:33.439248 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:20:33.439263 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:20:33.439283 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:20:33.439300 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 14 22:20:33.439316 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 14 22:20:33.439332 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 14 22:20:33.439366 systemd[1]: Mounting media.mount - External Media Directory...
Jul 14 22:20:33.439382 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:20:33.439398 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 14 22:20:33.439415 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 14 22:20:33.439430 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 14 22:20:33.439449 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 14 22:20:33.439463 systemd[1]: Reached target machines.target - Containers.
Jul 14 22:20:33.439477 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 14 22:20:33.439492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:20:33.439506 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 22:20:33.439520 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 14 22:20:33.439534 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:20:33.439548 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 22:20:33.439565 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:20:33.439579 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 14 22:20:33.439593 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:20:33.439609 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 14 22:20:33.439634 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 14 22:20:33.439649 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 14 22:20:33.439664 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 14 22:20:33.439679 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 14 22:20:33.439700 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 22:20:33.439716 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 22:20:33.439756 systemd-journald[1116]: Collecting audit messages is disabled.
Jul 14 22:20:33.439791 kernel: fuse: init (API version 7.39)
Jul 14 22:20:33.439806 kernel: loop: module loaded
Jul 14 22:20:33.439823 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 14 22:20:33.439838 systemd-journald[1116]: Journal started
Jul 14 22:20:33.439879 systemd-journald[1116]: Runtime Journal (/run/log/journal/4de90a77db0449de86f5ea81782ff1c9) is 6.0M, max 48.4M, 42.3M free.
Jul 14 22:20:33.165431 systemd[1]: Queued start job for default target multi-user.target.
Jul 14 22:20:33.184909 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 14 22:20:33.185400 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 14 22:20:33.447408 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 14 22:20:33.451558 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 22:20:33.453558 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 14 22:20:33.453584 systemd[1]: Stopped verity-setup.service.
Jul 14 22:20:33.456445 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:20:33.459372 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 22:20:33.460664 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 14 22:20:33.461768 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 14 22:20:33.463041 systemd[1]: Mounted media.mount - External Media Directory.
Jul 14 22:20:33.464084 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 14 22:20:33.465238 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 14 22:20:33.466420 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 14 22:20:33.467618 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:20:33.469147 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 14 22:20:33.469363 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 14 22:20:33.470772 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:20:33.470963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:20:33.472327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:20:33.472532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:20:33.473975 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 14 22:20:33.474165 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 14 22:20:33.475514 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:20:33.475718 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:20:33.477022 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:20:33.478363 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 14 22:20:33.479883 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 14 22:20:33.490366 kernel: ACPI: bus type drm_connector registered
Jul 14 22:20:33.491061 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 22:20:33.491278 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 22:20:33.494611 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 14 22:20:33.506438 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 14 22:20:33.508829 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 14 22:20:33.510174 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 14 22:20:33.510215 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 22:20:33.512733 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 14 22:20:33.515468 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 14 22:20:33.521094 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 14 22:20:33.522435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:20:33.524843 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 14 22:20:33.529098 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 14 22:20:33.530565 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 22:20:33.537058 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 14 22:20:33.547577 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 22:20:33.550526 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:20:33.552980 systemd-journald[1116]: Time spent on flushing to /var/log/journal/4de90a77db0449de86f5ea81782ff1c9 is 15.338ms for 952 entries.
Jul 14 22:20:33.552980 systemd-journald[1116]: System Journal (/var/log/journal/4de90a77db0449de86f5ea81782ff1c9) is 8.0M, max 195.6M, 187.6M free.
Jul 14 22:20:33.598464 systemd-journald[1116]: Received client request to flush runtime journal.
Jul 14 22:20:33.598506 kernel: loop0: detected capacity change from 0 to 142488
Jul 14 22:20:33.556335 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 14 22:20:33.559167 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 14 22:20:33.560467 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 14 22:20:33.561997 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 14 22:20:33.563832 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:20:33.582187 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 14 22:20:33.584366 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 14 22:20:33.587104 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 14 22:20:33.592199 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 14 22:20:33.595852 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:20:33.603681 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 14 22:20:33.610671 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 14 22:20:33.626796 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 14 22:20:33.637368 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 14 22:20:33.636686 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 14 22:20:33.649236 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 14 22:20:33.650263 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 14 22:20:33.664371 kernel: loop1: detected capacity change from 0 to 140768
Jul 14 22:20:33.666501 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 14 22:20:33.674536 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 22:20:33.699656 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jul 14 22:20:33.699677 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jul 14 22:20:33.705489 kernel: loop2: detected capacity change from 0 to 229808
Jul 14 22:20:33.706544 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:20:33.737368 kernel: loop3: detected capacity change from 0 to 142488
Jul 14 22:20:33.748376 kernel: loop4: detected capacity change from 0 to 140768
Jul 14 22:20:33.760377 kernel: loop5: detected capacity change from 0 to 229808
Jul 14 22:20:33.767286 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 14 22:20:33.767939 (sd-merge)[1192]: Merged extensions into '/usr'.
Jul 14 22:20:33.771967 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 14 22:20:33.772047 systemd[1]: Reloading...
Jul 14 22:20:33.836103 zram_generator::config[1221]: No configuration found.
Jul 14 22:20:33.896836 ldconfig[1140]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 14 22:20:33.953232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:20:34.005715 systemd[1]: Reloading finished in 233 ms.
Jul 14 22:20:34.040473 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 14 22:20:34.042178 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 14 22:20:34.061623 systemd[1]: Starting ensure-sysext.service...
Jul 14 22:20:34.063925 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 22:20:34.069148 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
Jul 14 22:20:34.069164 systemd[1]: Reloading...
Jul 14 22:20:34.091410 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 14 22:20:34.091798 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 14 22:20:34.092798 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 14 22:20:34.093095 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jul 14 22:20:34.093178 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jul 14 22:20:34.096445 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 22:20:34.096458 systemd-tmpfiles[1257]: Skipping /boot
Jul 14 22:20:34.109782 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 22:20:34.109796 systemd-tmpfiles[1257]: Skipping /boot
Jul 14 22:20:34.125368 zram_generator::config[1286]: No configuration found.
Jul 14 22:20:34.231979 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:20:34.280771 systemd[1]: Reloading finished in 211 ms.
Jul 14 22:20:34.298020 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 14 22:20:34.311950 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:20:34.318802 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 14 22:20:34.321165 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 14 22:20:34.323431 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 14 22:20:34.327753 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 22:20:34.331631 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:20:34.334215 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 14 22:20:34.339077 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:20:34.339248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:20:34.342016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:20:34.345075 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:20:34.348566 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:20:34.350472 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:20:34.357552 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 14 22:20:34.358626 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:20:34.359692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:20:34.359906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:20:34.361799 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:20:34.362229 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:20:34.364710 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:20:34.364903 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:20:34.366692 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 14 22:20:34.369958 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jul 14 22:20:34.377599 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:20:34.377818 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:20:34.383920 augenrules[1352]: No rules Jul 14 22:20:34.386657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:20:34.389672 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:20:34.392064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 14 22:20:34.393158 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:20:34.394549 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 14 22:20:34.398377 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:20:34.399381 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 14 22:20:34.401021 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:20:34.402865 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 14 22:20:34.404913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:20:34.405137 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:20:34.406741 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:20:34.407013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:20:34.408825 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:20:34.409048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:20:34.412823 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:20:34.418604 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 22:20:34.429060 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:20:34.429266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:20:34.436531 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:20:34.439580 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 14 22:20:34.447547 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:20:34.451517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:20:34.452640 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:20:34.454485 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 22:20:34.455500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:20:34.456205 systemd[1]: Finished ensure-sysext.service.
Jul 14 22:20:34.459740 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 14 22:20:34.461613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:20:34.461795 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:20:34.463294 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 22:20:34.463483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 22:20:34.464890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:20:34.465058 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:20:34.478575 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 14 22:20:34.478742 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 22:20:34.486510 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 14 22:20:34.488294 systemd-resolved[1326]: Positive Trust Anchors:
Jul 14 22:20:34.488318 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 22:20:34.488375 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 22:20:34.488447 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 22:20:34.488920 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:20:34.489113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:20:34.492057 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 22:20:34.492810 systemd-resolved[1326]: Defaulting to hostname 'linux'.
Jul 14 22:20:34.494922 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 22:20:34.496579 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:20:34.510366 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1374)
Jul 14 22:20:34.529201 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 22:20:34.540360 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 14 22:20:34.540379 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 14 22:20:34.548356 kernel: ACPI: button: Power Button [PWRF]
Jul 14 22:20:34.559513 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 14 22:20:34.561105 systemd-networkd[1393]: lo: Link UP
Jul 14 22:20:34.561119 systemd-networkd[1393]: lo: Gained carrier
Jul 14 22:20:34.563337 systemd-networkd[1393]: Enumeration completed
Jul 14 22:20:34.563528 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 22:20:34.564781 systemd[1]: Reached target network.target - Network.
Jul 14 22:20:34.565266 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:20:34.565276 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 22:20:34.566525 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:20:34.566557 systemd-networkd[1393]: eth0: Link UP
Jul 14 22:20:34.566561 systemd-networkd[1393]: eth0: Gained carrier
Jul 14 22:20:34.566571 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:20:34.573122 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 14 22:20:34.574118 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 14 22:20:34.575112 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 14 22:20:34.573535 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 14 22:20:34.578236 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 14 22:20:34.580518 systemd[1]: Reached target time-set.target - System Time Set.
Jul 14 22:20:34.582401 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jul 14 22:20:34.583464 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 22:20:34.584793 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection.
Jul 14 22:20:35.185080 systemd-resolved[1326]: Clock change detected. Flushing caches.
Jul 14 22:20:35.185165 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 14 22:20:35.185250 systemd-timesyncd[1401]: Initial clock synchronization to Mon 2025-07-14 22:20:35.184981 UTC.
Jul 14 22:20:35.216774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:20:35.287576 kernel: mousedev: PS/2 mouse device common for all mice
Jul 14 22:20:35.298570 kernel: kvm_amd: TSC scaling supported
Jul 14 22:20:35.298622 kernel: kvm_amd: Nested Virtualization enabled
Jul 14 22:20:35.298641 kernel: kvm_amd: Nested Paging enabled
Jul 14 22:20:35.298653 kernel: kvm_amd: LBR virtualization supported
Jul 14 22:20:35.298673 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 14 22:20:35.298684 kernel: kvm_amd: Virtual GIF supported
Jul 14 22:20:35.314652 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:20:35.318593 kernel: EDAC MC: Ver: 3.0.0
Jul 14 22:20:35.356720 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 14 22:20:35.369696 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 14 22:20:35.378469 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 22:20:35.412745 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 14 22:20:35.414198 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:20:35.415279 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 22:20:35.416398 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 14 22:20:35.417643 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 14 22:20:35.419076 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 14 22:20:35.420265 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 14 22:20:35.421475 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 14 22:20:35.422689 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 14 22:20:35.422713 systemd[1]: Reached target paths.target - Path Units.
Jul 14 22:20:35.423597 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 22:20:35.425297 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 14 22:20:35.427885 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 14 22:20:35.438066 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 14 22:20:35.440306 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 14 22:20:35.441841 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 14 22:20:35.442959 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 22:20:35.443913 systemd[1]: Reached target basic.target - Basic System.
Jul 14 22:20:35.444872 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 14 22:20:35.444900 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 14 22:20:35.445856 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 14 22:20:35.447811 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 14 22:20:35.449685 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 14 22:20:35.455252 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 22:20:35.455288 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 14 22:20:35.456285 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 14 22:20:35.458105 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 14 22:20:35.460649 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 14 22:20:35.462417 jq[1433]: false
Jul 14 22:20:35.463753 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 14 22:20:35.466692 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 14 22:20:35.473640 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 14 22:20:35.475072 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 14 22:20:35.475522 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 14 22:20:35.477218 systemd[1]: Starting update-engine.service - Update Engine...
Jul 14 22:20:35.481378 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 14 22:20:35.484524 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 14 22:20:35.490197 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found loop3
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found loop4
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found loop5
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found sr0
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found vda
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found vda1
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found vda2
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found vda3
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found usr
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found vda4
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found vda6
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found vda7
Jul 14 22:20:35.492478 extend-filesystems[1434]: Found vda9
Jul 14 22:20:35.492478 extend-filesystems[1434]: Checking size of /dev/vda9
Jul 14 22:20:35.490408 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 14 22:20:35.521382 extend-filesystems[1434]: Resized partition /dev/vda9
Jul 14 22:20:35.502649 dbus-daemon[1432]: [system] SELinux support is enabled
Jul 14 22:20:35.522534 update_engine[1442]: I20250714 22:20:35.501585 1442 main.cc:92] Flatcar Update Engine starting
Jul 14 22:20:35.522534 update_engine[1442]: I20250714 22:20:35.505242 1442 update_check_scheduler.cc:74] Next update check in 11m53s
Jul 14 22:20:35.522891 jq[1444]: true
Jul 14 22:20:35.494946 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 14 22:20:35.495154 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 14 22:20:35.503037 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 14 22:20:35.516120 systemd[1]: motdgen.service: Deactivated successfully.
Jul 14 22:20:35.516617 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 14 22:20:35.527787 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024)
Jul 14 22:20:35.529399 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 14 22:20:35.529441 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 14 22:20:35.530902 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 14 22:20:35.530926 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 14 22:20:35.533592 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 14 22:20:35.536267 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 14 22:20:35.537068 systemd[1]: Started update-engine.service - Update Engine.
Jul 14 22:20:35.539614 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1374)
Jul 14 22:20:35.544167 tar[1451]: linux-amd64/LICENSE
Jul 14 22:20:35.544469 tar[1451]: linux-amd64/helm
Jul 14 22:20:35.551805 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 14 22:20:35.553332 jq[1455]: true
Jul 14 22:20:35.561627 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 14 22:20:35.561120 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 14 22:20:35.561147 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 14 22:20:35.562184 systemd-logind[1439]: New seat seat0.
Jul 14 22:20:35.576386 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 14 22:20:35.589395 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 14 22:20:35.589395 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 14 22:20:35.589395 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 14 22:20:35.600988 extend-filesystems[1434]: Resized filesystem in /dev/vda9
Jul 14 22:20:35.590963 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 14 22:20:35.592012 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 14 22:20:35.605329 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 14 22:20:35.616878 bash[1491]: Updated "/home/core/.ssh/authorized_keys"
Jul 14 22:20:35.618758 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 14 22:20:35.620734 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 14 22:20:35.740643 containerd[1460]: time="2025-07-14T22:20:35.740542709Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 14 22:20:35.750274 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 14 22:20:35.766042 containerd[1460]: time="2025-07-14T22:20:35.765996934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 14 22:20:35.768044 containerd[1460]: time="2025-07-14T22:20:35.767999761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 14 22:20:35.768044 containerd[1460]: time="2025-07-14T22:20:35.768041960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 14 22:20:35.768099 containerd[1460]: time="2025-07-14T22:20:35.768061797Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 14 22:20:35.768243 containerd[1460]: time="2025-07-14T22:20:35.768226106Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 14 22:20:35.768269 containerd[1460]: time="2025-07-14T22:20:35.768246584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 14 22:20:35.768331 containerd[1460]: time="2025-07-14T22:20:35.768312397Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 22:20:35.768351 containerd[1460]: time="2025-07-14T22:20:35.768329139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 14 22:20:35.768564 containerd[1460]: time="2025-07-14T22:20:35.768531248Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 22:20:35.768591 containerd[1460]: time="2025-07-14T22:20:35.768564801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 14 22:20:35.768591 containerd[1460]: time="2025-07-14T22:20:35.768580480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 22:20:35.768625 containerd[1460]: time="2025-07-14T22:20:35.768590990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 14 22:20:35.768700 containerd[1460]: time="2025-07-14T22:20:35.768683774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 14 22:20:35.768965 containerd[1460]: time="2025-07-14T22:20:35.768936859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 14 22:20:35.769077 containerd[1460]: time="2025-07-14T22:20:35.769059369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 22:20:35.769077 containerd[1460]: time="2025-07-14T22:20:35.769075258Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 14 22:20:35.769186 containerd[1460]: time="2025-07-14T22:20:35.769170357Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 14 22:20:35.769242 containerd[1460]: time="2025-07-14T22:20:35.769226672Z" level=info msg="metadata content store policy set" policy=shared
Jul 14 22:20:35.774230 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 14 22:20:35.775826 containerd[1460]: time="2025-07-14T22:20:35.775796235Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 14 22:20:35.775862 containerd[1460]: time="2025-07-14T22:20:35.775845758Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 14 22:20:35.775883 containerd[1460]: time="2025-07-14T22:20:35.775861708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 14 22:20:35.775883 containerd[1460]: time="2025-07-14T22:20:35.775878589Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 14 22:20:35.775934 containerd[1460]: time="2025-07-14T22:20:35.775894469Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776021868Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776271617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776371985Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776385871Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776398114Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776412601Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776427569Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776440403Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776454319Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776467204Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776479386Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776491359Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776505365Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 14 22:20:35.776586 containerd[1460]: time="2025-07-14T22:20:35.776523710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776536253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776568403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776617225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776630610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776643705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776654996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776668621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776681505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776695372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776707244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776720348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776732511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776749643Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776770312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.776847 containerd[1460]: time="2025-07-14T22:20:35.776790420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.777131 containerd[1460]: time="2025-07-14T22:20:35.776801881Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 14 22:20:35.777131 containerd[1460]: time="2025-07-14T22:20:35.776865941Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 14 22:20:35.777131 containerd[1460]: time="2025-07-14T22:20:35.776884126Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 14 22:20:35.777131 containerd[1460]: time="2025-07-14T22:20:35.776977551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 14 22:20:35.777131 containerd[1460]: time="2025-07-14T22:20:35.776992990Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 14 22:20:35.777131 containerd[1460]: time="2025-07-14T22:20:35.777012336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.777131 containerd[1460]: time="2025-07-14T22:20:35.777025280Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 14 22:20:35.777131 containerd[1460]: time="2025-07-14T22:20:35.777035750Z" level=info msg="NRI interface is disabled by configuration."
Jul 14 22:20:35.777131 containerd[1460]: time="2025-07-14T22:20:35.777045699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 14 22:20:35.778013 containerd[1460]: time="2025-07-14T22:20:35.777289987Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 22:20:35.778013 containerd[1460]: time="2025-07-14T22:20:35.777342024Z" level=info msg="Connect containerd service" Jul 14 22:20:35.778013 containerd[1460]: time="2025-07-14T22:20:35.777373674Z" level=info msg="using legacy CRI server" Jul 14 22:20:35.778013 containerd[1460]: time="2025-07-14T22:20:35.777380076Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 22:20:35.778013 containerd[1460]: time="2025-07-14T22:20:35.777452772Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 22:20:35.778263 containerd[1460]: time="2025-07-14T22:20:35.778062977Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:20:35.778263 containerd[1460]: time="2025-07-14T22:20:35.778186709Z" level=info msg="Start subscribing containerd event" Jul 14 22:20:35.778263 containerd[1460]: time="2025-07-14T22:20:35.778225482Z" level=info msg="Start recovering state" Jul 14 22:20:35.778318 containerd[1460]: time="2025-07-14T22:20:35.778277950Z" level=info msg="Start event monitor" Jul 14 22:20:35.778318 containerd[1460]: time="2025-07-14T22:20:35.778292457Z" level=info msg="Start 
snapshots syncer" Jul 14 22:20:35.778318 containerd[1460]: time="2025-07-14T22:20:35.778301634Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:20:35.778318 containerd[1460]: time="2025-07-14T22:20:35.778309890Z" level=info msg="Start streaming server" Jul 14 22:20:35.778823 containerd[1460]: time="2025-07-14T22:20:35.778803045Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:20:35.778876 containerd[1460]: time="2025-07-14T22:20:35.778861234Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:20:35.779507 containerd[1460]: time="2025-07-14T22:20:35.779473212Z" level=info msg="containerd successfully booted in 0.040767s" Jul 14 22:20:35.785886 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 22:20:35.786921 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 22:20:35.792516 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:20:35.792840 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 22:20:35.800082 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 22:20:35.810581 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 22:20:35.813441 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 22:20:35.816204 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 14 22:20:35.817434 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 22:20:35.991423 tar[1451]: linux-amd64/README.md Jul 14 22:20:36.004029 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 22:20:37.180767 systemd-networkd[1393]: eth0: Gained IPv6LL Jul 14 22:20:37.184023 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 22:20:37.185835 systemd[1]: Reached target network-online.target - Network is Online. 
Jul 14 22:20:37.195946 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 22:20:37.198657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:20:37.200789 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 22:20:37.218949 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 22:20:37.219191 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 22:20:37.221024 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 22:20:37.224457 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 22:20:37.909832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:20:37.911485 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 22:20:37.912744 systemd[1]: Startup finished in 713ms (kernel) + 30.971s (initrd) + 4.687s (userspace) = 36.372s. Jul 14 22:20:37.913951 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:20:38.331068 kubelet[1546]: E0714 22:20:38.330997 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:20:38.335185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:20:38.335414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:20:45.388779 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 22:20:45.389979 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:37232.service - OpenSSH per-connection server daemon (10.0.0.1:37232). 
Jul 14 22:20:45.432668 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 37232 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:45.434795 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:45.443725 systemd-logind[1439]: New session 1 of user core. Jul 14 22:20:45.444986 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 22:20:45.455840 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 22:20:45.468191 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 22:20:45.479890 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 22:20:45.482719 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:20:45.584210 systemd[1564]: Queued start job for default target default.target. Jul 14 22:20:45.598982 systemd[1564]: Created slice app.slice - User Application Slice. Jul 14 22:20:45.599009 systemd[1564]: Reached target paths.target - Paths. Jul 14 22:20:45.599023 systemd[1564]: Reached target timers.target - Timers. Jul 14 22:20:45.600651 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 22:20:45.611463 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 22:20:45.611540 systemd[1564]: Reached target sockets.target - Sockets. Jul 14 22:20:45.611569 systemd[1564]: Reached target basic.target - Basic System. Jul 14 22:20:45.611607 systemd[1564]: Reached target default.target - Main User Target. Jul 14 22:20:45.611640 systemd[1564]: Startup finished in 122ms. Jul 14 22:20:45.612267 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 22:20:45.613899 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 14 22:20:45.678012 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:37240.service - OpenSSH per-connection server daemon (10.0.0.1:37240). Jul 14 22:20:45.716433 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 37240 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:45.718064 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:45.722138 systemd-logind[1439]: New session 2 of user core. Jul 14 22:20:45.731660 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 22:20:45.785492 sshd[1575]: pam_unix(sshd:session): session closed for user core Jul 14 22:20:45.804298 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:37240.service: Deactivated successfully. Jul 14 22:20:45.805923 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 22:20:45.807219 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit. Jul 14 22:20:45.816810 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:37242.service - OpenSSH per-connection server daemon (10.0.0.1:37242). Jul 14 22:20:45.817632 systemd-logind[1439]: Removed session 2. Jul 14 22:20:45.850927 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 37242 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:45.852341 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:45.855939 systemd-logind[1439]: New session 3 of user core. Jul 14 22:20:45.870674 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 22:20:45.919404 sshd[1582]: pam_unix(sshd:session): session closed for user core Jul 14 22:20:45.933428 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:37242.service: Deactivated successfully. Jul 14 22:20:45.935167 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 22:20:45.936681 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit. 
Jul 14 22:20:45.942957 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:37254.service - OpenSSH per-connection server daemon (10.0.0.1:37254). Jul 14 22:20:45.943804 systemd-logind[1439]: Removed session 3. Jul 14 22:20:45.975766 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 37254 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:45.977200 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:45.980708 systemd-logind[1439]: New session 4 of user core. Jul 14 22:20:45.996659 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 22:20:46.050420 sshd[1589]: pam_unix(sshd:session): session closed for user core Jul 14 22:20:46.070203 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:37254.service: Deactivated successfully. Jul 14 22:20:46.071781 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 22:20:46.073142 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit. Jul 14 22:20:46.080858 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:37262.service - OpenSSH per-connection server daemon (10.0.0.1:37262). Jul 14 22:20:46.082153 systemd-logind[1439]: Removed session 4. Jul 14 22:20:46.115167 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 37262 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:46.116763 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:46.120698 systemd-logind[1439]: New session 5 of user core. Jul 14 22:20:46.136675 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 14 22:20:46.384813 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 22:20:46.385150 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:20:46.403816 sudo[1599]: pam_unix(sudo:session): session closed for user root Jul 14 22:20:46.405841 sshd[1596]: pam_unix(sshd:session): session closed for user core Jul 14 22:20:46.420314 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:37262.service: Deactivated successfully. Jul 14 22:20:46.421968 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 22:20:46.423473 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit. Jul 14 22:20:46.434817 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:37266.service - OpenSSH per-connection server daemon (10.0.0.1:37266). Jul 14 22:20:46.435704 systemd-logind[1439]: Removed session 5. Jul 14 22:20:46.468401 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 37266 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:46.470072 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:46.473995 systemd-logind[1439]: New session 6 of user core. Jul 14 22:20:46.485662 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 22:20:46.539387 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 22:20:46.539750 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:20:46.543566 sudo[1608]: pam_unix(sudo:session): session closed for user root Jul 14 22:20:46.551297 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 14 22:20:46.551779 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:20:46.582834 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jul 14 22:20:46.584658 auditctl[1611]: No rules Jul 14 22:20:46.585097 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 22:20:46.585317 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 14 22:20:46.587949 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:20:46.618693 augenrules[1629]: No rules Jul 14 22:20:46.620382 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:20:46.622036 sudo[1607]: pam_unix(sudo:session): session closed for user root Jul 14 22:20:46.623920 sshd[1604]: pam_unix(sshd:session): session closed for user core Jul 14 22:20:46.637342 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:37266.service: Deactivated successfully. Jul 14 22:20:46.639259 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 22:20:46.640846 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit. Jul 14 22:20:46.650800 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:37280.service - OpenSSH per-connection server daemon (10.0.0.1:37280). Jul 14 22:20:46.651744 systemd-logind[1439]: Removed session 6. Jul 14 22:20:46.687009 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 37280 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:46.688637 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:46.692715 systemd-logind[1439]: New session 7 of user core. Jul 14 22:20:46.707684 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 22:20:46.762139 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 22:20:46.762575 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:20:47.041784 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 14 22:20:47.041930 (dockerd)[1658]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 22:20:47.312871 dockerd[1658]: time="2025-07-14T22:20:47.312743410Z" level=info msg="Starting up" Jul 14 22:20:47.513874 systemd[1]: var-lib-docker-metacopy\x2dcheck530914501-merged.mount: Deactivated successfully. Jul 14 22:20:47.539896 dockerd[1658]: time="2025-07-14T22:20:47.539857251Z" level=info msg="Loading containers: start." Jul 14 22:20:47.644597 kernel: Initializing XFRM netlink socket Jul 14 22:20:47.717788 systemd-networkd[1393]: docker0: Link UP Jul 14 22:20:47.738980 dockerd[1658]: time="2025-07-14T22:20:47.738942217Z" level=info msg="Loading containers: done." Jul 14 22:20:47.752256 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck756260574-merged.mount: Deactivated successfully. Jul 14 22:20:47.756770 dockerd[1658]: time="2025-07-14T22:20:47.756727948Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 22:20:47.756851 dockerd[1658]: time="2025-07-14T22:20:47.756832925Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 14 22:20:47.756983 dockerd[1658]: time="2025-07-14T22:20:47.756953962Z" level=info msg="Daemon has completed initialization" Jul 14 22:20:47.793031 dockerd[1658]: time="2025-07-14T22:20:47.791779104Z" level=info msg="API listen on /run/docker.sock" Jul 14 22:20:47.792277 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 22:20:48.490034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 22:20:48.499772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 14 22:20:48.675408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:20:48.679961 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:20:48.719374 kubelet[1813]: E0714 22:20:48.719256 1813 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:20:48.726158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:20:48.726364 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:20:58.121081 containerd[1460]: time="2025-07-14T22:20:58.121036679Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.0\"" Jul 14 22:20:58.740119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 22:20:58.753735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:20:58.923845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 22:20:58.928069 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:20:58.966769 kubelet[1832]: E0714 22:20:58.966707 1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:20:58.971064 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:20:58.971279 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:21:08.908869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2601937967.mount: Deactivated successfully. Jul 14 22:21:08.989950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 14 22:21:08.996715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:21:09.169724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:21:09.182880 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:21:09.218921 kubelet[1862]: E0714 22:21:09.218788 1862 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:21:09.223356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:21:09.223640 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 14 22:21:10.189016 containerd[1460]: time="2025-07-14T22:21:10.188949653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:10.189748 containerd[1460]: time="2025-07-14T22:21:10.189685912Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.0: active requests=0, bytes read=30074507" Jul 14 22:21:10.190795 containerd[1460]: time="2025-07-14T22:21:10.190767478Z" level=info msg="ImageCreate event name:\"sha256:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:10.194209 containerd[1460]: time="2025-07-14T22:21:10.194169486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6679a9970a8b2f18647b33bf02e5e9895d286689256e2f7172481b4096e46a32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:10.195239 containerd[1460]: time="2025-07-14T22:21:10.195191666Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.0\" with image id \"sha256:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6679a9970a8b2f18647b33bf02e5e9895d286689256e2f7172481b4096e46a32\", size \"30071307\" in 12.074109831s" Jul 14 22:21:10.195296 containerd[1460]: time="2025-07-14T22:21:10.195240449Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.0\" returns image reference \"sha256:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4\"" Jul 14 22:21:10.195947 containerd[1460]: time="2025-07-14T22:21:10.195914098Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.0\"" Jul 14 22:21:11.690524 containerd[1460]: time="2025-07-14T22:21:11.690462208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:11.691406 containerd[1460]: time="2025-07-14T22:21:11.691353394Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.0: active requests=0, bytes read=26007510" Jul 14 22:21:11.692672 containerd[1460]: time="2025-07-14T22:21:11.692635121Z" level=info msg="ImageCreate event name:\"sha256:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:11.695413 containerd[1460]: time="2025-07-14T22:21:11.695372969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0b32ab11fd06504608cdb9084f7284106b4f5f07f35eb8823e70ea0eaaf252a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:11.696339 containerd[1460]: time="2025-07-14T22:21:11.696295163Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.0\" with image id \"sha256:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0b32ab11fd06504608cdb9084f7284106b4f5f07f35eb8823e70ea0eaaf252a\", size \"27635030\" in 1.500344094s" Jul 14 22:21:11.696339 containerd[1460]: time="2025-07-14T22:21:11.696330351Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.0\" returns image reference \"sha256:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02\"" Jul 14 22:21:11.697223 containerd[1460]: time="2025-07-14T22:21:11.697200776Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.0\"" Jul 14 22:21:13.102483 containerd[1460]: time="2025-07-14T22:21:13.102424215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:13.103301 containerd[1460]: time="2025-07-14T22:21:13.103247946Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.0: active requests=0, bytes read=20148946" Jul 14 22:21:13.104527 containerd[1460]: time="2025-07-14T22:21:13.104495739Z" level=info msg="ImageCreate event name:\"sha256:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:13.107514 containerd[1460]: time="2025-07-14T22:21:13.107463984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8dd2fbeb7f711da53a89ded239e54133f34110d98de887a39a9021e651b51f1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:13.108413 containerd[1460]: time="2025-07-14T22:21:13.108378318Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.0\" with image id \"sha256:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8dd2fbeb7f711da53a89ded239e54133f34110d98de887a39a9021e651b51f1f\", size \"21776484\" in 1.41115023s" Jul 14 22:21:13.108413 containerd[1460]: time="2025-07-14T22:21:13.108409157Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.0\" returns image reference \"sha256:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4\"" Jul 14 22:21:13.108902 containerd[1460]: time="2025-07-14T22:21:13.108877516Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.0\"" Jul 14 22:21:14.083186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3075747735.mount: Deactivated successfully. 
Jul 14 22:21:14.915160 containerd[1460]: time="2025-07-14T22:21:14.915093503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:14.915882 containerd[1460]: time="2025-07-14T22:21:14.915846756Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.0: active requests=0, bytes read=31888707" Jul 14 22:21:14.916972 containerd[1460]: time="2025-07-14T22:21:14.916937115Z" level=info msg="ImageCreate event name:\"sha256:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:14.919036 containerd[1460]: time="2025-07-14T22:21:14.918975600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:05f8984642d05b1b1a6c37605a4a566e46e7290f9291d17885f096c36861095b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:14.919533 containerd[1460]: time="2025-07-14T22:21:14.919500115Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.0\" with image id \"sha256:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68\", repo tag \"registry.k8s.io/kube-proxy:v1.33.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:05f8984642d05b1b1a6c37605a4a566e46e7290f9291d17885f096c36861095b\", size \"31887726\" in 1.8105914s" Jul 14 22:21:14.919576 containerd[1460]: time="2025-07-14T22:21:14.919530173Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.0\" returns image reference \"sha256:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68\"" Jul 14 22:21:14.920063 containerd[1460]: time="2025-07-14T22:21:14.920008610Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 14 22:21:15.436199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924769697.mount: Deactivated successfully. 
Jul 14 22:21:16.528403 containerd[1460]: time="2025-07-14T22:21:16.528336951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:16.529185 containerd[1460]: time="2025-07-14T22:21:16.529141078Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 14 22:21:16.530338 containerd[1460]: time="2025-07-14T22:21:16.530306214Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:16.533087 containerd[1460]: time="2025-07-14T22:21:16.533052091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:16.534062 containerd[1460]: time="2025-07-14T22:21:16.534034409Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.61399999s" Jul 14 22:21:16.534062 containerd[1460]: time="2025-07-14T22:21:16.534060709Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 14 22:21:16.534638 containerd[1460]: time="2025-07-14T22:21:16.534458459Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 22:21:16.962157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1426305287.mount: Deactivated successfully. 
Jul 14 22:21:16.968743 containerd[1460]: time="2025-07-14T22:21:16.968705057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:16.969564 containerd[1460]: time="2025-07-14T22:21:16.969507841Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 14 22:21:16.970750 containerd[1460]: time="2025-07-14T22:21:16.970714908Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:16.973158 containerd[1460]: time="2025-07-14T22:21:16.973124122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:16.974049 containerd[1460]: time="2025-07-14T22:21:16.974013561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 439.525716ms" Jul 14 22:21:16.974089 containerd[1460]: time="2025-07-14T22:21:16.974046755Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 14 22:21:16.974454 containerd[1460]: time="2025-07-14T22:21:16.974429997Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 14 22:21:19.240078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 14 22:21:19.249745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 14 22:21:20.439422 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:21:20.444908 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 22:21:20.481273 kubelet[1999]: E0714 22:21:20.481156 1999 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 22:21:20.485584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 22:21:20.485822 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 22:21:20.572719 update_engine[1442]: I20250714 22:21:20.572631 1442 update_attempter.cc:509] Updating boot flags...
Jul 14 22:21:20.778587 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2018)
Jul 14 22:21:20.825581 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2022)
Jul 14 22:21:20.858582 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2022)
Jul 14 22:21:20.921922 containerd[1460]: time="2025-07-14T22:21:20.921873177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:21:20.922925 containerd[1460]: time="2025-07-14T22:21:20.922890784Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739"
Jul 14 22:21:20.924681 containerd[1460]: time="2025-07-14T22:21:20.924643690Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:21:20.927904 containerd[1460]: time="2025-07-14T22:21:20.927868847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:21:20.929046 containerd[1460]: time="2025-07-14T22:21:20.929006642Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.954546357s"
Jul 14 22:21:20.929086 containerd[1460]: time="2025-07-14T22:21:20.929045656Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 14 22:21:30.490097 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 14 22:21:30.499700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:21:30.665827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:21:30.671103 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 22:21:30.715594 kubelet[2040]: E0714 22:21:30.715457 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 22:21:30.720035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 22:21:30.720241 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 22:21:33.705889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:21:33.713750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:21:33.737491 systemd[1]: Reloading requested from client PID 2074 ('systemctl') (unit session-7.scope)...
Jul 14 22:21:33.737504 systemd[1]: Reloading...
Jul 14 22:21:33.823596 zram_generator::config[2115]: No configuration found.
Jul 14 22:21:34.967937 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:21:35.050810 systemd[1]: Reloading finished in 1312 ms.
Jul 14 22:21:35.109184 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:21:35.111584 systemd[1]: kubelet.service: Deactivated successfully.
Jul 14 22:21:35.111929 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:21:35.114118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:21:35.284511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:21:35.289500 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 14 22:21:35.327388 kubelet[2163]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 22:21:35.327388 kubelet[2163]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 14 22:21:35.327388 kubelet[2163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 22:21:35.327388 kubelet[2163]: I0714 22:21:35.327180 2163 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 14 22:21:36.562770 kubelet[2163]: I0714 22:21:36.562715 2163 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 14 22:21:36.562770 kubelet[2163]: I0714 22:21:36.562759 2163 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 14 22:21:36.563275 kubelet[2163]: I0714 22:21:36.563023 2163 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 14 22:21:36.586993 kubelet[2163]: I0714 22:21:36.586935 2163 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 14 22:21:36.587833 kubelet[2163]: E0714 22:21:36.587760 2163 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 14 22:21:36.591727 kubelet[2163]: E0714 22:21:36.591696 2163 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 14 22:21:36.591727 kubelet[2163]: I0714 22:21:36.591724 2163 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 14 22:21:36.666128 kubelet[2163]: I0714 22:21:36.666074 2163 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 14 22:21:36.666411 kubelet[2163]: I0714 22:21:36.666370 2163 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 14 22:21:36.666602 kubelet[2163]: I0714 22:21:36.666399 2163 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 14 22:21:36.666709 kubelet[2163]: I0714 22:21:36.666608 2163 topology_manager.go:138] "Creating topology manager with none policy"
Jul 14 22:21:36.666709 kubelet[2163]: I0714 22:21:36.666619 2163 container_manager_linux.go:303] "Creating device plugin manager"
Jul 14 22:21:36.666794 kubelet[2163]: I0714 22:21:36.666777 2163 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 22:21:36.668762 kubelet[2163]: I0714 22:21:36.668736 2163 kubelet.go:480] "Attempting to sync node with API server"
Jul 14 22:21:36.668762 kubelet[2163]: I0714 22:21:36.668755 2163 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 14 22:21:36.668842 kubelet[2163]: I0714 22:21:36.668781 2163 kubelet.go:386] "Adding apiserver pod source"
Jul 14 22:21:36.668842 kubelet[2163]: I0714 22:21:36.668796 2163 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 14 22:21:36.676108 kubelet[2163]: E0714 22:21:36.676076 2163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 14 22:21:36.676233 kubelet[2163]: I0714 22:21:36.676207 2163 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 14 22:21:36.676319 kubelet[2163]: E0714 22:21:36.675987 2163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 14 22:21:36.676780 kubelet[2163]: I0714 22:21:36.676757 2163 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 14 22:21:36.678354 kubelet[2163]: W0714 22:21:36.678326 2163 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 14 22:21:36.680809 kubelet[2163]: I0714 22:21:36.680786 2163 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 14 22:21:36.680856 kubelet[2163]: I0714 22:21:36.680834 2163 server.go:1289] "Started kubelet"
Jul 14 22:21:36.680970 kubelet[2163]: I0714 22:21:36.680916 2163 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 14 22:21:36.681350 kubelet[2163]: I0714 22:21:36.681331 2163 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 14 22:21:36.681407 kubelet[2163]: I0714 22:21:36.681329 2163 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 14 22:21:36.683004 kubelet[2163]: I0714 22:21:36.682478 2163 server.go:317] "Adding debug handlers to kubelet server"
Jul 14 22:21:36.683083 kubelet[2163]: I0714 22:21:36.683034 2163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 14 22:21:36.684020 kubelet[2163]: I0714 22:21:36.683878 2163 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 22:21:36.685055 kubelet[2163]: E0714 22:21:36.684114 2163 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:21:36.685055 kubelet[2163]: I0714 22:21:36.684159 2163 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 14 22:21:36.685055 kubelet[2163]: I0714 22:21:36.684353 2163 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 14 22:21:36.685055 kubelet[2163]: I0714 22:21:36.684410 2163 reconciler.go:26] "Reconciler: start to sync state"
Jul 14 22:21:36.685055 kubelet[2163]: E0714 22:21:36.684713 2163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 14 22:21:36.685055 kubelet[2163]: E0714 22:21:36.683898 2163 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523e4af645b02a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:21:36.68080849 +0000 UTC m=+1.387085481,LastTimestamp:2025-07-14 22:21:36.68080849 +0000 UTC m=+1.387085481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 14 22:21:36.685055 kubelet[2163]: E0714 22:21:36.684914 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms"
Jul 14 22:21:36.686148 kubelet[2163]: I0714 22:21:36.685900 2163 factory.go:223] Registration of the systemd container factory successfully
Jul 14 22:21:36.686148 kubelet[2163]: I0714 22:21:36.685989 2163 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 14 22:21:36.686853 kubelet[2163]: I0714 22:21:36.686837 2163 factory.go:223] Registration of the containerd container factory successfully
Jul 14 22:21:36.686980 kubelet[2163]: E0714 22:21:36.686897 2163 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 14 22:21:36.703256 kubelet[2163]: I0714 22:21:36.703196 2163 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 14 22:21:36.704672 kubelet[2163]: I0714 22:21:36.704632 2163 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 14 22:21:36.704672 kubelet[2163]: I0714 22:21:36.704660 2163 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 14 22:21:36.704672 kubelet[2163]: I0714 22:21:36.704682 2163 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 14 22:21:36.704672 kubelet[2163]: I0714 22:21:36.704693 2163 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 14 22:21:36.704945 kubelet[2163]: E0714 22:21:36.704752 2163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 14 22:21:36.705802 kubelet[2163]: I0714 22:21:36.705762 2163 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 14 22:21:36.705802 kubelet[2163]: I0714 22:21:36.705785 2163 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 14 22:21:36.705802 kubelet[2163]: I0714 22:21:36.705802 2163 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 22:21:36.706033 kubelet[2163]: E0714 22:21:36.706005 2163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 14 22:21:36.784541 kubelet[2163]: E0714 22:21:36.784494 2163 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:21:36.805645 kubelet[2163]: E0714 22:21:36.805605 2163 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 14 22:21:36.808688 kubelet[2163]: I0714 22:21:36.808657 2163 policy_none.go:49] "None policy: Start"
Jul 14 22:21:36.808688 kubelet[2163]: I0714 22:21:36.808686 2163 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 14 22:21:36.808784 kubelet[2163]: I0714 22:21:36.808704 2163 state_mem.go:35] "Initializing new in-memory state store"
Jul 14 22:21:36.874942 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 14 22:21:36.886368 kubelet[2163]: E0714 22:21:36.884803 2163 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:21:36.886368 kubelet[2163]: E0714 22:21:36.886226 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms"
Jul 14 22:21:36.890455 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 14 22:21:36.894045 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 14 22:21:36.904962 kubelet[2163]: E0714 22:21:36.904909 2163 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 14 22:21:36.905235 kubelet[2163]: I0714 22:21:36.905203 2163 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 14 22:21:36.905306 kubelet[2163]: I0714 22:21:36.905222 2163 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 14 22:21:36.905511 kubelet[2163]: I0714 22:21:36.905478 2163 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 14 22:21:36.906322 kubelet[2163]: E0714 22:21:36.906300 2163 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 14 22:21:36.906386 kubelet[2163]: E0714 22:21:36.906335 2163 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 14 22:21:37.009486 kubelet[2163]: I0714 22:21:37.009428 2163 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 22:21:37.009906 kubelet[2163]: E0714 22:21:37.009844 2163 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost"
Jul 14 22:21:37.073925 systemd[1]: Created slice kubepods-burstable-pod2b9a176823429c41dbded160d7916968.slice - libcontainer container kubepods-burstable-pod2b9a176823429c41dbded160d7916968.slice.
Jul 14 22:21:37.084342 kubelet[2163]: E0714 22:21:37.084308 2163 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 22:21:37.085415 kubelet[2163]: I0714 22:21:37.085381 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:21:37.085503 kubelet[2163]: I0714 22:21:37.085475 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:21:37.085532 kubelet[2163]: I0714 22:21:37.085512 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:21:37.085565 kubelet[2163]: I0714 22:21:37.085538 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:21:37.085592 kubelet[2163]: I0714 22:21:37.085570 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:21:37.085642 kubelet[2163]: I0714 22:21:37.085615 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b9a176823429c41dbded160d7916968-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2b9a176823429c41dbded160d7916968\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 22:21:37.085671 kubelet[2163]: I0714 22:21:37.085652 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b9a176823429c41dbded160d7916968-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2b9a176823429c41dbded160d7916968\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 22:21:37.085696 kubelet[2163]: I0714 22:21:37.085678 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b9a176823429c41dbded160d7916968-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2b9a176823429c41dbded160d7916968\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 22:21:37.096072 kubelet[2163]: E0714 22:21:37.095987 2163 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523e4af645b02a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:21:36.68080849 +0000 UTC m=+1.387085481,LastTimestamp:2025-07-14 22:21:36.68080849 +0000 UTC m=+1.387085481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 14 22:21:37.144630 systemd[1]: Created slice kubepods-burstable-pod7883c2a360afc5b3c9b064549b9b0c8d.slice - libcontainer container kubepods-burstable-pod7883c2a360afc5b3c9b064549b9b0c8d.slice.
Jul 14 22:21:37.146811 kubelet[2163]: E0714 22:21:37.146781 2163 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 22:21:37.186514 kubelet[2163]: I0714 22:21:37.186404 2163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fad475d3be2e7026903cdccc200d075f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fad475d3be2e7026903cdccc200d075f\") " pod="kube-system/kube-scheduler-localhost"
Jul 14 22:21:37.211820 kubelet[2163]: I0714 22:21:37.211786 2163 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 22:21:37.212214 kubelet[2163]: E0714 22:21:37.212173 2163 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost"
Jul 14 22:21:37.286891 kubelet[2163]: E0714 22:21:37.286838 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms"
Jul 14 22:21:37.347265 systemd[1]: Created slice kubepods-burstable-podfad475d3be2e7026903cdccc200d075f.slice - libcontainer container kubepods-burstable-podfad475d3be2e7026903cdccc200d075f.slice.
Jul 14 22:21:37.348979 kubelet[2163]: E0714 22:21:37.348947 2163 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 22:21:37.349299 kubelet[2163]: E0714 22:21:37.349285 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:37.349917 containerd[1460]: time="2025-07-14T22:21:37.349887161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fad475d3be2e7026903cdccc200d075f,Namespace:kube-system,Attempt:0,}"
Jul 14 22:21:37.385260 kubelet[2163]: E0714 22:21:37.385204 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:37.385811 containerd[1460]: time="2025-07-14T22:21:37.385772087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2b9a176823429c41dbded160d7916968,Namespace:kube-system,Attempt:0,}"
Jul 14 22:21:37.448374 kubelet[2163]: E0714 22:21:37.448232 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:37.448823 containerd[1460]: time="2025-07-14T22:21:37.448778161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7883c2a360afc5b3c9b064549b9b0c8d,Namespace:kube-system,Attempt:0,}"
Jul 14 22:21:37.614014 kubelet[2163]: I0714 22:21:37.613940 2163 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 22:21:37.614455 kubelet[2163]: E0714 22:21:37.614395 2163 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost"
Jul 14 22:21:37.662911 kubelet[2163]: E0714 22:21:37.662861 2163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 14 22:21:37.762931 kubelet[2163]: E0714 22:21:37.762889 2163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 14 22:21:37.878051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754862353.mount: Deactivated successfully.
Jul 14 22:21:37.884996 kubelet[2163]: E0714 22:21:37.884962 2163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 14 22:21:37.885120 containerd[1460]: time="2025-07-14T22:21:37.885057325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:21:37.885961 containerd[1460]: time="2025-07-14T22:21:37.885934017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:21:37.886740 containerd[1460]: time="2025-07-14T22:21:37.886702015Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 14 22:21:37.888234 containerd[1460]: time="2025-07-14T22:21:37.888210098Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:21:37.889161 containerd[1460]: time="2025-07-14T22:21:37.889129390Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jul 14 22:21:37.890173 containerd[1460]: time="2025-07-14T22:21:37.890143040Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 14 22:21:37.891211 containerd[1460]: time="2025-07-14T22:21:37.891184574Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:21:37.896002 containerd[1460]: time="2025-07-14T22:21:37.895967428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:21:37.897000 containerd[1460]: time="2025-07-14T22:21:37.896965228Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 511.115085ms"
Jul 14 22:21:37.898488 containerd[1460]: time="2025-07-14T22:21:37.898451120Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 548.495449ms"
Jul 14 22:21:37.900037 containerd[1460]: time="2025-07-14T22:21:37.899995220Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 451.136907ms"
Jul 14 22:21:38.047915 containerd[1460]: time="2025-07-14T22:21:38.047682916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:21:38.047915 containerd[1460]: time="2025-07-14T22:21:38.047750985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:21:38.047915 containerd[1460]: time="2025-07-14T22:21:38.047765262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:38.047915 containerd[1460]: time="2025-07-14T22:21:38.047845173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:38.049492 containerd[1460]: time="2025-07-14T22:21:38.049415070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:21:38.049492 containerd[1460]: time="2025-07-14T22:21:38.049462801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:21:38.049603 containerd[1460]: time="2025-07-14T22:21:38.049489541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:38.049656 containerd[1460]: time="2025-07-14T22:21:38.049588127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:38.049976 containerd[1460]: time="2025-07-14T22:21:38.049920814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:21:38.049976 containerd[1460]: time="2025-07-14T22:21:38.049954207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:21:38.049976 containerd[1460]: time="2025-07-14T22:21:38.049964676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:38.050083 containerd[1460]: time="2025-07-14T22:21:38.050035149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:38.077847 systemd[1]: Started cri-containerd-1e3f564ee2e8ba291e29cc71e2dc1988c5202b45a8c1ecf499ee68d102d35958.scope - libcontainer container 1e3f564ee2e8ba291e29cc71e2dc1988c5202b45a8c1ecf499ee68d102d35958.
Jul 14 22:21:38.079834 systemd[1]: Started cri-containerd-9636788cf84dec304d238576e108c010c1882d4685baf553ccebb51d5acfb901.scope - libcontainer container 9636788cf84dec304d238576e108c010c1882d4685baf553ccebb51d5acfb901.
Jul 14 22:21:38.081942 systemd[1]: Started cri-containerd-bac70dd3b3434975f0840cab1bf6e7ae159db2f4df705815c944173ad0c67052.scope - libcontainer container bac70dd3b3434975f0840cab1bf6e7ae159db2f4df705815c944173ad0c67052.
Jul 14 22:21:38.087489 kubelet[2163]: E0714 22:21:38.087423 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="1.6s"
Jul 14 22:21:38.096528 kubelet[2163]: E0714 22:21:38.096472 2163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 14 22:21:38.119005 containerd[1460]: time="2025-07-14T22:21:38.118933700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fad475d3be2e7026903cdccc200d075f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e3f564ee2e8ba291e29cc71e2dc1988c5202b45a8c1ecf499ee68d102d35958\""
Jul 14 22:21:38.119933
kubelet[2163]: E0714 22:21:38.119855 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:38.126518 containerd[1460]: time="2025-07-14T22:21:38.126390110Z" level=info msg="CreateContainer within sandbox \"1e3f564ee2e8ba291e29cc71e2dc1988c5202b45a8c1ecf499ee68d102d35958\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 22:21:38.127532 containerd[1460]: time="2025-07-14T22:21:38.127505742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7883c2a360afc5b3c9b064549b9b0c8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bac70dd3b3434975f0840cab1bf6e7ae159db2f4df705815c944173ad0c67052\"" Jul 14 22:21:38.128802 kubelet[2163]: E0714 22:21:38.128773 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:38.132630 containerd[1460]: time="2025-07-14T22:21:38.132580054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2b9a176823429c41dbded160d7916968,Namespace:kube-system,Attempt:0,} returns sandbox id \"9636788cf84dec304d238576e108c010c1882d4685baf553ccebb51d5acfb901\"" Jul 14 22:21:38.133265 kubelet[2163]: E0714 22:21:38.133241 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:38.133522 containerd[1460]: time="2025-07-14T22:21:38.133477494Z" level=info msg="CreateContainer within sandbox \"bac70dd3b3434975f0840cab1bf6e7ae159db2f4df705815c944173ad0c67052\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 22:21:38.140286 containerd[1460]: time="2025-07-14T22:21:38.140136822Z" level=info msg="CreateContainer within sandbox 
\"9636788cf84dec304d238576e108c010c1882d4685baf553ccebb51d5acfb901\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 22:21:38.157675 containerd[1460]: time="2025-07-14T22:21:38.157631947Z" level=info msg="CreateContainer within sandbox \"1e3f564ee2e8ba291e29cc71e2dc1988c5202b45a8c1ecf499ee68d102d35958\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c86b8da97b351cc693b178cb9ba0d6360522e4b3b11c2ff64a8c8770a6067871\"" Jul 14 22:21:38.158628 containerd[1460]: time="2025-07-14T22:21:38.158580054Z" level=info msg="StartContainer for \"c86b8da97b351cc693b178cb9ba0d6360522e4b3b11c2ff64a8c8770a6067871\"" Jul 14 22:21:38.161377 containerd[1460]: time="2025-07-14T22:21:38.161352176Z" level=info msg="CreateContainer within sandbox \"bac70dd3b3434975f0840cab1bf6e7ae159db2f4df705815c944173ad0c67052\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8355112c080f1e52718865e7ff0af5ebd811674ddcd2f11a647821a6f0f553c3\"" Jul 14 22:21:38.162044 containerd[1460]: time="2025-07-14T22:21:38.162014064Z" level=info msg="StartContainer for \"8355112c080f1e52718865e7ff0af5ebd811674ddcd2f11a647821a6f0f553c3\"" Jul 14 22:21:38.170506 containerd[1460]: time="2025-07-14T22:21:38.170453927Z" level=info msg="CreateContainer within sandbox \"9636788cf84dec304d238576e108c010c1882d4685baf553ccebb51d5acfb901\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"647cad91b6a05ca1035551e71a54a4c65c076e66a58e59c89feabeb10e2c4142\"" Jul 14 22:21:38.171055 containerd[1460]: time="2025-07-14T22:21:38.171033509Z" level=info msg="StartContainer for \"647cad91b6a05ca1035551e71a54a4c65c076e66a58e59c89feabeb10e2c4142\"" Jul 14 22:21:38.184778 systemd[1]: Started cri-containerd-c86b8da97b351cc693b178cb9ba0d6360522e4b3b11c2ff64a8c8770a6067871.scope - libcontainer container c86b8da97b351cc693b178cb9ba0d6360522e4b3b11c2ff64a8c8770a6067871. 
Jul 14 22:21:38.188098 systemd[1]: Started cri-containerd-8355112c080f1e52718865e7ff0af5ebd811674ddcd2f11a647821a6f0f553c3.scope - libcontainer container 8355112c080f1e52718865e7ff0af5ebd811674ddcd2f11a647821a6f0f553c3. Jul 14 22:21:38.195245 systemd[1]: Started cri-containerd-647cad91b6a05ca1035551e71a54a4c65c076e66a58e59c89feabeb10e2c4142.scope - libcontainer container 647cad91b6a05ca1035551e71a54a4c65c076e66a58e59c89feabeb10e2c4142. Jul 14 22:21:38.229085 containerd[1460]: time="2025-07-14T22:21:38.229035299Z" level=info msg="StartContainer for \"c86b8da97b351cc693b178cb9ba0d6360522e4b3b11c2ff64a8c8770a6067871\" returns successfully" Jul 14 22:21:38.245314 containerd[1460]: time="2025-07-14T22:21:38.245016440Z" level=info msg="StartContainer for \"647cad91b6a05ca1035551e71a54a4c65c076e66a58e59c89feabeb10e2c4142\" returns successfully" Jul 14 22:21:38.245314 containerd[1460]: time="2025-07-14T22:21:38.245086573Z" level=info msg="StartContainer for \"8355112c080f1e52718865e7ff0af5ebd811674ddcd2f11a647821a6f0f553c3\" returns successfully" Jul 14 22:21:38.416737 kubelet[2163]: I0714 22:21:38.416064 2163 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:21:38.713923 kubelet[2163]: E0714 22:21:38.713821 2163 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:38.714755 kubelet[2163]: E0714 22:21:38.714662 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:38.716470 kubelet[2163]: E0714 22:21:38.716341 2163 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:38.716991 kubelet[2163]: E0714 22:21:38.716442 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:38.720318 kubelet[2163]: E0714 22:21:38.720135 2163 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:38.720318 kubelet[2163]: E0714 22:21:38.720224 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:39.310389 kubelet[2163]: I0714 22:21:39.310344 2163 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 22:21:39.310651 kubelet[2163]: E0714 22:21:39.310403 2163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 22:21:39.328466 kubelet[2163]: E0714 22:21:39.328429 2163 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:39.429138 kubelet[2163]: E0714 22:21:39.429077 2163 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:39.529812 kubelet[2163]: E0714 22:21:39.529771 2163 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:39.630729 kubelet[2163]: E0714 22:21:39.630589 2163 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:39.721663 kubelet[2163]: E0714 22:21:39.721633 2163 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:39.722091 kubelet[2163]: E0714 22:21:39.721818 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:39.722091 kubelet[2163]: E0714 22:21:39.721836 2163 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:39.722091 kubelet[2163]: E0714 22:21:39.722017 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:39.731085 kubelet[2163]: E0714 22:21:39.731058 2163 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:39.831883 kubelet[2163]: E0714 22:21:39.831823 2163 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:39.932752 kubelet[2163]: E0714 22:21:39.932582 2163 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:40.033350 kubelet[2163]: E0714 22:21:40.033289 2163 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:40.134060 kubelet[2163]: E0714 22:21:40.134008 2163 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:40.284984 kubelet[2163]: I0714 22:21:40.284940 2163 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:40.292170 kubelet[2163]: I0714 22:21:40.292015 2163 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:40.298799 kubelet[2163]: I0714 22:21:40.298757 2163 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 22:21:40.538265 kubelet[2163]: I0714 22:21:40.538156 2163 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:40.543466 kubelet[2163]: E0714 22:21:40.543093 2163 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:40.543466 kubelet[2163]: E0714 22:21:40.543298 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:40.677533 kubelet[2163]: I0714 22:21:40.677492 2163 apiserver.go:52] "Watching apiserver" Jul 14 22:21:40.685488 kubelet[2163]: I0714 22:21:40.685455 2163 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 22:21:40.722085 kubelet[2163]: E0714 22:21:40.722049 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:40.722491 kubelet[2163]: E0714 22:21:40.722186 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:40.722491 kubelet[2163]: E0714 22:21:40.722186 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:41.228241 systemd[1]: Reloading requested from client PID 2456 ('systemctl') (unit session-7.scope)... Jul 14 22:21:41.228256 systemd[1]: Reloading... Jul 14 22:21:41.304588 zram_generator::config[2496]: No configuration found. Jul 14 22:21:41.413529 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
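
The systemd entry above (`docker.socket:6: ListenStream= references a path below legacy directory /var/run/`) is a compatibility warning: `/var/run` is a legacy symlink to `/run`, and systemd rewrites the path at load time. The permanent fix is the edit it suggests; a hypothetical excerpt of the unit file (the surrounding directives are assumptions, not taken from this node):

```ini
# Hypothetical excerpt of /usr/lib/systemd/system/docker.socket.
[Socket]
# Before — triggers the warning logged above:
#ListenStream=/var/run/docker.sock
# After — /var/run is a symlink to /run, so this is equivalent:
ListenStream=/run/docker.sock
```

On an image-based distro like Flatcar, `/usr` is read-only, so the override would go in `/etc/systemd/system/` rather than editing the shipped unit in place.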
Jul 14 22:21:41.508727 systemd[1]: Reloading finished in 280 ms. Jul 14 22:21:41.550299 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:21:41.572204 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:21:41.572583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:21:41.580876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:21:41.745197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:21:41.751144 (kubelet)[2540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:21:41.797133 kubelet[2540]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:21:41.797133 kubelet[2540]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 22:21:41.797133 kubelet[2540]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
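
The three deprecation warnings above say the flags should move into the kubelet config file. A hypothetical `/etc/kubernetes/kubelet-config.yaml` sketch of that migration — the field names follow the `kubelet.config.k8s.io/v1beta1` API as documented, but verify them against this kubelet version (v1.33.0 per the entries below), and the socket path is an assumption:

```yaml
# Hypothetical config-file equivalents for the deprecated flags
# warned about in the log entries above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint:
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces --volume-plugin-dir:
volumePluginDir: /var/lib/kubelet/volumeplugins
# --pod-infra-container-image has no config-file field; per the log,
# it is slated for removal in 1.35 once the image garbage collector
# reads the sandbox image from CRI.
```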
Jul 14 22:21:41.797465 kubelet[2540]: I0714 22:21:41.797137 2540 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:21:41.803636 kubelet[2540]: I0714 22:21:41.803602 2540 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 14 22:21:41.803636 kubelet[2540]: I0714 22:21:41.803633 2540 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:21:41.803838 kubelet[2540]: I0714 22:21:41.803820 2540 server.go:956] "Client rotation is on, will bootstrap in background" Jul 14 22:21:41.804943 kubelet[2540]: I0714 22:21:41.804923 2540 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 14 22:21:41.807007 kubelet[2540]: I0714 22:21:41.806985 2540 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:21:41.811496 kubelet[2540]: E0714 22:21:41.811455 2540 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:21:41.811496 kubelet[2540]: I0714 22:21:41.811490 2540 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:21:41.816315 kubelet[2540]: I0714 22:21:41.816295 2540 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:21:41.816533 kubelet[2540]: I0714 22:21:41.816499 2540 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:21:41.816715 kubelet[2540]: I0714 22:21:41.816529 2540 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 22:21:41.816798 kubelet[2540]: I0714 22:21:41.816719 2540 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:21:41.816798 
kubelet[2540]: I0714 22:21:41.816728 2540 container_manager_linux.go:303] "Creating device plugin manager" Jul 14 22:21:41.817603 kubelet[2540]: I0714 22:21:41.817589 2540 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:21:41.817789 kubelet[2540]: I0714 22:21:41.817776 2540 kubelet.go:480] "Attempting to sync node with API server" Jul 14 22:21:41.817834 kubelet[2540]: I0714 22:21:41.817790 2540 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:21:41.817834 kubelet[2540]: I0714 22:21:41.817812 2540 kubelet.go:386] "Adding apiserver pod source" Jul 14 22:21:41.817834 kubelet[2540]: I0714 22:21:41.817825 2540 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:21:41.819081 kubelet[2540]: I0714 22:21:41.818872 2540 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:21:41.820108 kubelet[2540]: I0714 22:21:41.820093 2540 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 14 22:21:41.822667 kubelet[2540]: I0714 22:21:41.822652 2540 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 22:21:41.824436 kubelet[2540]: I0714 22:21:41.822691 2540 server.go:1289] "Started kubelet" Jul 14 22:21:41.824436 kubelet[2540]: I0714 22:21:41.823141 2540 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:21:41.824436 kubelet[2540]: I0714 22:21:41.823379 2540 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:21:41.824436 kubelet[2540]: I0714 22:21:41.823417 2540 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:21:41.824436 kubelet[2540]: I0714 22:21:41.823607 2540 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:21:41.824436 
kubelet[2540]: I0714 22:21:41.824118 2540 server.go:317] "Adding debug handlers to kubelet server" Jul 14 22:21:41.825140 kubelet[2540]: I0714 22:21:41.824563 2540 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:21:41.826312 kubelet[2540]: E0714 22:21:41.825223 2540 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:41.826312 kubelet[2540]: I0714 22:21:41.825280 2540 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 22:21:41.826312 kubelet[2540]: I0714 22:21:41.825468 2540 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 22:21:41.826312 kubelet[2540]: I0714 22:21:41.825614 2540 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:21:41.838956 kubelet[2540]: I0714 22:21:41.838918 2540 factory.go:223] Registration of the containerd container factory successfully Jul 14 22:21:41.838956 kubelet[2540]: I0714 22:21:41.838945 2540 factory.go:223] Registration of the systemd container factory successfully Jul 14 22:21:41.839126 kubelet[2540]: I0714 22:21:41.839028 2540 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:21:41.839491 kubelet[2540]: E0714 22:21:41.839464 2540 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:21:41.846341 kubelet[2540]: I0714 22:21:41.846308 2540 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 14 22:21:41.847645 kubelet[2540]: I0714 22:21:41.847624 2540 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 14 22:21:41.847645 kubelet[2540]: I0714 22:21:41.847643 2540 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 14 22:21:41.847701 kubelet[2540]: I0714 22:21:41.847661 2540 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 14 22:21:41.847701 kubelet[2540]: I0714 22:21:41.847669 2540 kubelet.go:2436] "Starting kubelet main sync loop" Jul 14 22:21:41.847739 kubelet[2540]: E0714 22:21:41.847708 2540 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:21:41.870435 kubelet[2540]: I0714 22:21:41.870394 2540 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 22:21:41.870435 kubelet[2540]: I0714 22:21:41.870413 2540 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 22:21:41.870435 kubelet[2540]: I0714 22:21:41.870434 2540 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:21:41.870659 kubelet[2540]: I0714 22:21:41.870643 2540 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 22:21:41.870686 kubelet[2540]: I0714 22:21:41.870657 2540 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 22:21:41.870686 kubelet[2540]: I0714 22:21:41.870678 2540 policy_none.go:49] "None policy: Start" Jul 14 22:21:41.870738 kubelet[2540]: I0714 22:21:41.870688 2540 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 22:21:41.870738 kubelet[2540]: I0714 22:21:41.870701 2540 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:21:41.870815 kubelet[2540]: I0714 22:21:41.870801 2540 state_mem.go:75] "Updated machine memory state" Jul 14 22:21:41.874905 kubelet[2540]: E0714 22:21:41.874782 2540 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 14 22:21:41.874984 kubelet[2540]: I0714 
22:21:41.874969 2540 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:21:41.875022 kubelet[2540]: I0714 22:21:41.874986 2540 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:21:41.875244 kubelet[2540]: I0714 22:21:41.875233 2540 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:21:41.876251 kubelet[2540]: E0714 22:21:41.876227 2540 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 14 22:21:41.948638 kubelet[2540]: I0714 22:21:41.948594 2540 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:41.948904 kubelet[2540]: I0714 22:21:41.948671 2540 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 22:21:41.949010 kubelet[2540]: I0714 22:21:41.948993 2540 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:41.954112 kubelet[2540]: E0714 22:21:41.954048 2540 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 22:21:41.954218 kubelet[2540]: E0714 22:21:41.954205 2540 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:41.954263 kubelet[2540]: E0714 22:21:41.954247 2540 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:41.979412 kubelet[2540]: I0714 22:21:41.979376 2540 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:21:41.985449 kubelet[2540]: I0714 22:21:41.985427 2540 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Jul 14 22:21:41.985518 kubelet[2540]: I0714 22:21:41.985493 2540 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 22:21:42.126817 kubelet[2540]: I0714 22:21:42.126698 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b9a176823429c41dbded160d7916968-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2b9a176823429c41dbded160d7916968\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:42.126817 kubelet[2540]: I0714 22:21:42.126726 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:42.126817 kubelet[2540]: I0714 22:21:42.126742 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:42.126817 kubelet[2540]: I0714 22:21:42.126756 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:42.126817 kubelet[2540]: I0714 22:21:42.126773 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:42.127013 kubelet[2540]: I0714 22:21:42.126804 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:42.127013 kubelet[2540]: I0714 22:21:42.126840 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fad475d3be2e7026903cdccc200d075f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fad475d3be2e7026903cdccc200d075f\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:21:42.127013 kubelet[2540]: I0714 22:21:42.126855 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b9a176823429c41dbded160d7916968-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2b9a176823429c41dbded160d7916968\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:42.127013 kubelet[2540]: I0714 22:21:42.126872 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b9a176823429c41dbded160d7916968-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2b9a176823429c41dbded160d7916968\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:42.233540 sudo[2583]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 14 22:21:42.233925 sudo[2583]: pam_unix(sudo:session): session opened for user 
root(uid=0) by core(uid=0)
Jul 14 22:21:42.254658 kubelet[2540]: E0714 22:21:42.254619 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:42.254658 kubelet[2540]: E0714 22:21:42.254659 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:42.254833 kubelet[2540]: E0714 22:21:42.254703 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:42.693302 sudo[2583]: pam_unix(sudo:session): session closed for user root
Jul 14 22:21:42.818438 kubelet[2540]: I0714 22:21:42.818389 2540 apiserver.go:52] "Watching apiserver"
Jul 14 22:21:42.825711 kubelet[2540]: I0714 22:21:42.825685 2540 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 14 22:21:42.856388 kubelet[2540]: I0714 22:21:42.856253 2540 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 14 22:21:42.856697 kubelet[2540]: I0714 22:21:42.856667 2540 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 14 22:21:42.856858 kubelet[2540]: I0714 22:21:42.856840 2540 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:21:42.863737 kubelet[2540]: E0714 22:21:42.863685 2540 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 14 22:21:42.863910 kubelet[2540]: E0714 22:21:42.863871 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:42.865340 kubelet[2540]: E0714 22:21:42.865010 2540 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:21:42.865340 kubelet[2540]: E0714 22:21:42.865216 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:42.865507 kubelet[2540]: E0714 22:21:42.865327 2540 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 14 22:21:42.865778 kubelet[2540]: E0714 22:21:42.865734 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:42.875725 kubelet[2540]: I0714 22:21:42.875439 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.875397424 podStartE2EDuration="2.875397424s" podCreationTimestamp="2025-07-14 22:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:21:42.875338693 +0000 UTC m=+1.119387091" watchObservedRunningTime="2025-07-14 22:21:42.875397424 +0000 UTC m=+1.119445812"
Jul 14 22:21:42.904704 kubelet[2540]: I0714 22:21:42.904649 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.904632303 podStartE2EDuration="2.904632303s" podCreationTimestamp="2025-07-14 22:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:21:42.904482551 +0000 UTC
m=+1.148530939" watchObservedRunningTime="2025-07-14 22:21:42.904632303 +0000 UTC m=+1.148680691"
Jul 14 22:21:42.912358 kubelet[2540]: I0714 22:21:42.912271 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.912252577 podStartE2EDuration="2.912252577s" podCreationTimestamp="2025-07-14 22:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:21:42.91219072 +0000 UTC m=+1.156239108" watchObservedRunningTime="2025-07-14 22:21:42.912252577 +0000 UTC m=+1.156300965"
Jul 14 22:21:43.857347 kubelet[2540]: E0714 22:21:43.857307 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:43.857796 kubelet[2540]: E0714 22:21:43.857525 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:43.857936 kubelet[2540]: E0714 22:21:43.857910 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:44.514727 sudo[1640]: pam_unix(sudo:session): session closed for user root
Jul 14 22:21:44.516526 sshd[1637]: pam_unix(sshd:session): session closed for user core
Jul 14 22:21:44.520736 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:37280.service: Deactivated successfully.
Jul 14 22:21:44.522631 systemd[1]: session-7.scope: Deactivated successfully.
Jul 14 22:21:44.522814 systemd[1]: session-7.scope: Consumed 5.124s CPU time, 164.6M memory peak, 0B memory swap peak.
Jul 14 22:21:44.523309 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit.
Jul 14 22:21:44.524145 systemd-logind[1439]: Removed session 7.
Jul 14 22:21:44.858395 kubelet[2540]: E0714 22:21:44.858282 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:47.567460 kubelet[2540]: E0714 22:21:47.567421 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:48.936368 kubelet[2540]: I0714 22:21:48.936318 2540 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 14 22:21:48.936854 containerd[1460]: time="2025-07-14T22:21:48.936708201Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 14 22:21:48.937100 kubelet[2540]: I0714 22:21:48.936875 2540 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 14 22:21:49.484370 kubelet[2540]: E0714 22:21:49.484335 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:49.765394 systemd[1]: Created slice kubepods-besteffort-podae5f8fc1_809c_49d2_abb3_7bbf633f370e.slice - libcontainer container kubepods-besteffort-podae5f8fc1_809c_49d2_abb3_7bbf633f370e.slice.
Jul 14 22:21:49.776490 kubelet[2540]: I0714 22:21:49.776331 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae5f8fc1-809c-49d2-abb3-7bbf633f370e-kube-proxy\") pod \"kube-proxy-pjngd\" (UID: \"ae5f8fc1-809c-49d2-abb3-7bbf633f370e\") " pod="kube-system/kube-proxy-pjngd"
Jul 14 22:21:49.776490 kubelet[2540]: I0714 22:21:49.776483 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc08ba04-3a42-4245-94c4-59ea976d1374-cilium-config-path\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.776685 kubelet[2540]: I0714 22:21:49.776591 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-host-proc-sys-net\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.776685 kubelet[2540]: I0714 22:21:49.776634 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae5f8fc1-809c-49d2-abb3-7bbf633f370e-lib-modules\") pod \"kube-proxy-pjngd\" (UID: \"ae5f8fc1-809c-49d2-abb3-7bbf633f370e\") " pod="kube-system/kube-proxy-pjngd"
Jul 14 22:21:49.776779 kubelet[2540]: I0714 22:21:49.776753 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-cilium-cgroup\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.776779 kubelet[2540]: I0714 22:21:49.776777 2540 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-host-proc-sys-kernel\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.776832 kubelet[2540]: I0714 22:21:49.776792 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc08ba04-3a42-4245-94c4-59ea976d1374-hubble-tls\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.777475 kubelet[2540]: I0714 22:21:49.776916 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drdm6\" (UniqueName: \"kubernetes.io/projected/ae5f8fc1-809c-49d2-abb3-7bbf633f370e-kube-api-access-drdm6\") pod \"kube-proxy-pjngd\" (UID: \"ae5f8fc1-809c-49d2-abb3-7bbf633f370e\") " pod="kube-system/kube-proxy-pjngd"
Jul 14 22:21:49.777475 kubelet[2540]: I0714 22:21:49.776950 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-cilium-run\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.777475 kubelet[2540]: I0714 22:21:49.776970 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-bpf-maps\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.777475 kubelet[2540]: I0714 22:21:49.777133 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName:
\"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-cni-path\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.777475 kubelet[2540]: I0714 22:21:49.777154 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f5xk\" (UniqueName: \"kubernetes.io/projected/dc08ba04-3a42-4245-94c4-59ea976d1374-kube-api-access-7f5xk\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.777475 kubelet[2540]: I0714 22:21:49.777224 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae5f8fc1-809c-49d2-abb3-7bbf633f370e-xtables-lock\") pod \"kube-proxy-pjngd\" (UID: \"ae5f8fc1-809c-49d2-abb3-7bbf633f370e\") " pod="kube-system/kube-proxy-pjngd"
Jul 14 22:21:49.777649 kubelet[2540]: I0714 22:21:49.777248 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-hostproc\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.777649 kubelet[2540]: I0714 22:21:49.777394 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-etc-cni-netd\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.777649 kubelet[2540]: I0714 22:21:49.777415 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-lib-modules\") pod \"cilium-z6xbz\" (UID:
\"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.777649 kubelet[2540]: I0714 22:21:49.777486 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-xtables-lock\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.778237 kubelet[2540]: I0714 22:21:49.777518 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc08ba04-3a42-4245-94c4-59ea976d1374-clustermesh-secrets\") pod \"cilium-z6xbz\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") " pod="kube-system/cilium-z6xbz"
Jul 14 22:21:49.786353 systemd[1]: Created slice kubepods-burstable-poddc08ba04_3a42_4245_94c4_59ea976d1374.slice - libcontainer container kubepods-burstable-poddc08ba04_3a42_4245_94c4_59ea976d1374.slice.
Jul 14 22:21:49.866465 kubelet[2540]: E0714 22:21:49.866398 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:49.913810 systemd[1]: Created slice kubepods-besteffort-pod956c140b_924e_4b61_8dca_576bc309c767.slice - libcontainer container kubepods-besteffort-pod956c140b_924e_4b61_8dca_576bc309c767.slice.
Jul 14 22:21:49.980595 kubelet[2540]: I0714 22:21:49.980535 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z57fp\" (UniqueName: \"kubernetes.io/projected/956c140b-924e-4b61-8dca-576bc309c767-kube-api-access-z57fp\") pod \"cilium-operator-6c4d7847fc-rbxww\" (UID: \"956c140b-924e-4b61-8dca-576bc309c767\") " pod="kube-system/cilium-operator-6c4d7847fc-rbxww"
Jul 14 22:21:49.980595 kubelet[2540]: I0714 22:21:49.980600 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/956c140b-924e-4b61-8dca-576bc309c767-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rbxww\" (UID: \"956c140b-924e-4b61-8dca-576bc309c767\") " pod="kube-system/cilium-operator-6c4d7847fc-rbxww"
Jul 14 22:21:50.081988 kubelet[2540]: E0714 22:21:50.081837 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:50.082592 containerd[1460]: time="2025-07-14T22:21:50.082504438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjngd,Uid:ae5f8fc1-809c-49d2-abb3-7bbf633f370e,Namespace:kube-system,Attempt:0,}"
Jul 14 22:21:50.090295 kubelet[2540]: E0714 22:21:50.090249 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:50.090998 containerd[1460]: time="2025-07-14T22:21:50.090891774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z6xbz,Uid:dc08ba04-3a42-4245-94c4-59ea976d1374,Namespace:kube-system,Attempt:0,}"
Jul 14 22:21:50.122166 containerd[1460]: time="2025-07-14T22:21:50.121993615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:21:50.122494 containerd[1460]: time="2025-07-14T22:21:50.122128447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:21:50.122494 containerd[1460]: time="2025-07-14T22:21:50.122359742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:50.122929 containerd[1460]: time="2025-07-14T22:21:50.122757280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:50.122929 containerd[1460]: time="2025-07-14T22:21:50.122478516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:21:50.122929 containerd[1460]: time="2025-07-14T22:21:50.122586479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:21:50.122929 containerd[1460]: time="2025-07-14T22:21:50.122600004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:50.122929 containerd[1460]: time="2025-07-14T22:21:50.122686737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:50.159802 systemd[1]: Started cri-containerd-9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710.scope - libcontainer container 9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710.
Jul 14 22:21:50.162569 systemd[1]: Started cri-containerd-f05e800d2424f9c11c3fc71ac57fda3c32673256e9963cc7be2eadaa679a8d30.scope - libcontainer container f05e800d2424f9c11c3fc71ac57fda3c32673256e9963cc7be2eadaa679a8d30.
Jul 14 22:21:50.189381 containerd[1460]: time="2025-07-14T22:21:50.189188366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjngd,Uid:ae5f8fc1-809c-49d2-abb3-7bbf633f370e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f05e800d2424f9c11c3fc71ac57fda3c32673256e9963cc7be2eadaa679a8d30\""
Jul 14 22:21:50.189381 containerd[1460]: time="2025-07-14T22:21:50.189332157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z6xbz,Uid:dc08ba04-3a42-4245-94c4-59ea976d1374,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\""
Jul 14 22:21:50.190071 kubelet[2540]: E0714 22:21:50.190046 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:50.190270 kubelet[2540]: E0714 22:21:50.190254 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:50.191492 containerd[1460]: time="2025-07-14T22:21:50.191439287Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 14 22:21:50.196190 containerd[1460]: time="2025-07-14T22:21:50.196054923Z" level=info msg="CreateContainer within sandbox \"f05e800d2424f9c11c3fc71ac57fda3c32673256e9963cc7be2eadaa679a8d30\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 14 22:21:50.215450 containerd[1460]: time="2025-07-14T22:21:50.215386930Z" level=info msg="CreateContainer within sandbox \"f05e800d2424f9c11c3fc71ac57fda3c32673256e9963cc7be2eadaa679a8d30\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d600e4fc43935f9b4fc448000879cb79a1355377fdfff25ed67e82eb83fd736e\""
Jul 14 22:21:50.215939 containerd[1460]: time="2025-07-14T22:21:50.215901497Z"
level=info msg="StartContainer for \"d600e4fc43935f9b4fc448000879cb79a1355377fdfff25ed67e82eb83fd736e\""
Jul 14 22:21:50.218566 kubelet[2540]: E0714 22:21:50.217151 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:50.218639 containerd[1460]: time="2025-07-14T22:21:50.217696292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rbxww,Uid:956c140b-924e-4b61-8dca-576bc309c767,Namespace:kube-system,Attempt:0,}"
Jul 14 22:21:50.244236 containerd[1460]: time="2025-07-14T22:21:50.243821978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:21:50.244236 containerd[1460]: time="2025-07-14T22:21:50.243935402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:21:50.244236 containerd[1460]: time="2025-07-14T22:21:50.243954237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:50.244236 containerd[1460]: time="2025-07-14T22:21:50.244089110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:50.253745 systemd[1]: Started cri-containerd-d600e4fc43935f9b4fc448000879cb79a1355377fdfff25ed67e82eb83fd736e.scope - libcontainer container d600e4fc43935f9b4fc448000879cb79a1355377fdfff25ed67e82eb83fd736e.
Jul 14 22:21:50.265838 systemd[1]: Started cri-containerd-e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3.scope - libcontainer container e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3.
Jul 14 22:21:50.293085 containerd[1460]: time="2025-07-14T22:21:50.293030463Z" level=info msg="StartContainer for \"d600e4fc43935f9b4fc448000879cb79a1355377fdfff25ed67e82eb83fd736e\" returns successfully"
Jul 14 22:21:50.306961 containerd[1460]: time="2025-07-14T22:21:50.306913508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rbxww,Uid:956c140b-924e-4b61-8dca-576bc309c767,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\""
Jul 14 22:21:50.308088 kubelet[2540]: E0714 22:21:50.308029 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:50.871007 kubelet[2540]: E0714 22:21:50.870932 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:50.879840 kubelet[2540]: I0714 22:21:50.879774 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pjngd" podStartSLOduration=1.879755307 podStartE2EDuration="1.879755307s" podCreationTimestamp="2025-07-14 22:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:21:50.879252111 +0000 UTC m=+9.123300510" watchObservedRunningTime="2025-07-14 22:21:50.879755307 +0000 UTC m=+9.123803695"
Jul 14 22:21:54.677963 kubelet[2540]: E0714 22:21:54.677932 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:57.163675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1847309283.mount: Deactivated successfully.
Jul 14 22:21:57.571842 kubelet[2540]: E0714 22:21:57.571788 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:00.449269 containerd[1460]: time="2025-07-14T22:22:00.449179236Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:00.450331 containerd[1460]: time="2025-07-14T22:22:00.450294830Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jul 14 22:22:00.452173 containerd[1460]: time="2025-07-14T22:22:00.452100622Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:00.453662 containerd[1460]: time="2025-07-14T22:22:00.453627368Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.262127888s"
Jul 14 22:22:00.453721 containerd[1460]: time="2025-07-14T22:22:00.453666281Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 14 22:22:00.458246 containerd[1460]: time="2025-07-14T22:22:00.458211055Z" level=info msg="PullImage
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 14 22:22:00.469788 containerd[1460]: time="2025-07-14T22:22:00.469741056Z" level=info msg="CreateContainer within sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 14 22:22:00.486534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1001072481.mount: Deactivated successfully.
Jul 14 22:22:00.489926 containerd[1460]: time="2025-07-14T22:22:00.489886325Z" level=info msg="CreateContainer within sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32\""
Jul 14 22:22:00.490627 containerd[1460]: time="2025-07-14T22:22:00.490602990Z" level=info msg="StartContainer for \"6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32\""
Jul 14 22:22:00.518707 systemd[1]: Started cri-containerd-6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32.scope - libcontainer container 6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32.
Jul 14 22:22:00.544037 containerd[1460]: time="2025-07-14T22:22:00.543984249Z" level=info msg="StartContainer for \"6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32\" returns successfully"
Jul 14 22:22:00.563886 systemd[1]: cri-containerd-6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32.scope: Deactivated successfully.
Jul 14 22:22:00.876406 containerd[1460]: time="2025-07-14T22:22:00.872288049Z" level=info msg="shim disconnected" id=6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32 namespace=k8s.io
Jul 14 22:22:00.876406 containerd[1460]: time="2025-07-14T22:22:00.876388770Z" level=warning msg="cleaning up after shim disconnected" id=6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32 namespace=k8s.io
Jul 14 22:22:00.876406 containerd[1460]: time="2025-07-14T22:22:00.876412114Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:22:01.129362 kubelet[2540]: E0714 22:22:01.129220 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:01.135341 containerd[1460]: time="2025-07-14T22:22:01.135286554Z" level=info msg="CreateContainer within sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 14 22:22:01.151420 containerd[1460]: time="2025-07-14T22:22:01.151346640Z" level=info msg="CreateContainer within sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0\""
Jul 14 22:22:01.152171 containerd[1460]: time="2025-07-14T22:22:01.152087601Z" level=info msg="StartContainer for \"058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0\""
Jul 14 22:22:01.189694 systemd[1]: Started cri-containerd-058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0.scope - libcontainer container 058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0.
Jul 14 22:22:01.218107 containerd[1460]: time="2025-07-14T22:22:01.218060204Z" level=info msg="StartContainer for \"058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0\" returns successfully"
Jul 14 22:22:01.231961 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 22:22:01.232330 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:22:01.232424 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:22:01.238970 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:22:01.239226 systemd[1]: cri-containerd-058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0.scope: Deactivated successfully.
Jul 14 22:22:01.264348 containerd[1460]: time="2025-07-14T22:22:01.264270531Z" level=info msg="shim disconnected" id=058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0 namespace=k8s.io
Jul 14 22:22:01.264348 containerd[1460]: time="2025-07-14T22:22:01.264330584Z" level=warning msg="cleaning up after shim disconnected" id=058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0 namespace=k8s.io
Jul 14 22:22:01.264348 containerd[1460]: time="2025-07-14T22:22:01.264338699Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:22:01.265780 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:22:01.484792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32-rootfs.mount: Deactivated successfully.
Jul 14 22:22:01.852792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2059929347.mount: Deactivated successfully.
Jul 14 22:22:02.128825 kubelet[2540]: E0714 22:22:02.128602 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:02.413181 containerd[1460]: time="2025-07-14T22:22:02.413043050Z" level=info msg="CreateContainer within sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 22:22:02.665778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123101974.mount: Deactivated successfully.
Jul 14 22:22:02.669534 containerd[1460]: time="2025-07-14T22:22:02.669490627Z" level=info msg="CreateContainer within sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61\""
Jul 14 22:22:02.670060 containerd[1460]: time="2025-07-14T22:22:02.670036352Z" level=info msg="StartContainer for \"cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61\""
Jul 14 22:22:02.684473 containerd[1460]: time="2025-07-14T22:22:02.683646986Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:02.685713 containerd[1460]: time="2025-07-14T22:22:02.685673902Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jul 14 22:22:02.687174 containerd[1460]: time="2025-07-14T22:22:02.687143190Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:02.688578 containerd[1460]:
time="2025-07-14T22:22:02.688519434Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.230269566s" Jul 14 22:22:02.688627 containerd[1460]: time="2025-07-14T22:22:02.688582112Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 14 22:22:02.707750 systemd[1]: Started cri-containerd-cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61.scope - libcontainer container cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61. Jul 14 22:22:02.754627 systemd[1]: cri-containerd-cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61.scope: Deactivated successfully. 
Jul 14 22:22:02.911490 containerd[1460]: time="2025-07-14T22:22:02.911436046Z" level=info msg="CreateContainer within sandbox \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 14 22:22:02.913686 containerd[1460]: time="2025-07-14T22:22:02.913636708Z" level=info msg="StartContainer for \"cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61\" returns successfully"
Jul 14 22:22:02.926535 containerd[1460]: time="2025-07-14T22:22:02.926435337Z" level=info msg="CreateContainer within sandbox \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\""
Jul 14 22:22:02.927865 containerd[1460]: time="2025-07-14T22:22:02.927827281Z" level=info msg="StartContainer for \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\""
Jul 14 22:22:02.942296 containerd[1460]: time="2025-07-14T22:22:02.942240693Z" level=info msg="shim disconnected" id=cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61 namespace=k8s.io
Jul 14 22:22:02.942296 containerd[1460]: time="2025-07-14T22:22:02.942288172Z" level=warning msg="cleaning up after shim disconnected" id=cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61 namespace=k8s.io
Jul 14 22:22:02.942577 containerd[1460]: time="2025-07-14T22:22:02.942305143Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:22:02.957715 systemd[1]: Started cri-containerd-538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca.scope - libcontainer container 538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca.
Jul 14 22:22:02.982077 containerd[1460]: time="2025-07-14T22:22:02.982028705Z" level=info msg="StartContainer for \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\" returns successfully"
Jul 14 22:22:03.131139 kubelet[2540]: E0714 22:22:03.131081 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:03.134027 kubelet[2540]: E0714 22:22:03.133982 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:03.140338 kubelet[2540]: I0714 22:22:03.140181 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rbxww" podStartSLOduration=1.7591973699999999 podStartE2EDuration="14.140167219s" podCreationTimestamp="2025-07-14 22:21:49 +0000 UTC" firstStartedPulling="2025-07-14 22:21:50.308542279 +0000 UTC m=+8.552590667" lastFinishedPulling="2025-07-14 22:22:02.689512128 +0000 UTC m=+20.933560516" observedRunningTime="2025-07-14 22:22:03.139873688 +0000 UTC m=+21.383922076" watchObservedRunningTime="2025-07-14 22:22:03.140167219 +0000 UTC m=+21.384215617"
Jul 14 22:22:03.140976 containerd[1460]: time="2025-07-14T22:22:03.140934108Z" level=info msg="CreateContainer within sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 14 22:22:03.158094 containerd[1460]: time="2025-07-14T22:22:03.158038401Z" level=info msg="CreateContainer within sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777\""
Jul 14 22:22:03.158658 containerd[1460]: time="2025-07-14T22:22:03.158615073Z" level=info msg="StartContainer for \"566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777\""
Jul 14 22:22:03.190699 systemd[1]: Started cri-containerd-566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777.scope - libcontainer container 566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777.
Jul 14 22:22:03.215382 systemd[1]: cri-containerd-566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777.scope: Deactivated successfully.
Jul 14 22:22:03.218139 containerd[1460]: time="2025-07-14T22:22:03.218096174Z" level=info msg="StartContainer for \"566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777\" returns successfully"
Jul 14 22:22:03.244159 containerd[1460]: time="2025-07-14T22:22:03.244091037Z" level=info msg="shim disconnected" id=566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777 namespace=k8s.io
Jul 14 22:22:03.244159 containerd[1460]: time="2025-07-14T22:22:03.244157992Z" level=warning msg="cleaning up after shim disconnected" id=566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777 namespace=k8s.io
Jul 14 22:22:03.244369 containerd[1460]: time="2025-07-14T22:22:03.244167420Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:22:03.663257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61-rootfs.mount: Deactivated successfully.
Jul 14 22:22:04.137797 kubelet[2540]: E0714 22:22:04.137567 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:04.138224 kubelet[2540]: E0714 22:22:04.137805 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:04.151123 containerd[1460]: time="2025-07-14T22:22:04.150602627Z" level=info msg="CreateContainer within sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 14 22:22:04.170858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1356348172.mount: Deactivated successfully.
Jul 14 22:22:04.182931 containerd[1460]: time="2025-07-14T22:22:04.182881975Z" level=info msg="CreateContainer within sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\""
Jul 14 22:22:04.183706 containerd[1460]: time="2025-07-14T22:22:04.183600153Z" level=info msg="StartContainer for \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\""
Jul 14 22:22:04.241853 systemd[1]: Started cri-containerd-43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3.scope - libcontainer container 43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3.
Jul 14 22:22:04.278299 containerd[1460]: time="2025-07-14T22:22:04.278254048Z" level=info msg="StartContainer for \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\" returns successfully"
Jul 14 22:22:04.395686 kubelet[2540]: I0714 22:22:04.394848 2540 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 14 22:22:04.452895 systemd[1]: Created slice kubepods-burstable-pod098c5ec3_7362_4d64_8131_bd59e7a51184.slice - libcontainer container kubepods-burstable-pod098c5ec3_7362_4d64_8131_bd59e7a51184.slice.
Jul 14 22:22:04.459915 systemd[1]: Created slice kubepods-burstable-pod4fdaf4f1_24da_42b5_b4f4_8e7d8a480458.slice - libcontainer container kubepods-burstable-pod4fdaf4f1_24da_42b5_b4f4_8e7d8a480458.slice.
Jul 14 22:22:04.542738 kubelet[2540]: I0714 22:22:04.542681 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/098c5ec3-7362-4d64-8131-bd59e7a51184-config-volume\") pod \"coredns-674b8bbfcf-jxmzs\" (UID: \"098c5ec3-7362-4d64-8131-bd59e7a51184\") " pod="kube-system/coredns-674b8bbfcf-jxmzs"
Jul 14 22:22:04.542738 kubelet[2540]: I0714 22:22:04.542729 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p82lb\" (UniqueName: \"kubernetes.io/projected/4fdaf4f1-24da-42b5-b4f4-8e7d8a480458-kube-api-access-p82lb\") pod \"coredns-674b8bbfcf-jd46g\" (UID: \"4fdaf4f1-24da-42b5-b4f4-8e7d8a480458\") " pod="kube-system/coredns-674b8bbfcf-jd46g"
Jul 14 22:22:04.542738 kubelet[2540]: I0714 22:22:04.542749 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fdaf4f1-24da-42b5-b4f4-8e7d8a480458-config-volume\") pod \"coredns-674b8bbfcf-jd46g\" (UID: \"4fdaf4f1-24da-42b5-b4f4-8e7d8a480458\") " pod="kube-system/coredns-674b8bbfcf-jd46g"
Jul 14 22:22:04.542930 kubelet[2540]: I0714 22:22:04.542766 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7djb5\" (UniqueName: \"kubernetes.io/projected/098c5ec3-7362-4d64-8131-bd59e7a51184-kube-api-access-7djb5\") pod \"coredns-674b8bbfcf-jxmzs\" (UID: \"098c5ec3-7362-4d64-8131-bd59e7a51184\") " pod="kube-system/coredns-674b8bbfcf-jxmzs"
Jul 14 22:22:04.756463 kubelet[2540]: E0714 22:22:04.756431 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:04.757079 containerd[1460]: time="2025-07-14T22:22:04.757048912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jxmzs,Uid:098c5ec3-7362-4d64-8131-bd59e7a51184,Namespace:kube-system,Attempt:0,}"
Jul 14 22:22:04.764688 kubelet[2540]: E0714 22:22:04.764645 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:04.765086 containerd[1460]: time="2025-07-14T22:22:04.765047529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jd46g,Uid:4fdaf4f1-24da-42b5-b4f4-8e7d8a480458,Namespace:kube-system,Attempt:0,}"
Jul 14 22:22:05.142177 kubelet[2540]: E0714 22:22:05.142059 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:06.143957 kubelet[2540]: E0714 22:22:06.143910 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:07.145510 kubelet[2540]: E0714 22:22:07.145457 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:07.730136 systemd-networkd[1393]: cilium_host: Link UP
Jul 14 22:22:07.730313 systemd-networkd[1393]: cilium_net: Link UP
Jul 14 22:22:07.730485 systemd-networkd[1393]: cilium_net: Gained carrier
Jul 14 22:22:07.730673 systemd-networkd[1393]: cilium_host: Gained carrier
Jul 14 22:22:07.754678 systemd-networkd[1393]: cilium_host: Gained IPv6LL
Jul 14 22:22:07.847895 systemd-networkd[1393]: cilium_vxlan: Link UP
Jul 14 22:22:07.847905 systemd-networkd[1393]: cilium_vxlan: Gained carrier
Jul 14 22:22:08.093581 kernel: NET: Registered PF_ALG protocol family
Jul 14 22:22:08.509692 systemd-networkd[1393]: cilium_net: Gained IPv6LL
Jul 14 22:22:08.788689 systemd-networkd[1393]: lxc_health: Link UP
Jul 14 22:22:08.796713 systemd-networkd[1393]: lxc_health: Gained carrier
Jul 14 22:22:09.333072 systemd-networkd[1393]: lxcd780cbd6131a: Link UP
Jul 14 22:22:09.351132 systemd-networkd[1393]: lxc04dd0999d167: Link UP
Jul 14 22:22:09.352767 kernel: eth0: renamed from tmp6750c
Jul 14 22:22:09.358665 kernel: eth0: renamed from tmpf1654
Jul 14 22:22:09.366447 systemd-networkd[1393]: lxcd780cbd6131a: Gained carrier
Jul 14 22:22:09.367150 systemd-networkd[1393]: lxc04dd0999d167: Gained carrier
Jul 14 22:22:09.660760 systemd-networkd[1393]: cilium_vxlan: Gained IPv6LL
Jul 14 22:22:10.092497 kubelet[2540]: E0714 22:22:10.092461 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:10.150900 kubelet[2540]: E0714 22:22:10.150675 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:10.728618 kubelet[2540]: I0714 22:22:10.727695 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z6xbz" podStartSLOduration=11.460759554 podStartE2EDuration="21.727676796s" podCreationTimestamp="2025-07-14 22:21:49 +0000 UTC" firstStartedPulling="2025-07-14 22:21:50.191125638 +0000 UTC m=+8.435174026" lastFinishedPulling="2025-07-14 22:22:00.458042869 +0000 UTC m=+18.702091268" observedRunningTime="2025-07-14 22:22:05.746409882 +0000 UTC m=+23.990458290" watchObservedRunningTime="2025-07-14 22:22:10.727676796 +0000 UTC m=+28.971725194"
Jul 14 22:22:10.748723 systemd-networkd[1393]: lxc_health: Gained IPv6LL
Jul 14 22:22:10.812924 systemd-networkd[1393]: lxcd780cbd6131a: Gained IPv6LL
Jul 14 22:22:11.199715 systemd-networkd[1393]: lxc04dd0999d167: Gained IPv6LL
Jul 14 22:22:12.939289 containerd[1460]: time="2025-07-14T22:22:12.939124536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:22:12.939931 containerd[1460]: time="2025-07-14T22:22:12.939871679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:22:12.939931 containerd[1460]: time="2025-07-14T22:22:12.939893361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:22:12.940110 containerd[1460]: time="2025-07-14T22:22:12.940002012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:22:12.942099 containerd[1460]: time="2025-07-14T22:22:12.941991110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:22:12.942654 containerd[1460]: time="2025-07-14T22:22:12.942108768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:22:12.942654 containerd[1460]: time="2025-07-14T22:22:12.942136943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:22:12.942654 containerd[1460]: time="2025-07-14T22:22:12.942239984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:22:12.960816 systemd[1]: run-containerd-runc-k8s.io-f16545889a0f2c2833fbec2a0a33fb17dd7fea4d282b6f1a50f6b013240c338c-runc.0jKdkG.mount: Deactivated successfully.
Jul 14 22:22:12.980870 systemd[1]: Started cri-containerd-6750cf630d718f9d7ff3988d2a0def0d7ea8fb9d2ca332b0415ef99c81a0313b.scope - libcontainer container 6750cf630d718f9d7ff3988d2a0def0d7ea8fb9d2ca332b0415ef99c81a0313b.
Jul 14 22:22:12.982738 systemd[1]: Started cri-containerd-f16545889a0f2c2833fbec2a0a33fb17dd7fea4d282b6f1a50f6b013240c338c.scope - libcontainer container f16545889a0f2c2833fbec2a0a33fb17dd7fea4d282b6f1a50f6b013240c338c.
Jul 14 22:22:12.995789 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 22:22:12.998035 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 22:22:13.025094 containerd[1460]: time="2025-07-14T22:22:13.024943867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jd46g,Uid:4fdaf4f1-24da-42b5-b4f4-8e7d8a480458,Namespace:kube-system,Attempt:0,} returns sandbox id \"f16545889a0f2c2833fbec2a0a33fb17dd7fea4d282b6f1a50f6b013240c338c\""
Jul 14 22:22:13.025593 containerd[1460]: time="2025-07-14T22:22:13.025488305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jxmzs,Uid:098c5ec3-7362-4d64-8131-bd59e7a51184,Namespace:kube-system,Attempt:0,} returns sandbox id \"6750cf630d718f9d7ff3988d2a0def0d7ea8fb9d2ca332b0415ef99c81a0313b\""
Jul 14 22:22:13.025838 kubelet[2540]: E0714 22:22:13.025697 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:13.026590 kubelet[2540]: E0714 22:22:13.026368 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:13.040787 containerd[1460]: time="2025-07-14T22:22:13.040715262Z" level=info msg="CreateContainer within sandbox \"f16545889a0f2c2833fbec2a0a33fb17dd7fea4d282b6f1a50f6b013240c338c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 22:22:13.044137 containerd[1460]: time="2025-07-14T22:22:13.044105057Z" level=info msg="CreateContainer within sandbox \"6750cf630d718f9d7ff3988d2a0def0d7ea8fb9d2ca332b0415ef99c81a0313b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 22:22:13.061764 containerd[1460]: time="2025-07-14T22:22:13.061706668Z" level=info msg="CreateContainer within sandbox \"f16545889a0f2c2833fbec2a0a33fb17dd7fea4d282b6f1a50f6b013240c338c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f4738ee9b40eea00278f81d178298c260430117e6b7ed93bf24caa78c9ce372\""
Jul 14 22:22:13.062497 containerd[1460]: time="2025-07-14T22:22:13.062469379Z" level=info msg="StartContainer for \"0f4738ee9b40eea00278f81d178298c260430117e6b7ed93bf24caa78c9ce372\""
Jul 14 22:22:13.079507 containerd[1460]: time="2025-07-14T22:22:13.079456656Z" level=info msg="CreateContainer within sandbox \"6750cf630d718f9d7ff3988d2a0def0d7ea8fb9d2ca332b0415ef99c81a0313b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"33b37187c6280fdfdf8882b2c1f1ed6465883fccc00ea5b0df32767ff706ad9b\""
Jul 14 22:22:13.080515 containerd[1460]: time="2025-07-14T22:22:13.080406061Z" level=info msg="StartContainer for \"33b37187c6280fdfdf8882b2c1f1ed6465883fccc00ea5b0df32767ff706ad9b\""
Jul 14 22:22:13.094804 systemd[1]: Started cri-containerd-0f4738ee9b40eea00278f81d178298c260430117e6b7ed93bf24caa78c9ce372.scope - libcontainer container 0f4738ee9b40eea00278f81d178298c260430117e6b7ed93bf24caa78c9ce372.
Jul 14 22:22:13.117798 systemd[1]: Started cri-containerd-33b37187c6280fdfdf8882b2c1f1ed6465883fccc00ea5b0df32767ff706ad9b.scope - libcontainer container 33b37187c6280fdfdf8882b2c1f1ed6465883fccc00ea5b0df32767ff706ad9b.
Jul 14 22:22:13.156824 containerd[1460]: time="2025-07-14T22:22:13.156777531Z" level=info msg="StartContainer for \"33b37187c6280fdfdf8882b2c1f1ed6465883fccc00ea5b0df32767ff706ad9b\" returns successfully"
Jul 14 22:22:13.156824 containerd[1460]: time="2025-07-14T22:22:13.156838118Z" level=info msg="StartContainer for \"0f4738ee9b40eea00278f81d178298c260430117e6b7ed93bf24caa78c9ce372\" returns successfully"
Jul 14 22:22:13.160686 kubelet[2540]: E0714 22:22:13.160458 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:13.171359 kubelet[2540]: I0714 22:22:13.171166 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jd46g" podStartSLOduration=24.171148185 podStartE2EDuration="24.171148185s" podCreationTimestamp="2025-07-14 22:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:22:13.17069296 +0000 UTC m=+31.414741368" watchObservedRunningTime="2025-07-14 22:22:13.171148185 +0000 UTC m=+31.415196573"
Jul 14 22:22:14.162710 kubelet[2540]: E0714 22:22:14.162546 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:14.162710 kubelet[2540]: E0714 22:22:14.162575 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:14.563064 kubelet[2540]: I0714 22:22:14.562862 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jxmzs" podStartSLOduration=25.562845177 podStartE2EDuration="25.562845177s" podCreationTimestamp="2025-07-14 22:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:22:14.562318125 +0000 UTC m=+32.806366513" watchObservedRunningTime="2025-07-14 22:22:14.562845177 +0000 UTC m=+32.806893565"
Jul 14 22:22:15.164000 kubelet[2540]: E0714 22:22:15.163954 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:17.809219 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:42364.service - OpenSSH per-connection server daemon (10.0.0.1:42364).
Jul 14 22:22:17.852450 sshd[3935]: Accepted publickey for core from 10.0.0.1 port 42364 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:17.854222 sshd[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:17.858546 systemd-logind[1439]: New session 8 of user core.
Jul 14 22:22:17.875737 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 14 22:22:18.135042 sshd[3935]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:18.138791 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:42364.service: Deactivated successfully.
Jul 14 22:22:18.140958 systemd[1]: session-8.scope: Deactivated successfully.
Jul 14 22:22:18.141696 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit.
Jul 14 22:22:18.142697 systemd-logind[1439]: Removed session 8.
Jul 14 22:22:23.146592 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:45022.service - OpenSSH per-connection server daemon (10.0.0.1:45022).
Jul 14 22:22:23.186893 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 45022 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:23.188844 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:23.192833 systemd-logind[1439]: New session 9 of user core.
Jul 14 22:22:23.202742 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 14 22:22:23.316247 sshd[3954]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:23.320827 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:45022.service: Deactivated successfully.
Jul 14 22:22:23.322813 systemd[1]: session-9.scope: Deactivated successfully.
Jul 14 22:22:23.323728 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit.
Jul 14 22:22:23.324946 systemd-logind[1439]: Removed session 9.
Jul 14 22:22:24.163857 kubelet[2540]: E0714 22:22:24.163767 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:24.179329 kubelet[2540]: E0714 22:22:24.179300 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:28.333054 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:45026.service - OpenSSH per-connection server daemon (10.0.0.1:45026).
Jul 14 22:22:28.375917 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 45026 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:28.377358 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:28.381739 systemd-logind[1439]: New session 10 of user core.
Jul 14 22:22:28.390690 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 14 22:22:28.506194 sshd[3973]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:28.510434 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:45026.service: Deactivated successfully.
Jul 14 22:22:28.512377 systemd[1]: session-10.scope: Deactivated successfully.
Jul 14 22:22:28.513086 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit.
Jul 14 22:22:28.514190 systemd-logind[1439]: Removed session 10.
Jul 14 22:22:33.523771 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:37340.service - OpenSSH per-connection server daemon (10.0.0.1:37340).
Jul 14 22:22:33.564109 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 37340 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:33.566455 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:33.570859 systemd-logind[1439]: New session 11 of user core.
Jul 14 22:22:33.580827 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 14 22:22:33.681705 sshd[3988]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:33.685306 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:37340.service: Deactivated successfully.
Jul 14 22:22:33.686992 systemd[1]: session-11.scope: Deactivated successfully.
Jul 14 22:22:33.687656 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit.
Jul 14 22:22:33.688480 systemd-logind[1439]: Removed session 11.
Jul 14 22:22:38.699672 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:37354.service - OpenSSH per-connection server daemon (10.0.0.1:37354).
Jul 14 22:22:38.740147 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 37354 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:38.741739 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:38.746302 systemd-logind[1439]: New session 12 of user core.
Jul 14 22:22:38.756719 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 14 22:22:38.863071 sshd[4004]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:38.876732 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:37354.service: Deactivated successfully.
Jul 14 22:22:38.878705 systemd[1]: session-12.scope: Deactivated successfully.
Jul 14 22:22:38.880482 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit.
Jul 14 22:22:38.887978 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:37356.service - OpenSSH per-connection server daemon (10.0.0.1:37356).
Jul 14 22:22:38.889174 systemd-logind[1439]: Removed session 12.
Jul 14 22:22:38.922040 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 37356 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:38.923803 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:38.927949 systemd-logind[1439]: New session 13 of user core.
Jul 14 22:22:38.940698 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 14 22:22:39.084057 sshd[4020]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:39.093797 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:37356.service: Deactivated successfully.
Jul 14 22:22:39.098012 systemd[1]: session-13.scope: Deactivated successfully.
Jul 14 22:22:39.100297 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit.
Jul 14 22:22:39.107008 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:39908.service - OpenSSH per-connection server daemon (10.0.0.1:39908).
Jul 14 22:22:39.108835 systemd-logind[1439]: Removed session 13.
Jul 14 22:22:39.147136 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 39908 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:39.148632 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:39.152507 systemd-logind[1439]: New session 14 of user core.
Jul 14 22:22:39.164680 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 14 22:22:39.282822 sshd[4032]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:39.287151 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:39908.service: Deactivated successfully.
Jul 14 22:22:39.289491 systemd[1]: session-14.scope: Deactivated successfully.
Jul 14 22:22:39.290195 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit.
Jul 14 22:22:39.291083 systemd-logind[1439]: Removed session 14.
Jul 14 22:22:44.294327 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:39912.service - OpenSSH per-connection server daemon (10.0.0.1:39912).
Jul 14 22:22:44.332188 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 39912 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:44.333597 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:44.337473 systemd-logind[1439]: New session 15 of user core.
Jul 14 22:22:44.346693 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 14 22:22:44.453007 sshd[4049]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:44.457380 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:39912.service: Deactivated successfully.
Jul 14 22:22:44.459400 systemd[1]: session-15.scope: Deactivated successfully.
Jul 14 22:22:44.460225 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit.
Jul 14 22:22:44.461212 systemd-logind[1439]: Removed session 15.
Jul 14 22:22:49.473705 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:38996.service - OpenSSH per-connection server daemon (10.0.0.1:38996).
Jul 14 22:22:49.519490 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 38996 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:49.521266 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:49.524826 systemd-logind[1439]: New session 16 of user core.
Jul 14 22:22:49.532876 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 14 22:22:49.642303 sshd[4064]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:49.648634 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:38996.service: Deactivated successfully.
Jul 14 22:22:49.650779 systemd[1]: session-16.scope: Deactivated successfully.
Jul 14 22:22:49.652567 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit.
Jul 14 22:22:49.661959 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:39000.service - OpenSSH per-connection server daemon (10.0.0.1:39000).
Jul 14 22:22:49.663024 systemd-logind[1439]: Removed session 16.
Jul 14 22:22:49.699794 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 39000 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:49.701301 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:49.705191 systemd-logind[1439]: New session 17 of user core.
Jul 14 22:22:49.711709 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 14 22:22:49.885889 sshd[4078]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:49.896380 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:39000.service: Deactivated successfully.
Jul 14 22:22:49.898165 systemd[1]: session-17.scope: Deactivated successfully.
Jul 14 22:22:49.899344 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit.
Jul 14 22:22:49.900699 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:39010.service - OpenSSH per-connection server daemon (10.0.0.1:39010).
Jul 14 22:22:49.901529 systemd-logind[1439]: Removed session 17.
Jul 14 22:22:49.950892 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 39010 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:49.952447 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:49.956178 systemd-logind[1439]: New session 18 of user core.
Jul 14 22:22:49.965680 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 14 22:23:04.848805 kubelet[2540]: E0714 22:23:04.848766    2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:05.583946 sshd[4091]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:05.592589 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:39010.service: Deactivated successfully.
Jul 14 22:23:05.594515 systemd[1]: session-18.scope: Deactivated successfully.
Jul 14 22:23:05.598502 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit.
Jul 14 22:23:05.606235 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:45488.service - OpenSSH per-connection server daemon (10.0.0.1:45488).
Jul 14 22:23:05.607819 systemd-logind[1439]: Removed session 18.
Jul 14 22:23:05.644441 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 45488 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:05.645997 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:05.650016 systemd-logind[1439]: New session 19 of user core.
Jul 14 22:23:05.659669 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 14 22:23:05.849539 kubelet[2540]: E0714 22:23:05.849022    2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:05.903372 sshd[4114]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:05.913314 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:45488.service: Deactivated successfully.
Jul 14 22:23:05.915226 systemd[1]: session-19.scope: Deactivated successfully.
Jul 14 22:23:05.918120 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit.
Jul 14 22:23:05.927908 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:45502.service - OpenSSH per-connection server daemon (10.0.0.1:45502).
Jul 14 22:23:05.928897 systemd-logind[1439]: Removed session 19.
Jul 14 22:23:05.962074 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 45502 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:05.963766 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:05.967801 systemd-logind[1439]: New session 20 of user core.
Jul 14 22:23:05.976697 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 14 22:23:06.081109 sshd[4127]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:06.085085 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:45502.service: Deactivated successfully.
Jul 14 22:23:06.087613 systemd[1]: session-20.scope: Deactivated successfully.
Jul 14 22:23:06.088190 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit.
Jul 14 22:23:06.089103 systemd-logind[1439]: Removed session 20.
Jul 14 22:23:08.848664 kubelet[2540]: E0714 22:23:08.848619    2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:10.848820 kubelet[2540]: E0714 22:23:10.848771    2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:11.098967 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:39136.service - OpenSSH per-connection server daemon (10.0.0.1:39136).
Jul 14 22:23:11.137823 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 39136 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:11.139336 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:11.143277 systemd-logind[1439]: New session 21 of user core.
Jul 14 22:23:11.152693 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 14 22:23:11.257602 sshd[4143]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:11.261609 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:39136.service: Deactivated successfully.
Jul 14 22:23:11.263615 systemd[1]: session-21.scope: Deactivated successfully.
Jul 14 22:23:11.264225 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit.
Jul 14 22:23:11.265065 systemd-logind[1439]: Removed session 21.
Jul 14 22:23:16.272702 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:39146.service - OpenSSH per-connection server daemon (10.0.0.1:39146).
Jul 14 22:23:16.315971 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 39146 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:16.317749 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:16.322586 systemd-logind[1439]: New session 22 of user core.
Jul 14 22:23:16.333799 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 14 22:23:16.446818 sshd[4158]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:16.450867 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:39146.service: Deactivated successfully.
Jul 14 22:23:16.452683 systemd[1]: session-22.scope: Deactivated successfully.
Jul 14 22:23:16.453319 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit.
Jul 14 22:23:16.454202 systemd-logind[1439]: Removed session 22.
Jul 14 22:23:17.848526 kubelet[2540]: E0714 22:23:17.848449    2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:19.848504 kubelet[2540]: E0714 22:23:19.848468    2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:21.458809 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:58428.service - OpenSSH per-connection server daemon (10.0.0.1:58428).
Jul 14 22:23:21.497619 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 58428 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:21.499115 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:21.503052 systemd-logind[1439]: New session 23 of user core.
Jul 14 22:23:21.514697 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 14 22:23:21.622177 sshd[4175]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:21.632490 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:58428.service: Deactivated successfully.
Jul 14 22:23:21.634458 systemd[1]: session-23.scope: Deactivated successfully.
Jul 14 22:23:21.635968 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit.
Jul 14 22:23:21.646854 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:58434.service - OpenSSH per-connection server daemon (10.0.0.1:58434).
Jul 14 22:23:21.647861 systemd-logind[1439]: Removed session 23.
Jul 14 22:23:21.680509 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 58434 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:21.681900 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:21.685635 systemd-logind[1439]: New session 24 of user core.
Jul 14 22:23:21.695681 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 14 22:23:23.016518 containerd[1460]: time="2025-07-14T22:23:23.016474849Z" level=info msg="StopContainer for \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\" with timeout 30 (s)"
Jul 14 22:23:23.017322 containerd[1460]: time="2025-07-14T22:23:23.017216531Z" level=info msg="Stop container \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\" with signal terminated"
Jul 14 22:23:23.044762 systemd[1]: cri-containerd-538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca.scope: Deactivated successfully.
Jul 14 22:23:23.062983 containerd[1460]: time="2025-07-14T22:23:23.062937943Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 14 22:23:23.065461 containerd[1460]: time="2025-07-14T22:23:23.065387202Z" level=info msg="StopContainer for \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\" with timeout 2 (s)"
Jul 14 22:23:23.066105 containerd[1460]: time="2025-07-14T22:23:23.065567151Z" level=info msg="Stop container \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\" with signal terminated"
Jul 14 22:23:23.068231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca-rootfs.mount: Deactivated successfully.
Jul 14 22:23:23.072624 systemd-networkd[1393]: lxc_health: Link DOWN
Jul 14 22:23:23.072633 systemd-networkd[1393]: lxc_health: Lost carrier
Jul 14 22:23:23.075708 containerd[1460]: time="2025-07-14T22:23:23.075646980Z" level=info msg="shim disconnected" id=538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca namespace=k8s.io
Jul 14 22:23:23.075849 containerd[1460]: time="2025-07-14T22:23:23.075724186Z" level=warning msg="cleaning up after shim disconnected" id=538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca namespace=k8s.io
Jul 14 22:23:23.075849 containerd[1460]: time="2025-07-14T22:23:23.075734355Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:23:23.093515 containerd[1460]: time="2025-07-14T22:23:23.093469591Z" level=info msg="StopContainer for \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\" returns successfully"
Jul 14 22:23:23.094164 containerd[1460]: time="2025-07-14T22:23:23.094134296Z" level=info msg="StopPodSandbox for \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\""
Jul 14 22:23:23.094266 containerd[1460]: time="2025-07-14T22:23:23.094173320Z" level=info msg="Container to stop \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 22:23:23.096187 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3-shm.mount: Deactivated successfully.
Jul 14 22:23:23.097953 systemd[1]: cri-containerd-43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3.scope: Deactivated successfully.
Jul 14 22:23:23.098234 systemd[1]: cri-containerd-43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3.scope: Consumed 7.063s CPU time.
Jul 14 22:23:23.108102 systemd[1]: cri-containerd-e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3.scope: Deactivated successfully.
Jul 14 22:23:23.118906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3-rootfs.mount: Deactivated successfully.
Jul 14 22:23:23.128596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3-rootfs.mount: Deactivated successfully.
Jul 14 22:23:23.129187 containerd[1460]: time="2025-07-14T22:23:23.129132764Z" level=info msg="shim disconnected" id=43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3 namespace=k8s.io
Jul 14 22:23:23.129361 containerd[1460]: time="2025-07-14T22:23:23.129324687Z" level=warning msg="cleaning up after shim disconnected" id=43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3 namespace=k8s.io
Jul 14 22:23:23.129361 containerd[1460]: time="2025-07-14T22:23:23.129339786Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:23:23.134058 containerd[1460]: time="2025-07-14T22:23:23.133974033Z" level=info msg="shim disconnected" id=e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3 namespace=k8s.io
Jul 14 22:23:23.134058 containerd[1460]: time="2025-07-14T22:23:23.134037864Z" level=warning msg="cleaning up after shim disconnected" id=e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3 namespace=k8s.io
Jul 14 22:23:23.134058 containerd[1460]: time="2025-07-14T22:23:23.134050347Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:23:23.145861 containerd[1460]: time="2025-07-14T22:23:23.145819819Z" level=info msg="StopContainer for \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\" returns successfully"
Jul 14 22:23:23.146466 containerd[1460]: time="2025-07-14T22:23:23.146410986Z" level=info msg="StopPodSandbox for \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\""
Jul 14 22:23:23.146466 containerd[1460]: time="2025-07-14T22:23:23.146459889Z" level=info msg="Container to stop \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 22:23:23.146548 containerd[1460]: time="2025-07-14T22:23:23.146478774Z" level=info msg="Container to stop \"058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 22:23:23.146548 containerd[1460]: time="2025-07-14T22:23:23.146504984Z" level=info msg="Container to stop \"cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 22:23:23.146548 containerd[1460]: time="2025-07-14T22:23:23.146517127Z" level=info msg="Container to stop \"566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 22:23:23.146548 containerd[1460]: time="2025-07-14T22:23:23.146529801Z" level=info msg="Container to stop \"6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 22:23:23.152333 systemd[1]: cri-containerd-9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710.scope: Deactivated successfully.
Jul 14 22:23:23.158172 containerd[1460]: time="2025-07-14T22:23:23.158118471Z" level=info msg="TearDown network for sandbox \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\" successfully"
Jul 14 22:23:23.158172 containerd[1460]: time="2025-07-14T22:23:23.158160169Z" level=info msg="StopPodSandbox for \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\" returns successfully"
Jul 14 22:23:23.176984 containerd[1460]: time="2025-07-14T22:23:23.176912837Z" level=info msg="shim disconnected" id=9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710 namespace=k8s.io
Jul 14 22:23:23.176984 containerd[1460]: time="2025-07-14T22:23:23.176969574Z" level=warning msg="cleaning up after shim disconnected" id=9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710 namespace=k8s.io
Jul 14 22:23:23.176984 containerd[1460]: time="2025-07-14T22:23:23.176977559Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:23:23.192945 containerd[1460]: time="2025-07-14T22:23:23.192890221Z" level=info msg="TearDown network for sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" successfully"
Jul 14 22:23:23.192945 containerd[1460]: time="2025-07-14T22:23:23.192924345Z" level=info msg="StopPodSandbox for \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" returns successfully"
Jul 14 22:23:23.280709 kubelet[2540]: I0714 22:23:23.280287    2540 scope.go:117] "RemoveContainer" containerID="538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca"
Jul 14 22:23:23.282181 containerd[1460]: time="2025-07-14T22:23:23.281722762Z" level=info msg="RemoveContainer for \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\""
Jul 14 22:23:23.289305 kubelet[2540]: I0714 22:23:23.289283    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-bpf-maps\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289378 kubelet[2540]: I0714 22:23:23.289321    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f5xk\" (UniqueName: \"kubernetes.io/projected/dc08ba04-3a42-4245-94c4-59ea976d1374-kube-api-access-7f5xk\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289378 kubelet[2540]: I0714 22:23:23.289339    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-hostproc\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289378 kubelet[2540]: I0714 22:23:23.289354    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc08ba04-3a42-4245-94c4-59ea976d1374-cilium-config-path\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289378 kubelet[2540]: I0714 22:23:23.289368    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-cilium-cgroup\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289478 kubelet[2540]: I0714 22:23:23.289384    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-host-proc-sys-kernel\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289478 kubelet[2540]: I0714 22:23:23.289398    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/956c140b-924e-4b61-8dca-576bc309c767-cilium-config-path\") pod \"956c140b-924e-4b61-8dca-576bc309c767\" (UID: \"956c140b-924e-4b61-8dca-576bc309c767\") "
Jul 14 22:23:23.289478 kubelet[2540]: I0714 22:23:23.289413    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-cni-path\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289478 kubelet[2540]: I0714 22:23:23.289426    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-host-proc-sys-net\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289478 kubelet[2540]: I0714 22:23:23.289439    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-lib-modules\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289478 kubelet[2540]: I0714 22:23:23.289453    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z57fp\" (UniqueName: \"kubernetes.io/projected/956c140b-924e-4b61-8dca-576bc309c767-kube-api-access-z57fp\") pod \"956c140b-924e-4b61-8dca-576bc309c767\" (UID: \"956c140b-924e-4b61-8dca-576bc309c767\") "
Jul 14 22:23:23.289668 kubelet[2540]: I0714 22:23:23.289467    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-xtables-lock\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289668 kubelet[2540]: I0714 22:23:23.289485    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc08ba04-3a42-4245-94c4-59ea976d1374-clustermesh-secrets\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289668 kubelet[2540]: I0714 22:23:23.289502    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-cilium-run\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289668 kubelet[2540]: I0714 22:23:23.289519    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-etc-cni-netd\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.289668 kubelet[2540]: I0714 22:23:23.289536    2540 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc08ba04-3a42-4245-94c4-59ea976d1374-hubble-tls\") pod \"dc08ba04-3a42-4245-94c4-59ea976d1374\" (UID: \"dc08ba04-3a42-4245-94c4-59ea976d1374\") "
Jul 14 22:23:23.290161 kubelet[2540]: I0714 22:23:23.289815    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 22:23:23.290161 kubelet[2540]: I0714 22:23:23.289871    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-hostproc" (OuterVolumeSpecName: "hostproc") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 22:23:23.290161 kubelet[2540]: I0714 22:23:23.289895    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 22:23:23.290326 kubelet[2540]: I0714 22:23:23.290310    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-cni-path" (OuterVolumeSpecName: "cni-path") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 22:23:23.291127 containerd[1460]: time="2025-07-14T22:23:23.291088731Z" level=info msg="RemoveContainer for \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\" returns successfully"
Jul 14 22:23:23.292852 kubelet[2540]: I0714 22:23:23.292798    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 22:23:23.293035 kubelet[2540]: I0714 22:23:23.292887    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc08ba04-3a42-4245-94c4-59ea976d1374-kube-api-access-7f5xk" (OuterVolumeSpecName: "kube-api-access-7f5xk") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "kube-api-access-7f5xk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 14 22:23:23.293035 kubelet[2540]: I0714 22:23:23.293012    2540 scope.go:117] "RemoveContainer" containerID="538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca"
Jul 14 22:23:23.293128 kubelet[2540]: I0714 22:23:23.293105    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 22:23:23.293179 kubelet[2540]: I0714 22:23:23.293132    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 22:23:23.293316 kubelet[2540]: I0714 22:23:23.293290    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 22:23:23.295323 kubelet[2540]: I0714 22:23:23.293441    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc08ba04-3a42-4245-94c4-59ea976d1374-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 14 22:23:23.295323 kubelet[2540]: I0714 22:23:23.293487    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 22:23:23.295323 kubelet[2540]: I0714 22:23:23.295216    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/956c140b-924e-4b61-8dca-576bc309c767-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "956c140b-924e-4b61-8dca-576bc309c767" (UID: "956c140b-924e-4b61-8dca-576bc309c767"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 14 22:23:23.295323 kubelet[2540]: I0714 22:23:23.295286    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 22:23:23.295737 kubelet[2540]: I0714 22:23:23.295698    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc08ba04-3a42-4245-94c4-59ea976d1374-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 14 22:23:23.296282 kubelet[2540]: I0714 22:23:23.296227    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956c140b-924e-4b61-8dca-576bc309c767-kube-api-access-z57fp" (OuterVolumeSpecName: "kube-api-access-z57fp") pod "956c140b-924e-4b61-8dca-576bc309c767" (UID: "956c140b-924e-4b61-8dca-576bc309c767"). InnerVolumeSpecName "kube-api-access-z57fp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 14 22:23:23.296888 kubelet[2540]: I0714 22:23:23.296869    2540 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc08ba04-3a42-4245-94c4-59ea976d1374-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dc08ba04-3a42-4245-94c4-59ea976d1374" (UID: "dc08ba04-3a42-4245-94c4-59ea976d1374"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 14 22:23:23.297345 containerd[1460]: time="2025-07-14T22:23:23.297276555Z" level=error msg="ContainerStatus for \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\": not found"
Jul 14 22:23:23.297511 kubelet[2540]: E0714 22:23:23.297485    2540 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\": not found" containerID="538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca"
Jul 14 22:23:23.297593 kubelet[2540]: I0714 22:23:23.297533    2540 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca"} err="failed to get container status \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\": rpc error: code = NotFound desc = an error occurred when try to find container \"538bf207dc88341e66dad842fbcfdb26efad1eb23e9e09449e41834b0d292eca\": not found"
Jul 14 22:23:23.297625 kubelet[2540]: I0714 22:23:23.297596    2540 scope.go:117] "RemoveContainer" containerID="43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3"
Jul 14 22:23:23.298692 containerd[1460]: time="2025-07-14T22:23:23.298668394Z" level=info msg="RemoveContainer for \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\""
Jul 14 22:23:23.302204 containerd[1460]: time="2025-07-14T22:23:23.302171565Z" level=info msg="RemoveContainer for \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\" returns successfully"
Jul 14 22:23:23.302353 kubelet[2540]: I0714 22:23:23.302331    2540 scope.go:117] "RemoveContainer" containerID="566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777"
Jul 14 22:23:23.303373 containerd[1460]: time="2025-07-14T22:23:23.303338590Z" level=info msg="RemoveContainer for \"566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777\""
Jul 14 22:23:23.306604 containerd[1460]: time="2025-07-14T22:23:23.306580908Z" level=info msg="RemoveContainer for \"566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777\" returns successfully"
Jul 14 22:23:23.306741 kubelet[2540]: I0714 22:23:23.306707    2540 scope.go:117] "RemoveContainer" containerID="cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61"
Jul 14 22:23:23.307519 containerd[1460]: time="2025-07-14T22:23:23.307494063Z" level=info msg="RemoveContainer for \"cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61\""
Jul 14 22:23:23.310482 containerd[1460]: time="2025-07-14T22:23:23.310443988Z" level=info msg="RemoveContainer for \"cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61\" returns successfully"
Jul 14 22:23:23.310598 kubelet[2540]: I0714 22:23:23.310576    2540 scope.go:117] "RemoveContainer" containerID="058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0"
Jul 14 22:23:23.311498 containerd[1460]: time="2025-07-14T22:23:23.311466770Z" level=info msg="RemoveContainer for \"058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0\""
Jul 14 22:23:23.314520 containerd[1460]: time="2025-07-14T22:23:23.314484864Z" level=info msg="RemoveContainer for \"058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0\" returns successfully"
Jul 14 22:23:23.314663 kubelet[2540]: I0714 22:23:23.314640    2540 scope.go:117] "RemoveContainer" containerID="6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32"
Jul 14 22:23:23.315513 containerd[1460]: time="2025-07-14T22:23:23.315475777Z" level=info msg="RemoveContainer for \"6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32\""
Jul 14 22:23:23.318352 containerd[1460]: time="2025-07-14T22:23:23.318329079Z" level=info msg="RemoveContainer for \"6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32\" returns successfully"
Jul 14 22:23:23.318481 kubelet[2540]: I0714 22:23:23.318462    2540 scope.go:117] "RemoveContainer" containerID="43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3"
Jul 14 22:23:23.320408 containerd[1460]: time="2025-07-14T22:23:23.320365838Z" level=error msg="ContainerStatus for \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\": not found"
Jul 14 22:23:23.320520 kubelet[2540]: E0714 22:23:23.320494    2540 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\": not found" containerID="43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3"
Jul 14 22:23:23.320578 kubelet[2540]: I0714 22:23:23.320521    2540 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3"} err="failed to get container status \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\": rpc error: code = NotFound desc = an error occurred when try to find container \"43bd86ce3e0daeeca95940c651ba47c7eea5c1184196afc96209d1653f174eb3\": not found"
Jul 14 22:23:23.320578 kubelet[2540]: I0714 22:23:23.320544    2540 scope.go:117] "RemoveContainer" containerID="566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777"
Jul 14 22:23:23.320716 containerd[1460]: time="2025-07-14T22:23:23.320684720Z" level=error msg="ContainerStatus for \"566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777\" failed" error="rpc error: code = NotFound desc = an
error occurred when try to find container \"566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777\": not found" Jul 14 22:23:23.320806 kubelet[2540]: E0714 22:23:23.320786 2540 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777\": not found" containerID="566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777" Jul 14 22:23:23.320868 kubelet[2540]: I0714 22:23:23.320805 2540 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777"} err="failed to get container status \"566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777\": rpc error: code = NotFound desc = an error occurred when try to find container \"566f0ef7635c29a335b8e4a01ecbbd91b46a66d25e3d29e32633bc40b9498777\": not found" Jul 14 22:23:23.320868 kubelet[2540]: I0714 22:23:23.320816 2540 scope.go:117] "RemoveContainer" containerID="cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61" Jul 14 22:23:23.320996 containerd[1460]: time="2025-07-14T22:23:23.320966783Z" level=error msg="ContainerStatus for \"cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61\": not found" Jul 14 22:23:23.321141 kubelet[2540]: E0714 22:23:23.321095 2540 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61\": not found" containerID="cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61" Jul 14 22:23:23.321141 kubelet[2540]: I0714 22:23:23.321122 2540 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61"} err="failed to get container status \"cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61\": rpc error: code = NotFound desc = an error occurred when try to find container \"cdc6354e2c6b065246e48fd8ea3c15c727f43c0cf7056d465c5ca434e0893c61\": not found" Jul 14 22:23:23.321141 kubelet[2540]: I0714 22:23:23.321139 2540 scope.go:117] "RemoveContainer" containerID="058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0" Jul 14 22:23:23.321345 containerd[1460]: time="2025-07-14T22:23:23.321300224Z" level=error msg="ContainerStatus for \"058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0\": not found" Jul 14 22:23:23.321435 kubelet[2540]: E0714 22:23:23.321412 2540 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0\": not found" containerID="058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0" Jul 14 22:23:23.321435 kubelet[2540]: I0714 22:23:23.321431 2540 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0"} err="failed to get container status \"058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"058e2c08b8b57dd0770b5c1d1bd9829894c93d8267fdfd988e3e4913feabd3d0\": not found" Jul 14 22:23:23.321494 kubelet[2540]: I0714 22:23:23.321442 2540 scope.go:117] "RemoveContainer" containerID="6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32" Jul 14 
22:23:23.321614 containerd[1460]: time="2025-07-14T22:23:23.321578940Z" level=error msg="ContainerStatus for \"6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32\": not found" Jul 14 22:23:23.321693 kubelet[2540]: E0714 22:23:23.321673 2540 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32\": not found" containerID="6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32" Jul 14 22:23:23.321725 kubelet[2540]: I0714 22:23:23.321692 2540 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32"} err="failed to get container status \"6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32\": rpc error: code = NotFound desc = an error occurred when try to find container \"6954ffd97c16699cd06a2043ae29eea056ae08c891c8228d80400740a2c17a32\": not found" Jul 14 22:23:23.389974 kubelet[2540]: I0714 22:23:23.389930 2540 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc08ba04-3a42-4245-94c4-59ea976d1374-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.389974 kubelet[2540]: I0714 22:23:23.389961 2540 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.389974 kubelet[2540]: I0714 22:23:23.389970 2540 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.389974 kubelet[2540]: I0714 22:23:23.389979 2540 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/956c140b-924e-4b61-8dca-576bc309c767-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.389974 kubelet[2540]: I0714 22:23:23.389988 2540 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.389974 kubelet[2540]: I0714 22:23:23.389997 2540 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.390251 kubelet[2540]: I0714 22:23:23.390005 2540 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.390251 kubelet[2540]: I0714 22:23:23.390023 2540 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z57fp\" (UniqueName: \"kubernetes.io/projected/956c140b-924e-4b61-8dca-576bc309c767-kube-api-access-z57fp\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.390251 kubelet[2540]: I0714 22:23:23.390032 2540 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.390251 kubelet[2540]: I0714 22:23:23.390040 2540 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc08ba04-3a42-4245-94c4-59ea976d1374-clustermesh-secrets\") on 
node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.390251 kubelet[2540]: I0714 22:23:23.390047 2540 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.390251 kubelet[2540]: I0714 22:23:23.390055 2540 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.390251 kubelet[2540]: I0714 22:23:23.390062 2540 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc08ba04-3a42-4245-94c4-59ea976d1374-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.390251 kubelet[2540]: I0714 22:23:23.390070 2540 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.390426 kubelet[2540]: I0714 22:23:23.390077 2540 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7f5xk\" (UniqueName: \"kubernetes.io/projected/dc08ba04-3a42-4245-94c4-59ea976d1374-kube-api-access-7f5xk\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.390426 kubelet[2540]: I0714 22:23:23.390085 2540 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc08ba04-3a42-4245-94c4-59ea976d1374-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 14 22:23:23.586868 systemd[1]: Removed slice kubepods-besteffort-pod956c140b_924e_4b61_8dca_576bc309c767.slice - libcontainer container kubepods-besteffort-pod956c140b_924e_4b61_8dca_576bc309c767.slice. 
Jul 14 22:23:23.590838 systemd[1]: Removed slice kubepods-burstable-poddc08ba04_3a42_4245_94c4_59ea976d1374.slice - libcontainer container kubepods-burstable-poddc08ba04_3a42_4245_94c4_59ea976d1374.slice. Jul 14 22:23:23.590924 systemd[1]: kubepods-burstable-poddc08ba04_3a42_4245_94c4_59ea976d1374.slice: Consumed 7.169s CPU time. Jul 14 22:23:23.850979 kubelet[2540]: I0714 22:23:23.850856 2540 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956c140b-924e-4b61-8dca-576bc309c767" path="/var/lib/kubelet/pods/956c140b-924e-4b61-8dca-576bc309c767/volumes" Jul 14 22:23:23.851462 kubelet[2540]: I0714 22:23:23.851440 2540 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc08ba04-3a42-4245-94c4-59ea976d1374" path="/var/lib/kubelet/pods/dc08ba04-3a42-4245-94c4-59ea976d1374/volumes" Jul 14 22:23:24.044442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710-rootfs.mount: Deactivated successfully. Jul 14 22:23:24.044577 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710-shm.mount: Deactivated successfully. Jul 14 22:23:24.044678 systemd[1]: var-lib-kubelet-pods-956c140b\x2d924e\x2d4b61\x2d8dca\x2d576bc309c767-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz57fp.mount: Deactivated successfully. Jul 14 22:23:24.044767 systemd[1]: var-lib-kubelet-pods-dc08ba04\x2d3a42\x2d4245\x2d94c4\x2d59ea976d1374-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7f5xk.mount: Deactivated successfully. Jul 14 22:23:24.044853 systemd[1]: var-lib-kubelet-pods-dc08ba04\x2d3a42\x2d4245\x2d94c4\x2d59ea976d1374-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 22:23:24.044948 systemd[1]: var-lib-kubelet-pods-dc08ba04\x2d3a42\x2d4245\x2d94c4\x2d59ea976d1374-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 14 22:23:24.989336 sshd[4189]: pam_unix(sshd:session): session closed for user core Jul 14 22:23:24.997495 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:58434.service: Deactivated successfully. Jul 14 22:23:25.000220 systemd[1]: session-24.scope: Deactivated successfully. Jul 14 22:23:25.002237 systemd-logind[1439]: Session 24 logged out. Waiting for processes to exit. Jul 14 22:23:25.010857 systemd[1]: Started sshd@24-10.0.0.139:22-10.0.0.1:58436.service - OpenSSH per-connection server daemon (10.0.0.1:58436). Jul 14 22:23:25.011917 systemd-logind[1439]: Removed session 24. Jul 14 22:23:25.050739 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 58436 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:23:25.052426 sshd[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:23:25.057023 systemd-logind[1439]: New session 25 of user core. Jul 14 22:23:25.065677 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 14 22:23:25.546591 sshd[4348]: pam_unix(sshd:session): session closed for user core Jul 14 22:23:25.559069 systemd[1]: sshd@24-10.0.0.139:22-10.0.0.1:58436.service: Deactivated successfully. Jul 14 22:23:25.561151 systemd[1]: session-25.scope: Deactivated successfully. Jul 14 22:23:25.564389 systemd-logind[1439]: Session 25 logged out. Waiting for processes to exit. Jul 14 22:23:25.576586 systemd[1]: Started sshd@25-10.0.0.139:22-10.0.0.1:58448.service - OpenSSH per-connection server daemon (10.0.0.1:58448). Jul 14 22:23:25.578197 systemd-logind[1439]: Removed session 25. Jul 14 22:23:25.584562 systemd[1]: Created slice kubepods-burstable-poddb5bf242_edc6_44ed_8cee_839e5c7c877f.slice - libcontainer container kubepods-burstable-poddb5bf242_edc6_44ed_8cee_839e5c7c877f.slice. 
Jul 14 22:23:25.601385 kubelet[2540]: I0714 22:23:25.601348 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db5bf242-edc6-44ed-8cee-839e5c7c877f-cni-path\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601385 kubelet[2540]: I0714 22:23:25.601386 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkf54\" (UniqueName: \"kubernetes.io/projected/db5bf242-edc6-44ed-8cee-839e5c7c877f-kube-api-access-hkf54\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601785 kubelet[2540]: I0714 22:23:25.601406 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db5bf242-edc6-44ed-8cee-839e5c7c877f-clustermesh-secrets\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601785 kubelet[2540]: I0714 22:23:25.601425 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db5bf242-edc6-44ed-8cee-839e5c7c877f-cilium-run\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601785 kubelet[2540]: I0714 22:23:25.601478 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db5bf242-edc6-44ed-8cee-839e5c7c877f-bpf-maps\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601785 kubelet[2540]: I0714 22:23:25.601508 2540 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db5bf242-edc6-44ed-8cee-839e5c7c877f-hostproc\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601785 kubelet[2540]: I0714 22:23:25.601526 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db5bf242-edc6-44ed-8cee-839e5c7c877f-hubble-tls\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601785 kubelet[2540]: I0714 22:23:25.601543 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db5bf242-edc6-44ed-8cee-839e5c7c877f-cilium-cgroup\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601916 kubelet[2540]: I0714 22:23:25.601583 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db5bf242-edc6-44ed-8cee-839e5c7c877f-lib-modules\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601916 kubelet[2540]: I0714 22:23:25.601607 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db5bf242-edc6-44ed-8cee-839e5c7c877f-host-proc-sys-net\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601916 kubelet[2540]: I0714 22:23:25.601627 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/db5bf242-edc6-44ed-8cee-839e5c7c877f-host-proc-sys-kernel\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601916 kubelet[2540]: I0714 22:23:25.601661 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db5bf242-edc6-44ed-8cee-839e5c7c877f-etc-cni-netd\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601916 kubelet[2540]: I0714 22:23:25.601677 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db5bf242-edc6-44ed-8cee-839e5c7c877f-xtables-lock\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.601916 kubelet[2540]: I0714 22:23:25.601698 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db5bf242-edc6-44ed-8cee-839e5c7c877f-cilium-config-path\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.602068 kubelet[2540]: I0714 22:23:25.601722 2540 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/db5bf242-edc6-44ed-8cee-839e5c7c877f-cilium-ipsec-secrets\") pod \"cilium-zbhg8\" (UID: \"db5bf242-edc6-44ed-8cee-839e5c7c877f\") " pod="kube-system/cilium-zbhg8" Jul 14 22:23:25.609180 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 58448 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:23:25.610608 sshd[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:23:25.614617 
systemd-logind[1439]: New session 26 of user core. Jul 14 22:23:25.625723 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 14 22:23:25.676057 sshd[4362]: pam_unix(sshd:session): session closed for user core Jul 14 22:23:25.686722 systemd[1]: sshd@25-10.0.0.139:22-10.0.0.1:58448.service: Deactivated successfully. Jul 14 22:23:25.688393 systemd[1]: session-26.scope: Deactivated successfully. Jul 14 22:23:25.690054 systemd-logind[1439]: Session 26 logged out. Waiting for processes to exit. Jul 14 22:23:25.691318 systemd[1]: Started sshd@26-10.0.0.139:22-10.0.0.1:58464.service - OpenSSH per-connection server daemon (10.0.0.1:58464). Jul 14 22:23:25.692222 systemd-logind[1439]: Removed session 26. Jul 14 22:23:25.732486 sshd[4370]: Accepted publickey for core from 10.0.0.1 port 58464 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:23:25.734129 sshd[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:23:25.737731 systemd-logind[1439]: New session 27 of user core. Jul 14 22:23:25.754857 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 14 22:23:25.888359 kubelet[2540]: E0714 22:23:25.888218 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:23:25.889431 containerd[1460]: time="2025-07-14T22:23:25.889285184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zbhg8,Uid:db5bf242-edc6-44ed-8cee-839e5c7c877f,Namespace:kube-system,Attempt:0,}" Jul 14 22:23:25.908759 containerd[1460]: time="2025-07-14T22:23:25.908679366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:23:25.908759 containerd[1460]: time="2025-07-14T22:23:25.908734290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:23:25.908759 containerd[1460]: time="2025-07-14T22:23:25.908748797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:23:25.908883 containerd[1460]: time="2025-07-14T22:23:25.908832567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:23:25.931699 systemd[1]: Started cri-containerd-94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957.scope - libcontainer container 94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957. Jul 14 22:23:25.953651 containerd[1460]: time="2025-07-14T22:23:25.953614025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zbhg8,Uid:db5bf242-edc6-44ed-8cee-839e5c7c877f,Namespace:kube-system,Attempt:0,} returns sandbox id \"94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957\"" Jul 14 22:23:25.954348 kubelet[2540]: E0714 22:23:25.954323 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:23:25.960577 containerd[1460]: time="2025-07-14T22:23:25.960534369Z" level=info msg="CreateContainer within sandbox \"94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 22:23:25.971790 containerd[1460]: time="2025-07-14T22:23:25.971746112Z" level=info msg="CreateContainer within sandbox \"94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3941f15d984c36b8caae56d1bcd37d51b77b7e1cce4325a0d419a83763e718f\"" Jul 14 22:23:25.972457 containerd[1460]: time="2025-07-14T22:23:25.972210610Z" level=info msg="StartContainer for 
\"a3941f15d984c36b8caae56d1bcd37d51b77b7e1cce4325a0d419a83763e718f\"" Jul 14 22:23:25.998708 systemd[1]: Started cri-containerd-a3941f15d984c36b8caae56d1bcd37d51b77b7e1cce4325a0d419a83763e718f.scope - libcontainer container a3941f15d984c36b8caae56d1bcd37d51b77b7e1cce4325a0d419a83763e718f. Jul 14 22:23:26.024743 containerd[1460]: time="2025-07-14T22:23:26.024696129Z" level=info msg="StartContainer for \"a3941f15d984c36b8caae56d1bcd37d51b77b7e1cce4325a0d419a83763e718f\" returns successfully" Jul 14 22:23:26.033770 systemd[1]: cri-containerd-a3941f15d984c36b8caae56d1bcd37d51b77b7e1cce4325a0d419a83763e718f.scope: Deactivated successfully. Jul 14 22:23:26.067534 containerd[1460]: time="2025-07-14T22:23:26.067454607Z" level=info msg="shim disconnected" id=a3941f15d984c36b8caae56d1bcd37d51b77b7e1cce4325a0d419a83763e718f namespace=k8s.io Jul 14 22:23:26.067534 containerd[1460]: time="2025-07-14T22:23:26.067511375Z" level=warning msg="cleaning up after shim disconnected" id=a3941f15d984c36b8caae56d1bcd37d51b77b7e1cce4325a0d419a83763e718f namespace=k8s.io Jul 14 22:23:26.067534 containerd[1460]: time="2025-07-14T22:23:26.067520492Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:23:26.293289 kubelet[2540]: E0714 22:23:26.293251 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:23:26.299090 containerd[1460]: time="2025-07-14T22:23:26.299039140Z" level=info msg="CreateContainer within sandbox \"94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 22:23:26.315219 containerd[1460]: time="2025-07-14T22:23:26.315178800Z" level=info msg="CreateContainer within sandbox \"94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"02728c83ceddaaffe5aaf2330a101081996fc46db5fa8075632696e561750939\""
Jul 14 22:23:26.315737 containerd[1460]: time="2025-07-14T22:23:26.315715845Z" level=info msg="StartContainer for \"02728c83ceddaaffe5aaf2330a101081996fc46db5fa8075632696e561750939\""
Jul 14 22:23:26.344674 systemd[1]: Started cri-containerd-02728c83ceddaaffe5aaf2330a101081996fc46db5fa8075632696e561750939.scope - libcontainer container 02728c83ceddaaffe5aaf2330a101081996fc46db5fa8075632696e561750939.
Jul 14 22:23:26.368999 containerd[1460]: time="2025-07-14T22:23:26.368923523Z" level=info msg="StartContainer for \"02728c83ceddaaffe5aaf2330a101081996fc46db5fa8075632696e561750939\" returns successfully"
Jul 14 22:23:26.376079 systemd[1]: cri-containerd-02728c83ceddaaffe5aaf2330a101081996fc46db5fa8075632696e561750939.scope: Deactivated successfully.
Jul 14 22:23:26.399875 containerd[1460]: time="2025-07-14T22:23:26.399814741Z" level=info msg="shim disconnected" id=02728c83ceddaaffe5aaf2330a101081996fc46db5fa8075632696e561750939 namespace=k8s.io
Jul 14 22:23:26.399875 containerd[1460]: time="2025-07-14T22:23:26.399869545Z" level=warning msg="cleaning up after shim disconnected" id=02728c83ceddaaffe5aaf2330a101081996fc46db5fa8075632696e561750939 namespace=k8s.io
Jul 14 22:23:26.399875 containerd[1460]: time="2025-07-14T22:23:26.399877861Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:23:26.906228 kubelet[2540]: E0714 22:23:26.906162 2540 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 14 22:23:27.296371 kubelet[2540]: E0714 22:23:27.296323 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:27.301031 containerd[1460]: time="2025-07-14T22:23:27.300984488Z" level=info msg="CreateContainer within sandbox \"94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 22:23:27.325940 containerd[1460]: time="2025-07-14T22:23:27.325889104Z" level=info msg="CreateContainer within sandbox \"94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3fd9808bb21844e6db7d778bdca47deca202f7ec9d36912a36f8d77aa4f2db26\""
Jul 14 22:23:27.326464 containerd[1460]: time="2025-07-14T22:23:27.326423423Z" level=info msg="StartContainer for \"3fd9808bb21844e6db7d778bdca47deca202f7ec9d36912a36f8d77aa4f2db26\""
Jul 14 22:23:27.360686 systemd[1]: Started cri-containerd-3fd9808bb21844e6db7d778bdca47deca202f7ec9d36912a36f8d77aa4f2db26.scope - libcontainer container 3fd9808bb21844e6db7d778bdca47deca202f7ec9d36912a36f8d77aa4f2db26.
Jul 14 22:23:27.387896 containerd[1460]: time="2025-07-14T22:23:27.387843404Z" level=info msg="StartContainer for \"3fd9808bb21844e6db7d778bdca47deca202f7ec9d36912a36f8d77aa4f2db26\" returns successfully"
Jul 14 22:23:27.388515 systemd[1]: cri-containerd-3fd9808bb21844e6db7d778bdca47deca202f7ec9d36912a36f8d77aa4f2db26.scope: Deactivated successfully.
Jul 14 22:23:27.413069 containerd[1460]: time="2025-07-14T22:23:27.413005906Z" level=info msg="shim disconnected" id=3fd9808bb21844e6db7d778bdca47deca202f7ec9d36912a36f8d77aa4f2db26 namespace=k8s.io
Jul 14 22:23:27.413069 containerd[1460]: time="2025-07-14T22:23:27.413059768Z" level=warning msg="cleaning up after shim disconnected" id=3fd9808bb21844e6db7d778bdca47deca202f7ec9d36912a36f8d77aa4f2db26 namespace=k8s.io
Jul 14 22:23:27.413069 containerd[1460]: time="2025-07-14T22:23:27.413068685Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:23:27.707330 systemd[1]: run-containerd-runc-k8s.io-3fd9808bb21844e6db7d778bdca47deca202f7ec9d36912a36f8d77aa4f2db26-runc.Xhei2V.mount: Deactivated successfully.
Jul 14 22:23:27.707448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fd9808bb21844e6db7d778bdca47deca202f7ec9d36912a36f8d77aa4f2db26-rootfs.mount: Deactivated successfully.
Jul 14 22:23:28.299759 kubelet[2540]: E0714 22:23:28.299728 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:28.304812 containerd[1460]: time="2025-07-14T22:23:28.304766428Z" level=info msg="CreateContainer within sandbox \"94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 14 22:23:28.321413 containerd[1460]: time="2025-07-14T22:23:28.321370689Z" level=info msg="CreateContainer within sandbox \"94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a52220fb43fbfd791b8dbc7f9e69ac145f23d7511964996af8784d73b7b4b36c\""
Jul 14 22:23:28.321883 containerd[1460]: time="2025-07-14T22:23:28.321858140Z" level=info msg="StartContainer for \"a52220fb43fbfd791b8dbc7f9e69ac145f23d7511964996af8784d73b7b4b36c\""
Jul 14 22:23:28.349698 systemd[1]: Started cri-containerd-a52220fb43fbfd791b8dbc7f9e69ac145f23d7511964996af8784d73b7b4b36c.scope - libcontainer container a52220fb43fbfd791b8dbc7f9e69ac145f23d7511964996af8784d73b7b4b36c.
Jul 14 22:23:28.371653 systemd[1]: cri-containerd-a52220fb43fbfd791b8dbc7f9e69ac145f23d7511964996af8784d73b7b4b36c.scope: Deactivated successfully.
Jul 14 22:23:28.374018 containerd[1460]: time="2025-07-14T22:23:28.373982522Z" level=info msg="StartContainer for \"a52220fb43fbfd791b8dbc7f9e69ac145f23d7511964996af8784d73b7b4b36c\" returns successfully"
Jul 14 22:23:28.395985 containerd[1460]: time="2025-07-14T22:23:28.395906393Z" level=info msg="shim disconnected" id=a52220fb43fbfd791b8dbc7f9e69ac145f23d7511964996af8784d73b7b4b36c namespace=k8s.io
Jul 14 22:23:28.395985 containerd[1460]: time="2025-07-14T22:23:28.395983388Z" level=warning msg="cleaning up after shim disconnected" id=a52220fb43fbfd791b8dbc7f9e69ac145f23d7511964996af8784d73b7b4b36c namespace=k8s.io
Jul 14 22:23:28.395985 containerd[1460]: time="2025-07-14T22:23:28.395992836Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:23:28.707194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a52220fb43fbfd791b8dbc7f9e69ac145f23d7511964996af8784d73b7b4b36c-rootfs.mount: Deactivated successfully.
Jul 14 22:23:29.303772 kubelet[2540]: E0714 22:23:29.303736 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:29.308194 containerd[1460]: time="2025-07-14T22:23:29.308067469Z" level=info msg="CreateContainer within sandbox \"94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 14 22:23:29.326820 containerd[1460]: time="2025-07-14T22:23:29.326759792Z" level=info msg="CreateContainer within sandbox \"94864dbb7dff9c15c005fa20e5a21285e8e05e215dbb4b8f7c51b54322618957\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1b7f7f3a6c8cbfb71ae313764b60d6aa03adf16fb6b1957ed17dfeb0e885efff\""
Jul 14 22:23:29.327307 containerd[1460]: time="2025-07-14T22:23:29.327258214Z" level=info msg="StartContainer for \"1b7f7f3a6c8cbfb71ae313764b60d6aa03adf16fb6b1957ed17dfeb0e885efff\""
Jul 14 22:23:29.354765 systemd[1]: Started cri-containerd-1b7f7f3a6c8cbfb71ae313764b60d6aa03adf16fb6b1957ed17dfeb0e885efff.scope - libcontainer container 1b7f7f3a6c8cbfb71ae313764b60d6aa03adf16fb6b1957ed17dfeb0e885efff.
Jul 14 22:23:29.387240 containerd[1460]: time="2025-07-14T22:23:29.387181838Z" level=info msg="StartContainer for \"1b7f7f3a6c8cbfb71ae313764b60d6aa03adf16fb6b1957ed17dfeb0e885efff\" returns successfully"
Jul 14 22:23:29.794585 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 14 22:23:30.308269 kubelet[2540]: E0714 22:23:30.308242 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:30.321934 kubelet[2540]: I0714 22:23:30.321867 2540 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zbhg8" podStartSLOduration=5.321849515 podStartE2EDuration="5.321849515s" podCreationTimestamp="2025-07-14 22:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:23:30.320968922 +0000 UTC m=+108.565017310" watchObservedRunningTime="2025-07-14 22:23:30.321849515 +0000 UTC m=+108.565897903"
Jul 14 22:23:31.849110 kubelet[2540]: E0714 22:23:31.849054 2540 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jxmzs" podUID="098c5ec3-7362-4d64-8131-bd59e7a51184"
Jul 14 22:23:31.889371 kubelet[2540]: E0714 22:23:31.889273 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:33.848799 kubelet[2540]: E0714 22:23:33.848745 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:38.374529 systemd[1]: run-containerd-runc-k8s.io-1b7f7f3a6c8cbfb71ae313764b60d6aa03adf16fb6b1957ed17dfeb0e885efff-runc.YbrDIX.mount: Deactivated successfully.
Jul 14 22:23:41.842896 containerd[1460]: time="2025-07-14T22:23:41.842850192Z" level=info msg="StopPodSandbox for \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\""
Jul 14 22:23:41.843353 containerd[1460]: time="2025-07-14T22:23:41.842949049Z" level=info msg="TearDown network for sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" successfully"
Jul 14 22:23:41.843353 containerd[1460]: time="2025-07-14T22:23:41.842963285Z" level=info msg="StopPodSandbox for \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" returns successfully"
Jul 14 22:23:41.843353 containerd[1460]: time="2025-07-14T22:23:41.843334906Z" level=info msg="RemovePodSandbox for \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\""
Jul 14 22:23:41.843458 containerd[1460]: time="2025-07-14T22:23:41.843359813Z" level=info msg="Forcibly stopping sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\""
Jul 14 22:23:41.843458 containerd[1460]: time="2025-07-14T22:23:41.843412493Z" level=info msg="TearDown network for sandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" successfully"
Jul 14 22:23:41.847814 containerd[1460]: time="2025-07-14T22:23:41.847773452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 14 22:23:41.847814 containerd[1460]: time="2025-07-14T22:23:41.847821232Z" level=info msg="RemovePodSandbox \"9fec6717fae83010bbb4707002be07934e9c87cb91d18ab43a1b0b5c30e49710\" returns successfully"
Jul 14 22:23:41.848623 containerd[1460]: time="2025-07-14T22:23:41.848572199Z" level=info msg="StopPodSandbox for \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\""
Jul 14 22:23:41.848723 containerd[1460]: time="2025-07-14T22:23:41.848653512Z" level=info msg="TearDown network for sandbox \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\" successfully"
Jul 14 22:23:41.848723 containerd[1460]: time="2025-07-14T22:23:41.848666938Z" level=info msg="StopPodSandbox for \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\" returns successfully"
Jul 14 22:23:41.848996 containerd[1460]: time="2025-07-14T22:23:41.848958138Z" level=info msg="RemovePodSandbox for \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\""
Jul 14 22:23:41.848996 containerd[1460]: time="2025-07-14T22:23:41.848980199Z" level=info msg="Forcibly stopping sandbox \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\""
Jul 14 22:23:41.849072 containerd[1460]: time="2025-07-14T22:23:41.849032197Z" level=info msg="TearDown network for sandbox \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\" successfully"
Jul 14 22:23:41.853286 containerd[1460]: time="2025-07-14T22:23:41.853248473Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 14 22:23:41.853327 containerd[1460]: time="2025-07-14T22:23:41.853293377Z" level=info msg="RemovePodSandbox \"e2a5fd681146484851b78444e9e40cf1ee545db64210b103c6e87b4837d630b3\" returns successfully"
Jul 14 22:23:52.864923 systemd-networkd[1393]: lxc_health: Link UP
Jul 14 22:23:52.878454 systemd-networkd[1393]: lxc_health: Gained carrier
Jul 14 22:23:53.890618 kubelet[2540]: E0714 22:23:53.890399 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:54.108801 systemd-networkd[1393]: lxc_health: Gained IPv6LL
Jul 14 22:23:54.351504 kubelet[2540]: E0714 22:23:54.351453 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:55.353772 kubelet[2540]: E0714 22:23:55.353742 2540 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:57.275906 kubelet[2540]: E0714 22:23:57.275865 2540 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37538->127.0.0.1:45665: write tcp 127.0.0.1:37538->127.0.0.1:45665: write: broken pipe
Jul 14 22:23:59.366994 sshd[4370]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:59.371335 systemd[1]: sshd@26-10.0.0.139:22-10.0.0.1:58464.service: Deactivated successfully.
Jul 14 22:23:59.373514 systemd[1]: session-27.scope: Deactivated successfully.
Jul 14 22:23:59.374187 systemd-logind[1439]: Session 27 logged out. Waiting for processes to exit.
Jul 14 22:23:59.375072 systemd-logind[1439]: Removed session 27.