Jul 14 23:56:21.879153 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 22:12:05 -00 2025
Jul 14 23:56:21.879174 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b3329440486f6df07adec8acfff793e63e5f00f2c50d9ad5ef23b1b049ec0ca0
Jul 14 23:56:21.879186 kernel: BIOS-provided physical RAM map:
Jul 14 23:56:21.879192 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 14 23:56:21.879199 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 14 23:56:21.879205 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 14 23:56:21.879213 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 14 23:56:21.879219 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 14 23:56:21.879226 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 14 23:56:21.879261 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 14 23:56:21.879268 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 14 23:56:21.879274 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 14 23:56:21.879281 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 14 23:56:21.879288 kernel: NX (Execute Disable) protection: active
Jul 14 23:56:21.879296 kernel: APIC: Static calls initialized
Jul 14 23:56:21.879306 kernel: SMBIOS 2.8 present.
Jul 14 23:56:21.879314 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 14 23:56:21.879320 kernel: Hypervisor detected: KVM
Jul 14 23:56:21.879327 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 14 23:56:21.879334 kernel: kvm-clock: using sched offset of 2271430484 cycles
Jul 14 23:56:21.879341 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 14 23:56:21.879349 kernel: tsc: Detected 2794.748 MHz processor
Jul 14 23:56:21.879356 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 14 23:56:21.879364 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 14 23:56:21.879371 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 14 23:56:21.879380 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 14 23:56:21.879388 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 14 23:56:21.879395 kernel: Using GB pages for direct mapping
Jul 14 23:56:21.879402 kernel: ACPI: Early table checksum verification disabled
Jul 14 23:56:21.879409 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 14 23:56:21.879416 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 23:56:21.879423 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 23:56:21.879430 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 23:56:21.879437 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 14 23:56:21.879447 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 23:56:21.879454 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 23:56:21.879461 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 23:56:21.879468 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 23:56:21.879475 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 14 23:56:21.879482 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 14 23:56:21.879493 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 14 23:56:21.879502 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 14 23:56:21.879509 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 14 23:56:21.879517 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 14 23:56:21.879524 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 14 23:56:21.879531 kernel: No NUMA configuration found
Jul 14 23:56:21.879539 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 14 23:56:21.879546 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 14 23:56:21.879556 kernel: Zone ranges:
Jul 14 23:56:21.879563 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 23:56:21.879570 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 14 23:56:21.879578 kernel: Normal empty
Jul 14 23:56:21.879585 kernel: Movable zone start for each node
Jul 14 23:56:21.879592 kernel: Early memory node ranges
Jul 14 23:56:21.879599 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 14 23:56:21.879607 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 14 23:56:21.879614 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 14 23:56:21.879625 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 23:56:21.879634 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 14 23:56:21.879642 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 14 23:56:21.879651 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 14 23:56:21.879658 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 14 23:56:21.879666 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 14 23:56:21.879673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 14 23:56:21.879680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 14 23:56:21.879688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 14 23:56:21.879695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 14 23:56:21.879705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 14 23:56:21.879712 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 23:56:21.879719 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 14 23:56:21.879726 kernel: TSC deadline timer available
Jul 14 23:56:21.879734 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 14 23:56:21.879741 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 14 23:56:21.879748 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 14 23:56:21.879755 kernel: kvm-guest: setup PV sched yield
Jul 14 23:56:21.879763 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 14 23:56:21.879772 kernel: Booting paravirtualized kernel on KVM
Jul 14 23:56:21.879780 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 23:56:21.879787 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 14 23:56:21.879795 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 14 23:56:21.879802 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 14 23:56:21.879809 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 14 23:56:21.879816 kernel: kvm-guest: PV spinlocks enabled
Jul 14 23:56:21.879824 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 14 23:56:21.879832 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b3329440486f6df07adec8acfff793e63e5f00f2c50d9ad5ef23b1b049ec0ca0
Jul 14 23:56:21.879842 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 23:56:21.879850 kernel: random: crng init done
Jul 14 23:56:21.879857 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 23:56:21.879865 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 23:56:21.879872 kernel: Fallback order for Node 0: 0
Jul 14 23:56:21.879879 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 14 23:56:21.879887 kernel: Policy zone: DMA32
Jul 14 23:56:21.879894 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 23:56:21.879904 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43492K init, 1584K bss, 138948K reserved, 0K cma-reserved)
Jul 14 23:56:21.879912 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 23:56:21.879919 kernel: ftrace: allocating 37940 entries in 149 pages
Jul 14 23:56:21.879926 kernel: ftrace: allocated 149 pages with 4 groups
Jul 14 23:56:21.879934 kernel: Dynamic Preempt: voluntary
Jul 14 23:56:21.879941 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 23:56:21.879949 kernel: rcu: RCU event tracing is enabled.
Jul 14 23:56:21.879957 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 23:56:21.879964 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 23:56:21.879974 kernel: Rude variant of Tasks RCU enabled.
Jul 14 23:56:21.879981 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 23:56:21.879989 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 23:56:21.879996 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 23:56:21.880003 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 14 23:56:21.880011 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 23:56:21.880018 kernel: Console: colour VGA+ 80x25
Jul 14 23:56:21.880025 kernel: printk: console [ttyS0] enabled
Jul 14 23:56:21.880032 kernel: ACPI: Core revision 20230628
Jul 14 23:56:21.880040 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 14 23:56:21.880050 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 23:56:21.880057 kernel: x2apic enabled
Jul 14 23:56:21.880064 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 14 23:56:21.880072 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 14 23:56:21.880079 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 14 23:56:21.880087 kernel: kvm-guest: setup PV IPIs
Jul 14 23:56:21.880102 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 23:56:21.880112 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 14 23:56:21.880120 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 14 23:56:21.880127 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 14 23:56:21.880135 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 14 23:56:21.880145 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 14 23:56:21.880152 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 23:56:21.880160 kernel: Spectre V2 : Mitigation: Retpolines
Jul 14 23:56:21.880168 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 14 23:56:21.880175 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 14 23:56:21.880185 kernel: RETBleed: Mitigation: untrained return thunk
Jul 14 23:56:21.880193 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 14 23:56:21.880201 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 14 23:56:21.880209 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 14 23:56:21.880217 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 14 23:56:21.880225 kernel: x86/bugs: return thunk changed
Jul 14 23:56:21.880248 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 14 23:56:21.880257 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 14 23:56:21.880267 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 14 23:56:21.880275 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 14 23:56:21.880282 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 14 23:56:21.880290 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 14 23:56:21.880298 kernel: Freeing SMP alternatives memory: 32K
Jul 14 23:56:21.880305 kernel: pid_max: default: 32768 minimum: 301
Jul 14 23:56:21.880313 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 23:56:21.880321 kernel: landlock: Up and running.
Jul 14 23:56:21.880328 kernel: SELinux: Initializing.
Jul 14 23:56:21.880336 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 23:56:21.880346 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 23:56:21.880354 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 14 23:56:21.880362 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 23:56:21.880370 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 23:56:21.880377 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 23:56:21.880385 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 14 23:56:21.880393 kernel: ... version: 0
Jul 14 23:56:21.880400 kernel: ... bit width: 48
Jul 14 23:56:21.880410 kernel: ... generic registers: 6
Jul 14 23:56:21.880418 kernel: ... value mask: 0000ffffffffffff
Jul 14 23:56:21.880425 kernel: ... max period: 00007fffffffffff
Jul 14 23:56:21.880433 kernel: ... fixed-purpose events: 0
Jul 14 23:56:21.880440 kernel: ... event mask: 000000000000003f
Jul 14 23:56:21.880448 kernel: signal: max sigframe size: 1776
Jul 14 23:56:21.880455 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 23:56:21.880463 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 23:56:21.880471 kernel: smp: Bringing up secondary CPUs ...
Jul 14 23:56:21.880481 kernel: smpboot: x86: Booting SMP configuration:
Jul 14 23:56:21.880488 kernel: .... node #0, CPUs: #1 #2 #3
Jul 14 23:56:21.880496 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 23:56:21.880503 kernel: smpboot: Max logical packages: 1
Jul 14 23:56:21.880511 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 14 23:56:21.880519 kernel: devtmpfs: initialized
Jul 14 23:56:21.880526 kernel: x86/mm: Memory block size: 128MB
Jul 14 23:56:21.880534 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 23:56:21.880542 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 23:56:21.880549 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 23:56:21.880559 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 23:56:21.880567 kernel: audit: initializing netlink subsys (disabled)
Jul 14 23:56:21.880575 kernel: audit: type=2000 audit(1752537381.382:1): state=initialized audit_enabled=0 res=1
Jul 14 23:56:21.880582 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 23:56:21.880590 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 14 23:56:21.880597 kernel: cpuidle: using governor menu
Jul 14 23:56:21.880605 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 23:56:21.880612 kernel: dca service started, version 1.12.1
Jul 14 23:56:21.880635 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 14 23:56:21.880657 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 14 23:56:21.880672 kernel: PCI: Using configuration type 1 for base access
Jul 14 23:56:21.880681 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 14 23:56:21.880689 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 23:56:21.880696 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 23:56:21.880705 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 23:56:21.880712 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 23:56:21.880720 kernel: ACPI: Added _OSI(Module Device)
Jul 14 23:56:21.880730 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 23:56:21.880737 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 23:56:21.880745 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 23:56:21.880753 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 14 23:56:21.880760 kernel: ACPI: Interpreter enabled
Jul 14 23:56:21.880768 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 14 23:56:21.880775 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 14 23:56:21.880783 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 14 23:56:21.880791 kernel: PCI: Using E820 reservations for host bridge windows
Jul 14 23:56:21.880799 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 14 23:56:21.880809 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 23:56:21.880993 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 23:56:21.881126 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 14 23:56:21.881273 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 14 23:56:21.881285 kernel: PCI host bridge to bus 0000:00
Jul 14 23:56:21.881410 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 14 23:56:21.881530 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 14 23:56:21.881644 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 14 23:56:21.881757 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 14 23:56:21.881869 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 14 23:56:21.881981 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 14 23:56:21.882094 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 23:56:21.882257 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 14 23:56:21.882398 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 14 23:56:21.882522 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 14 23:56:21.882649 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 14 23:56:21.882773 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 14 23:56:21.882895 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 14 23:56:21.883027 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 23:56:21.883158 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 14 23:56:21.883465 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 14 23:56:21.883635 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 14 23:56:21.883780 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 14 23:56:21.883909 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 14 23:56:21.884036 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 14 23:56:21.884162 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 14 23:56:21.884366 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 14 23:56:21.884553 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 14 23:56:21.884681 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 14 23:56:21.884808 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 14 23:56:21.884933 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 14 23:56:21.885071 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 14 23:56:21.885199 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 14 23:56:21.885362 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 14 23:56:21.885489 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 14 23:56:21.885613 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 14 23:56:21.885749 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 14 23:56:21.885874 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 14 23:56:21.885886 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 14 23:56:21.885895 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 14 23:56:21.885907 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 14 23:56:21.885915 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 14 23:56:21.885923 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 14 23:56:21.885931 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 14 23:56:21.885939 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 14 23:56:21.885947 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 14 23:56:21.885955 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 14 23:56:21.885963 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 14 23:56:21.885970 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 14 23:56:21.885982 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 14 23:56:21.885990 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 14 23:56:21.885998 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 14 23:56:21.886006 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 14 23:56:21.886014 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 14 23:56:21.886022 kernel: iommu: Default domain type: Translated
Jul 14 23:56:21.886030 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 14 23:56:21.886038 kernel: PCI: Using ACPI for IRQ routing
Jul 14 23:56:21.886046 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 14 23:56:21.886057 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 14 23:56:21.886065 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 14 23:56:21.886191 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 14 23:56:21.886341 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 14 23:56:21.886467 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 14 23:56:21.886478 kernel: vgaarb: loaded
Jul 14 23:56:21.886487 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 14 23:56:21.886495 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 14 23:56:21.886507 kernel: clocksource: Switched to clocksource kvm-clock
Jul 14 23:56:21.886516 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 23:56:21.886525 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 23:56:21.886532 kernel: pnp: PnP ACPI init
Jul 14 23:56:21.886673 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 14 23:56:21.886686 kernel: pnp: PnP ACPI: found 6 devices
Jul 14 23:56:21.886695 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 14 23:56:21.886703 kernel: NET: Registered PF_INET protocol family
Jul 14 23:56:21.886711 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 23:56:21.886723 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 23:56:21.886731 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 23:56:21.886739 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 23:56:21.886747 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 23:56:21.886755 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 23:56:21.886763 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 23:56:21.886771 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 23:56:21.886779 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 23:56:21.886791 kernel: NET: Registered PF_XDP protocol family
Jul 14 23:56:21.886913 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 14 23:56:21.887028 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 14 23:56:21.887142 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 14 23:56:21.887279 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 14 23:56:21.887396 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 14 23:56:21.887510 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 14 23:56:21.887521 kernel: PCI: CLS 0 bytes, default 64
Jul 14 23:56:21.887529 kernel: Initialise system trusted keyrings
Jul 14 23:56:21.887541 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 23:56:21.887549 kernel: Key type asymmetric registered
Jul 14 23:56:21.887557 kernel: Asymmetric key parser 'x509' registered
Jul 14 23:56:21.887565 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 14 23:56:21.887573 kernel: io scheduler mq-deadline registered
Jul 14 23:56:21.887581 kernel: io scheduler kyber registered
Jul 14 23:56:21.887589 kernel: io scheduler bfq registered
Jul 14 23:56:21.887597 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 14 23:56:21.887606 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 14 23:56:21.887616 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 14 23:56:21.887624 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 14 23:56:21.887632 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 23:56:21.887640 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 14 23:56:21.887648 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 14 23:56:21.887656 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 14 23:56:21.887664 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 14 23:56:21.887791 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 14 23:56:21.887806 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 14 23:56:21.887923 kernel: rtc_cmos 00:04: registered as rtc0
Jul 14 23:56:21.888040 kernel: rtc_cmos 00:04: setting system clock to 2025-07-14T23:56:21 UTC (1752537381)
Jul 14 23:56:21.888157 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 14 23:56:21.888168 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 14 23:56:21.888176 kernel: NET: Registered PF_INET6 protocol family
Jul 14 23:56:21.888184 kernel: Segment Routing with IPv6
Jul 14 23:56:21.888192 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 23:56:21.888200 kernel: NET: Registered PF_PACKET protocol family
Jul 14 23:56:21.888211 kernel: Key type dns_resolver registered
Jul 14 23:56:21.888220 kernel: IPI shorthand broadcast: enabled
Jul 14 23:56:21.888228 kernel: sched_clock: Marking stable (536002117, 100512917)->(680842386, -44327352)
Jul 14 23:56:21.888256 kernel: registered taskstats version 1
Jul 14 23:56:21.888264 kernel: Loading compiled-in X.509 certificates
Jul 14 23:56:21.888272 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: bf6496aa5b6cd4d87ec52e2500e1924de07ec31a'
Jul 14 23:56:21.888280 kernel: Key type .fscrypt registered
Jul 14 23:56:21.888288 kernel: Key type fscrypt-provisioning registered
Jul 14 23:56:21.888296 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 23:56:21.888307 kernel: ima: Allocated hash algorithm: sha1
Jul 14 23:56:21.888316 kernel: ima: No architecture policies found
Jul 14 23:56:21.888323 kernel: clk: Disabling unused clocks
Jul 14 23:56:21.888332 kernel: Freeing unused kernel image (initmem) memory: 43492K
Jul 14 23:56:21.888340 kernel: Write protecting the kernel read-only data: 38912k
Jul 14 23:56:21.888348 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jul 14 23:56:21.888356 kernel: Run /init as init process
Jul 14 23:56:21.888364 kernel: with arguments:
Jul 14 23:56:21.888371 kernel: /init
Jul 14 23:56:21.888382 kernel: with environment:
Jul 14 23:56:21.888390 kernel: HOME=/
Jul 14 23:56:21.888398 kernel: TERM=linux
Jul 14 23:56:21.888405 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 23:56:21.888415 systemd[1]: Successfully made /usr/ read-only.
Jul 14 23:56:21.888427 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 14 23:56:21.888436 systemd[1]: Detected virtualization kvm.
Jul 14 23:56:21.888447 systemd[1]: Detected architecture x86-64.
Jul 14 23:56:21.888455 systemd[1]: Running in initrd.
Jul 14 23:56:21.888463 systemd[1]: No hostname configured, using default hostname.
Jul 14 23:56:21.888472 systemd[1]: Hostname set to .
Jul 14 23:56:21.888480 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 23:56:21.888489 systemd[1]: Queued start job for default target initrd.target.
Jul 14 23:56:21.888497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 23:56:21.888506 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 23:56:21.888518 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 23:56:21.888541 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 23:56:21.888552 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 23:56:21.888562 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 23:56:21.888573 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 23:56:21.888584 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 23:56:21.888593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 23:56:21.888601 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 23:56:21.888610 systemd[1]: Reached target paths.target - Path Units.
Jul 14 23:56:21.888619 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 23:56:21.888627 systemd[1]: Reached target swap.target - Swaps.
Jul 14 23:56:21.888636 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 23:56:21.888645 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 23:56:21.888658 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 23:56:21.888676 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 23:56:21.888696 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 14 23:56:21.888718 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 23:56:21.888741 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 23:56:21.888764 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 23:56:21.888787 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 23:56:21.888809 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 14 23:56:21.888828 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 23:56:21.888856 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 14 23:56:21.888878 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 23:56:21.888901 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 23:56:21.888923 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 23:56:21.888949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:56:21.888972 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 14 23:56:21.888994 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 23:56:21.889027 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 23:56:21.889116 systemd-journald[193]: Collecting audit messages is disabled. Jul 14 23:56:21.889183 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 23:56:21.889206 systemd-journald[193]: Journal started Jul 14 23:56:21.889280 systemd-journald[193]: Runtime Journal (/run/log/journal/b44e8167b8694f19b424ad816f8ebac1) is 6M, max 48.4M, 42.3M free. Jul 14 23:56:21.864959 systemd-modules-load[195]: Inserted module 'overlay' Jul 14 23:56:21.901114 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 23:56:21.901130 kernel: Bridge firewalling registered Jul 14 23:56:21.891686 systemd-modules-load[195]: Inserted module 'br_netfilter' Jul 14 23:56:21.902769 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 23:56:21.904100 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 14 23:56:21.906341 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:56:21.908638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 23:56:21.927410 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 23:56:21.930341 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 23:56:21.932867 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 23:56:21.937374 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 23:56:21.942854 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 23:56:21.945287 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 23:56:21.947929 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 23:56:21.950506 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 23:56:21.961452 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 14 23:56:21.963817 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 23:56:21.974406 dracut-cmdline[229]: dracut-dracut-053 Jul 14 23:56:21.977386 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b3329440486f6df07adec8acfff793e63e5f00f2c50d9ad5ef23b1b049ec0ca0 Jul 14 23:56:22.006946 systemd-resolved[230]: Positive Trust Anchors: Jul 14 23:56:22.006960 systemd-resolved[230]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 23:56:22.006990 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 23:56:22.017162 systemd-resolved[230]: Defaulting to hostname 'linux'. Jul 14 23:56:22.018999 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 23:56:22.021192 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 23:56:22.062268 kernel: SCSI subsystem initialized Jul 14 23:56:22.071265 kernel: Loading iSCSI transport class v2.0-870. Jul 14 23:56:22.081265 kernel: iscsi: registered transport (tcp) Jul 14 23:56:22.102267 kernel: iscsi: registered transport (qla4xxx) Jul 14 23:56:22.102287 kernel: QLogic iSCSI HBA Driver Jul 14 23:56:22.151512 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 14 23:56:22.164374 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 14 23:56:22.187335 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
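The dracut-cmdline hook above logs the full set of kernel command line parameters it works from. As a rough illustration (not Flatcar's or dracut's actual parser), key=value parameters of that form can be split like so:

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: value} pairs.

    Bare flags (no '=') map to an empty string. Repeated keys keep the
    last value, which is how most consumers resolve duplicates.
    """
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else ""
    return params

# A subset of the command line reported by dracut-cmdline in the log above:
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200 flatcar.first_boot=detected"
)
params = parse_cmdline(cmdline)
print(params["root"])     # LABEL=ROOT
print(params["console"])  # ttyS0,115200
```

Note that values containing `=` (such as `root=LABEL=ROOT`) survive because only the first `=` in each token is treated as the separator.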
Jul 14 23:56:22.187360 kernel: device-mapper: uevent: version 1.0.3 Jul 14 23:56:22.188312 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 14 23:56:22.229258 kernel: raid6: avx2x4 gen() 30698 MB/s Jul 14 23:56:22.246261 kernel: raid6: avx2x2 gen() 31160 MB/s Jul 14 23:56:22.263283 kernel: raid6: avx2x1 gen() 26084 MB/s Jul 14 23:56:22.263310 kernel: raid6: using algorithm avx2x2 gen() 31160 MB/s Jul 14 23:56:22.281282 kernel: raid6: .... xor() 20036 MB/s, rmw enabled Jul 14 23:56:22.281313 kernel: raid6: using avx2x2 recovery algorithm Jul 14 23:56:22.301264 kernel: xor: automatically using best checksumming function avx Jul 14 23:56:22.448278 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 14 23:56:22.461672 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 14 23:56:22.468419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 23:56:22.482858 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jul 14 23:56:22.488112 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 23:56:22.501387 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 14 23:56:22.514654 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Jul 14 23:56:22.549056 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 23:56:22.560351 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 23:56:22.624827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 23:56:22.630406 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 14 23:56:22.642164 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 14 23:56:22.645738 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 14 23:56:22.646953 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 23:56:22.650351 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 23:56:22.657260 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 23:56:22.660392 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 14 23:56:22.666535 kernel: AVX2 version of gcm_enc/dec engaged. Jul 14 23:56:22.666592 kernel: AES CTR mode by8 optimization enabled Jul 14 23:56:22.672056 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 14 23:56:22.686276 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 14 23:56:22.686788 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 23:56:22.686902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 23:56:22.696785 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 14 23:56:22.689782 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 23:56:22.694019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 23:56:22.703151 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 14 23:56:22.703168 kernel: GPT:9289727 != 19775487 Jul 14 23:56:22.703179 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 14 23:56:22.703189 kernel: GPT:9289727 != 19775487 Jul 14 23:56:22.703199 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 14 23:56:22.703209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 23:56:22.694275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:56:22.704365 kernel: libata version 3.00 loaded. Jul 14 23:56:22.695914 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:56:22.705676 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
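The GPT warnings above (`GPT:9289727 != 19775487`, "Alternate GPT header not at the end of the disk") mean the backup header's recorded location no longer matches the last LBA of the 19775488-sector virtio disk, which is typical when an image is written to a larger device; `disk-uuid.service` later rewrites the headers. A small sketch of the arithmetic the kernel is doing, under the standard GPT layout where the backup header occupies the final sector:

```python
SECTOR_SIZE = 512

def expected_backup_header_lba(total_sectors: int) -> int:
    """The GPT backup header lives in the last addressable sector."""
    return total_sectors - 1

# Values from the virtio-blk probe in the log: 19775488 512-byte blocks.
total_sectors = 19775488
recorded_alt_lba = 9289727  # where the on-disk primary header says the backup is

expected = expected_backup_header_lba(total_sectors)
print(expected)                      # 19775487
print(recorded_alt_lba != expected)  # True -> the mismatch the kernel reports
```

Tools like GNU Parted (which the kernel message suggests) or `sgdisk` repair this by moving the backup structures to the real end of the disk.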
Jul 14 23:56:22.712273 kernel: ahci 0000:00:1f.2: version 3.0 Jul 14 23:56:22.713260 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 14 23:56:22.715779 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 14 23:56:22.716026 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 14 23:56:22.718297 kernel: scsi host0: ahci Jul 14 23:56:22.720953 kernel: scsi host1: ahci Jul 14 23:56:22.721130 kernel: scsi host2: ahci Jul 14 23:56:22.724298 kernel: scsi host3: ahci Jul 14 23:56:22.726255 kernel: scsi host4: ahci Jul 14 23:56:22.728254 kernel: BTRFS: device fsid 0f48c447-00ea-47e7-98df-4bdb6058b27c devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (471) Jul 14 23:56:22.731252 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (458) Jul 14 23:56:22.733252 kernel: scsi host5: ahci Jul 14 23:56:22.733517 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 14 23:56:22.733529 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 14 23:56:22.733539 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 14 23:56:22.733549 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 14 23:56:22.733563 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 14 23:56:22.733573 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 14 23:56:22.754829 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 14 23:56:22.773328 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 14 23:56:22.773401 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 14 23:56:22.773923 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 14 23:56:22.783301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 23:56:22.792984 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 14 23:56:22.807363 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 14 23:56:22.809807 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 23:56:22.817209 disk-uuid[568]: Primary Header is updated. Jul 14 23:56:22.817209 disk-uuid[568]: Secondary Entries is updated. Jul 14 23:56:22.817209 disk-uuid[568]: Secondary Header is updated. Jul 14 23:56:22.820551 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 23:56:22.824258 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 23:56:22.839910 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 23:56:23.040355 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 14 23:56:23.040431 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 14 23:56:23.040443 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 14 23:56:23.040454 kernel: ata3.00: applying bridge limits Jul 14 23:56:23.040464 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 14 23:56:23.041781 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 14 23:56:23.041845 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 14 23:56:23.043262 kernel: ata3.00: configured for UDMA/100 Jul 14 23:56:23.049265 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 14 23:56:23.049311 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 14 23:56:23.088719 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 14 23:56:23.088943 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 14 23:56:23.101266 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 14 23:56:23.826707 disk-uuid[570]: The operation has completed 
successfully. Jul 14 23:56:23.827866 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 23:56:23.857568 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 23:56:23.857683 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 14 23:56:23.902416 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 14 23:56:23.907572 sh[594]: Success Jul 14 23:56:23.919288 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 14 23:56:23.953818 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 14 23:56:23.966686 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 14 23:56:23.969316 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 14 23:56:23.979677 kernel: BTRFS info (device dm-0): first mount of filesystem 0f48c447-00ea-47e7-98df-4bdb6058b27c Jul 14 23:56:23.979707 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:56:23.979718 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 14 23:56:23.980645 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 14 23:56:23.981345 kernel: BTRFS info (device dm-0): using free space tree Jul 14 23:56:23.986719 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 14 23:56:23.987376 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 14 23:56:23.988175 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 14 23:56:23.992049 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
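`verity-setup.service` above validates `/dev/mapper/usr` against the `verity.usrhash` root hash from the kernel command line; the `verity: sha256 using implementation "sha256-ni"` line names the digest in use. A simplified sketch of the per-block hashing dm-verity builds its hash tree from — the real on-disk format also mixes in a per-device salt (whose position depends on the superblock format version) and assembles the block digests into a Merkle tree, so this only illustrates digest width and block granularity:

```python
import hashlib

BLOCK_SIZE = 4096  # dm-verity's default data/hash block size

def block_digest(block: bytes, salt: bytes = b"") -> str:
    """Hash one zero-padded data block with an optional salt (simplified)."""
    padded = block.ljust(BLOCK_SIZE, b"\x00")
    return hashlib.sha256(salt + padded).hexdigest()

# The root hash on the command line is a 64-hex-character SHA-256 digest,
# the same width as each per-block digest in the tree:
usrhash = "b3329440486f6df07adec8acfff793e63e5f00f2c50d9ad5ef23b1b049ec0ca0"
print(len(usrhash))                   # 64
print(len(block_digest(b"example")))  # 64
```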
Jul 14 23:56:24.008976 kernel: BTRFS info (device vda6): first mount of filesystem 59c6f3f1-8270-4370-81df-d46ae9629c2e Jul 14 23:56:24.009006 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:56:24.009018 kernel: BTRFS info (device vda6): using free space tree Jul 14 23:56:24.012283 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 23:56:24.017285 kernel: BTRFS info (device vda6): last unmount of filesystem 59c6f3f1-8270-4370-81df-d46ae9629c2e Jul 14 23:56:24.023204 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 14 23:56:24.032387 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 14 23:56:24.083795 ignition[681]: Ignition 2.20.0 Jul 14 23:56:24.083809 ignition[681]: Stage: fetch-offline Jul 14 23:56:24.083849 ignition[681]: no configs at "/usr/lib/ignition/base.d" Jul 14 23:56:24.083861 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 23:56:24.083968 ignition[681]: parsed url from cmdline: "" Jul 14 23:56:24.083971 ignition[681]: no config URL provided Jul 14 23:56:24.083977 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 23:56:24.083987 ignition[681]: no config at "/usr/lib/ignition/user.ign" Jul 14 23:56:24.084016 ignition[681]: op(1): [started] loading QEMU firmware config module Jul 14 23:56:24.084022 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 14 23:56:24.092716 ignition[681]: op(1): [finished] loading QEMU firmware config module Jul 14 23:56:24.116556 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 23:56:24.131383 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 14 23:56:24.132800 ignition[681]: parsing config with SHA512: 3d7dd8a3fe35f497f3d9b4c0a966cdb6e57992573c32333f585eda55654d5f9083fea2494bd77a1ce44362d74af977ee0d9a5ab5663ed17cd8af7d5a08c6aef3 Jul 14 23:56:24.137044 unknown[681]: fetched base config from "system" Jul 14 23:56:24.137062 unknown[681]: fetched user config from "qemu" Jul 14 23:56:24.137844 ignition[681]: fetch-offline: fetch-offline passed Jul 14 23:56:24.137941 ignition[681]: Ignition finished successfully Jul 14 23:56:24.140213 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 23:56:24.159595 systemd-networkd[781]: lo: Link UP Jul 14 23:56:24.159606 systemd-networkd[781]: lo: Gained carrier Jul 14 23:56:24.161274 systemd-networkd[781]: Enumeration completed Jul 14 23:56:24.161403 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 23:56:24.161619 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 23:56:24.161624 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 23:56:24.162466 systemd-networkd[781]: eth0: Link UP Jul 14 23:56:24.162470 systemd-networkd[781]: eth0: Gained carrier Jul 14 23:56:24.162477 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 23:56:24.163415 systemd[1]: Reached target network.target - Network. Jul 14 23:56:24.165298 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 23:56:24.173307 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 23:56:24.173372 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
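systemd-networkd above brings eth0 up with `DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1`. For illustration, the stdlib `ipaddress` module can derive the on-link network that lease places the host in:

```python
import ipaddress

# Address/prefix and gateway as reported by systemd-networkd in the log above.
iface = ipaddress.ip_interface("10.0.0.18/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True: the gateway is directly reachable
```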
Jul 14 23:56:24.190515 ignition[785]: Ignition 2.20.0 Jul 14 23:56:24.190527 ignition[785]: Stage: kargs Jul 14 23:56:24.190690 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jul 14 23:56:24.190701 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 23:56:24.191503 ignition[785]: kargs: kargs passed Jul 14 23:56:24.191546 ignition[785]: Ignition finished successfully Jul 14 23:56:24.194997 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 14 23:56:24.204503 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 14 23:56:24.215320 ignition[795]: Ignition 2.20.0 Jul 14 23:56:24.215331 ignition[795]: Stage: disks Jul 14 23:56:24.215474 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jul 14 23:56:24.215485 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 23:56:24.218275 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 14 23:56:24.216271 ignition[795]: disks: disks passed Jul 14 23:56:24.219677 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 14 23:56:24.216311 ignition[795]: Ignition finished successfully Jul 14 23:56:24.221417 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 14 23:56:24.223134 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 23:56:24.223212 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 23:56:24.223538 systemd[1]: Reached target basic.target - Basic System. Jul 14 23:56:24.229364 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 14 23:56:24.241274 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 14 23:56:24.247442 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 14 23:56:24.261339 systemd[1]: Mounting sysroot.mount - /sysroot... 
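Each Ignition stage above (fetch-offline, kargs, disks) reports a `<stage>: <stage> passed` status line before "Ignition finished successfully". A toy scanner for those status lines, not Ignition's own tooling, just a sketch of the journal pattern seen here:

```python
import re

# Matches e.g. 'ignition[795]: disks: disks passed' from the journal above;
# the 'INFO : ' prefix appears once journald forwarding is active.
STAGE_RE = re.compile(r"ignition\[\d+\]: (?:INFO : )?([\w-]+): \1 passed")

def passed_stages(lines):
    """Collect the Ignition stages that logged a 'passed' status."""
    return [m.group(1) for line in lines if (m := STAGE_RE.search(line))]

log = [
    "ignition[681]: fetch-offline: fetch-offline passed",
    "ignition[785]: kargs: kargs passed",
    "ignition[795]: disks: disks passed",
]
print(passed_stages(log))  # ['fetch-offline', 'kargs', 'disks']
```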
Jul 14 23:56:24.344258 kernel: EXT4-fs (vda9): mounted filesystem e62201b2-5386-4e48-beed-7080f52a14be r/w with ordered data mode. Quota mode: none. Jul 14 23:56:24.345006 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 14 23:56:24.345585 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 14 23:56:24.354304 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 23:56:24.356158 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 14 23:56:24.356518 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 14 23:56:24.356559 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 23:56:24.356583 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 23:56:24.366196 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (813) Jul 14 23:56:24.363445 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 14 23:56:24.370584 kernel: BTRFS info (device vda6): first mount of filesystem 59c6f3f1-8270-4370-81df-d46ae9629c2e Jul 14 23:56:24.370600 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:56:24.370611 kernel: BTRFS info (device vda6): using free space tree Jul 14 23:56:24.366953 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 14 23:56:24.373270 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 23:56:24.374396 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 14 23:56:24.403314 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 23:56:24.408220 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Jul 14 23:56:24.412882 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 23:56:24.417201 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 23:56:24.502333 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 14 23:56:24.513358 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 14 23:56:24.514865 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 14 23:56:24.521262 kernel: BTRFS info (device vda6): last unmount of filesystem 59c6f3f1-8270-4370-81df-d46ae9629c2e Jul 14 23:56:24.537679 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 14 23:56:24.539594 ignition[925]: INFO : Ignition 2.20.0 Jul 14 23:56:24.539594 ignition[925]: INFO : Stage: mount Jul 14 23:56:24.541142 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 23:56:24.541142 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 23:56:24.541142 ignition[925]: INFO : mount: mount passed Jul 14 23:56:24.541142 ignition[925]: INFO : Ignition finished successfully Jul 14 23:56:24.542677 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 14 23:56:24.548391 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 14 23:56:24.979394 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 14 23:56:24.996399 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
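The `cut: /sysroot/etc/passwd: No such file or directory` messages above come from `initrd-setup-root` trying to extract fields from account databases that do not yet exist on a first boot (`flatcar.first_boot=detected` on the command line); the service then seeds them. As a rough illustration of the colon-separated format being cut, with a hypothetical entry for the `core` user Ignition configures later:

```python
def passwd_fields(line: str) -> dict:
    """Split one /etc/passwd record into its seven named fields."""
    names = ["name", "passwd", "uid", "gid", "gecos", "home", "shell"]
    return dict(zip(names, line.rstrip("\n").split(":")))

# Hypothetical record; the actual UID/GID and shell are not shown in the log.
entry = passwd_fields("core:x:500:500:CoreOS Admin:/home/core:/bin/bash")
print(entry["name"], entry["uid"], entry["shell"])  # core 500 /bin/bash
```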
Jul 14 23:56:25.003693 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (940) Jul 14 23:56:25.003729 kernel: BTRFS info (device vda6): first mount of filesystem 59c6f3f1-8270-4370-81df-d46ae9629c2e Jul 14 23:56:25.003744 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:56:25.005270 kernel: BTRFS info (device vda6): using free space tree Jul 14 23:56:25.008269 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 23:56:25.009315 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 23:56:25.043759 ignition[957]: INFO : Ignition 2.20.0 Jul 14 23:56:25.043759 ignition[957]: INFO : Stage: files Jul 14 23:56:25.045946 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 23:56:25.045946 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 23:56:25.045946 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jul 14 23:56:25.045946 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 23:56:25.045946 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 23:56:25.053104 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 23:56:25.053104 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 23:56:25.053104 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 23:56:25.053104 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 23:56:25.053104 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 14 23:56:25.049884 unknown[957]: wrote ssh authorized keys file for user: core Jul 14 23:56:25.150012 
ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 14 23:56:25.289575 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 23:56:25.289575 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 14 23:56:25.293218 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 14 23:56:25.712382 systemd-networkd[781]: eth0: Gained IPv6LL Jul 14 23:56:25.773506 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 14 23:56:25.869584 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml"
Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 23:56:25.871563 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 14 23:56:26.268615 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 14 23:56:26.635748 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 23:56:26.635748 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 14 23:56:26.639551 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 23:56:26.639551 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 23:56:26.639551 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 14 23:56:26.639551 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 14 23:56:26.639551 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 23:56:26.639551 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 23:56:26.639551 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 14 23:56:26.639551 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 23:56:26.657361 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 23:56:26.660916 ignition[957]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 23:56:26.662442 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 23:56:26.662442 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 23:56:26.662442 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 23:56:26.662442 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 23:56:26.662442 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 23:56:26.662442 ignition[957]: INFO : files: files passed
Jul 14 23:56:26.662442 ignition[957]: INFO : Ignition finished successfully
Jul 14 23:56:26.664010 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 23:56:26.679363 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 14 23:56:26.681982 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 14 23:56:26.683759 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 23:56:26.683863 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 23:56:26.691187 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 14 23:56:26.693805 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 23:56:26.693805 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 23:56:26.696903 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 23:56:26.699335 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 23:56:26.699991 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 14 23:56:26.712401 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 14 23:56:26.736768 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 23:56:26.736888 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 14 23:56:26.737993 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 23:56:26.740029 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 23:56:26.740540 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 14 23:56:26.741294 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 23:56:26.760602 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 23:56:26.780341 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 23:56:26.789121 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 14 23:56:26.789278 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 23:56:26.791363 systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 23:56:26.791661 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 23:56:26.791766 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 23:56:26.797881 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 23:56:26.798016 systemd[1]: Stopped target basic.target - Basic System.
Jul 14 23:56:26.800638 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 14 23:56:26.801487 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 23:56:26.801797 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 14 23:56:26.802162 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 14 23:56:26.802630 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 23:56:26.802952 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 14 23:56:26.803285 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 14 23:56:26.803742 systemd[1]: Stopped target swap.target - Swaps.
Jul 14 23:56:26.804036 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 23:56:26.804152 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 23:56:26.818543 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 14 23:56:26.818677 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 23:56:26.820583 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 14 23:56:26.822652 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 23:56:26.825875 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 14 23:56:26.825988 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 14 23:56:26.828695 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 14 23:56:26.828813 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 23:56:26.829858 systemd[1]: Stopped target paths.target - Path Units.
Jul 14 23:56:26.830089 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 14 23:56:26.836296 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 23:56:26.836444 systemd[1]: Stopped target slices.target - Slice Units.
Jul 14 23:56:26.838879 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 14 23:56:26.839204 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 14 23:56:26.839310 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 23:56:26.839700 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 14 23:56:26.839779 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 23:56:26.843601 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 14 23:56:26.843709 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 23:56:26.845340 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 14 23:56:26.845444 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 14 23:56:26.857349 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 14 23:56:26.857417 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 14 23:56:26.857522 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 23:56:26.858528 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 14 23:56:26.859052 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 14 23:56:26.859173 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 23:56:26.859808 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 14 23:56:26.859904 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 23:56:26.865021 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 14 23:56:26.865144 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 14 23:56:26.883483 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 14 23:56:26.885420 ignition[1012]: INFO : Ignition 2.20.0
Jul 14 23:56:26.885420 ignition[1012]: INFO : Stage: umount
Jul 14 23:56:26.887093 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 23:56:26.887093 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 23:56:26.889715 ignition[1012]: INFO : umount: umount passed
Jul 14 23:56:26.890579 ignition[1012]: INFO : Ignition finished successfully
Jul 14 23:56:26.893587 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 14 23:56:26.893718 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 14 23:56:26.896484 systemd[1]: Stopped target network.target - Network.
Jul 14 23:56:26.896553 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 14 23:56:26.896603 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 14 23:56:26.898193 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 14 23:56:26.898257 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 14 23:56:26.899944 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 14 23:56:26.899994 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 14 23:56:26.903463 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 14 23:56:26.903514 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 14 23:56:26.904452 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 14 23:56:26.904769 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 14 23:56:26.913287 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 14 23:56:26.913438 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 14 23:56:26.917325 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 14 23:56:26.917638 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 14 23:56:26.917686 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 23:56:26.921462 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 14 23:56:26.933035 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 14 23:56:26.933171 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 14 23:56:26.936815 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 14 23:56:26.937012 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 14 23:56:26.937052 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 23:56:26.947411 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 14 23:56:26.948303 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 14 23:56:26.948368 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 23:56:26.949706 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 23:56:26.949754 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 23:56:26.951781 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 14 23:56:26.951829 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 14 23:56:26.952925 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 23:56:26.956203 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 14 23:56:26.962547 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 14 23:56:26.962662 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 14 23:56:26.967000 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 14 23:56:26.967187 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 23:56:26.969480 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 14 23:56:26.969530 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 14 23:56:26.971453 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 14 23:56:26.971490 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 23:56:26.973357 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 14 23:56:26.973406 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 23:56:26.975419 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 14 23:56:26.975465 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 14 23:56:26.977320 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 23:56:26.977368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 23:56:26.986359 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 14 23:56:26.987413 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 14 23:56:26.987464 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 23:56:26.989718 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 23:56:26.989765 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 23:56:26.995090 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 14 23:56:26.995215 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 14 23:56:27.036422 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 14 23:56:27.036559 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 14 23:56:27.038821 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 14 23:56:27.040059 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 14 23:56:27.040130 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 14 23:56:27.050360 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 14 23:56:27.058060 systemd[1]: Switching root.
Jul 14 23:56:27.093530 systemd-journald[193]: Journal stopped
Jul 14 23:56:28.218502 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jul 14 23:56:28.218564 kernel: SELinux: policy capability network_peer_controls=1
Jul 14 23:56:28.218586 kernel: SELinux: policy capability open_perms=1
Jul 14 23:56:28.218600 kernel: SELinux: policy capability extended_socket_class=1
Jul 14 23:56:28.218614 kernel: SELinux: policy capability always_check_network=0
Jul 14 23:56:28.218632 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 14 23:56:28.218644 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 14 23:56:28.218656 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 14 23:56:28.218674 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 14 23:56:28.218688 kernel: audit: type=1403 audit(1752537387.440:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 14 23:56:28.218701 systemd[1]: Successfully loaded SELinux policy in 41.417ms.
Jul 14 23:56:28.218725 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.589ms.
Jul 14 23:56:28.218741 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 14 23:56:28.218755 systemd[1]: Detected virtualization kvm.
Jul 14 23:56:28.218767 systemd[1]: Detected architecture x86-64.
Jul 14 23:56:28.218780 systemd[1]: Detected first boot.
Jul 14 23:56:28.218792 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 23:56:28.218804 zram_generator::config[1058]: No configuration found.
Jul 14 23:56:28.218817 kernel: Guest personality initialized and is inactive
Jul 14 23:56:28.218829 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 14 23:56:28.218845 kernel: Initialized host personality
Jul 14 23:56:28.218860 kernel: NET: Registered PF_VSOCK protocol family
Jul 14 23:56:28.218872 systemd[1]: Populated /etc with preset unit settings.
Jul 14 23:56:28.218885 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 14 23:56:28.218897 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 14 23:56:28.218914 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 14 23:56:28.218926 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 14 23:56:28.218939 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 14 23:56:28.218951 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 14 23:56:28.218966 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 14 23:56:28.218978 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 14 23:56:28.218990 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 14 23:56:28.219004 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 14 23:56:28.219017 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 14 23:56:28.219030 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 14 23:56:28.219042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 23:56:28.219055 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 23:56:28.219067 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 14 23:56:28.219091 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 14 23:56:28.219104 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 14 23:56:28.219117 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 23:56:28.219129 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 14 23:56:28.219142 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 23:56:28.219154 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 14 23:56:28.219167 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 14 23:56:28.219179 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 14 23:56:28.219193 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 14 23:56:28.219206 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 23:56:28.219218 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 23:56:28.219921 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 23:56:28.219938 systemd[1]: Reached target swap.target - Swaps.
Jul 14 23:56:28.219950 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 14 23:56:28.219963 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 14 23:56:28.219975 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 14 23:56:28.219988 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 23:56:28.220004 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 23:56:28.220017 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 23:56:28.220029 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 14 23:56:28.220042 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 14 23:56:28.220055 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 14 23:56:28.220067 systemd[1]: Mounting media.mount - External Media Directory...
Jul 14 23:56:28.220088 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:56:28.220101 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 14 23:56:28.220113 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 14 23:56:28.220128 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 14 23:56:28.220141 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 14 23:56:28.220154 systemd[1]: Reached target machines.target - Containers.
Jul 14 23:56:28.220166 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 14 23:56:28.220179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 23:56:28.220192 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 23:56:28.220205 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 14 23:56:28.220217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 23:56:28.220243 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 23:56:28.220256 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 23:56:28.220269 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 14 23:56:28.220282 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 23:56:28.220294 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 14 23:56:28.220307 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 14 23:56:28.220319 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 14 23:56:28.220331 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 14 23:56:28.220346 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 14 23:56:28.220360 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 14 23:56:28.220373 kernel: loop: module loaded
Jul 14 23:56:28.220384 kernel: fuse: init (API version 7.39)
Jul 14 23:56:28.220396 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 23:56:28.220408 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 23:56:28.220421 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 14 23:56:28.220433 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 14 23:56:28.220446 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 14 23:56:28.220460 kernel: ACPI: bus type drm_connector registered
Jul 14 23:56:28.220491 systemd-journald[1133]: Collecting audit messages is disabled.
Jul 14 23:56:28.220521 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 23:56:28.220534 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 14 23:56:28.220547 systemd[1]: Stopped verity-setup.service.
Jul 14 23:56:28.220560 systemd-journald[1133]: Journal started
Jul 14 23:56:28.220584 systemd-journald[1133]: Runtime Journal (/run/log/journal/b44e8167b8694f19b424ad816f8ebac1) is 6M, max 48.4M, 42.3M free.
Jul 14 23:56:28.004753 systemd[1]: Queued start job for default target multi-user.target.
Jul 14 23:56:28.016731 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 14 23:56:28.017196 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 14 23:56:28.223260 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:56:28.229368 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 23:56:28.230340 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 14 23:56:28.231453 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 14 23:56:28.232597 systemd[1]: Mounted media.mount - External Media Directory.
Jul 14 23:56:28.233665 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 14 23:56:28.234794 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 14 23:56:28.235943 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 14 23:56:28.237171 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 14 23:56:28.238591 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 23:56:28.240053 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 14 23:56:28.240283 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 14 23:56:28.241715 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 23:56:28.241927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 23:56:28.243310 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 23:56:28.243517 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 23:56:28.244805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 23:56:28.245010 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 23:56:28.246476 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 14 23:56:28.246687 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 14 23:56:28.247984 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 23:56:28.248198 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 23:56:28.249633 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 23:56:28.250992 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 14 23:56:28.252508 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 14 23:56:28.253990 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 14 23:56:28.266447 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 14 23:56:28.277314 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 14 23:56:28.279469 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 14 23:56:28.280667 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 14 23:56:28.280698 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 23:56:28.282624 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 14 23:56:28.284816 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 14 23:56:28.286883 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 14 23:56:28.287959 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 23:56:28.289125 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 14 23:56:28.292607 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 14 23:56:28.294328 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 23:56:28.297432 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 14 23:56:28.298772 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 23:56:28.300967 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 23:56:28.301866 systemd-journald[1133]: Time spent on flushing to /var/log/journal/b44e8167b8694f19b424ad816f8ebac1 is 21.650ms for 963 entries.
Jul 14 23:56:28.301866 systemd-journald[1133]: System Journal (/var/log/journal/b44e8167b8694f19b424ad816f8ebac1) is 8M, max 195.6M, 187.6M free.
Jul 14 23:56:28.332638 systemd-journald[1133]: Received client request to flush runtime journal.
Jul 14 23:56:28.332683 kernel: loop0: detected capacity change from 0 to 221472
Jul 14 23:56:28.305440 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 14 23:56:28.308416 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 14 23:56:28.312389 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 14 23:56:28.314484 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 14 23:56:28.316156 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 14 23:56:28.330928 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 14 23:56:28.334019 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 14 23:56:28.336278 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 14 23:56:28.346741 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 14 23:56:28.348525 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 23:56:28.362274 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 14 23:56:28.364839 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 23:56:28.374575 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 14 23:56:28.377015 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 14 23:56:28.379310 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 14 23:56:28.380934 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 14 23:56:28.392555 kernel: loop1: detected capacity change from 0 to 147912
Jul 14 23:56:28.392493 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 23:56:28.396442 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 14 23:56:28.416738 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Jul 14 23:56:28.416756 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Jul 14 23:56:28.423031 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 23:56:28.428268 kernel: loop2: detected capacity change from 0 to 138176
Jul 14 23:56:28.467367 kernel: loop3: detected capacity change from 0 to 221472
Jul 14 23:56:28.476802 kernel: loop4: detected capacity change from 0 to 147912
Jul 14 23:56:28.488257 kernel: loop5: detected capacity change from 0 to 138176
Jul 14 23:56:28.498699 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 14 23:56:28.499310 (sd-merge)[1203]: Merged extensions into '/usr'.
Jul 14 23:56:28.503713 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 14 23:56:28.503827 systemd[1]: Reloading...
Jul 14 23:56:28.580260 zram_generator::config[1234]: No configuration found.
Jul 14 23:56:28.609365 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 14 23:56:28.689844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 23:56:28.754714 systemd[1]: Reloading finished in 250 ms.
Jul 14 23:56:28.774061 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 14 23:56:28.775702 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 14 23:56:28.795738 systemd[1]: Starting ensure-sysext.service...
Jul 14 23:56:28.797674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 23:56:28.812399 systemd[1]: Reload requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
Jul 14 23:56:28.812419 systemd[1]: Reloading...
Jul 14 23:56:28.820487 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 14 23:56:28.820759 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 14 23:56:28.822130 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 14 23:56:28.822480 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Jul 14 23:56:28.822616 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Jul 14 23:56:28.826455 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 23:56:28.826535 systemd-tmpfiles[1269]: Skipping /boot
Jul 14 23:56:28.839619 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 23:56:28.839703 systemd-tmpfiles[1269]: Skipping /boot
Jul 14 23:56:28.870260 zram_generator::config[1298]: No configuration found.
Jul 14 23:56:28.987397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 23:56:29.053024 systemd[1]: Reloading finished in 240 ms.
Jul 14 23:56:29.065774 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 14 23:56:29.086767 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 23:56:29.095277 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 14 23:56:29.097534 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 14 23:56:29.100365 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 14 23:56:29.104102 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 23:56:29.109320 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 23:56:29.113296 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 14 23:56:29.118324 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:56:29.118496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 23:56:29.120359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 23:56:29.124336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 23:56:29.126808 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 23:56:29.127930 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 23:56:29.128030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 14 23:56:29.131333 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 14 23:56:29.132524 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:56:29.133785 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 23:56:29.134007 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 23:56:29.135779 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 23:56:29.135988 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 23:56:29.139107 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 14 23:56:29.142934 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 23:56:29.143183 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 23:56:29.152200 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:56:29.153427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 23:56:29.156722 systemd-udevd[1342]: Using default interface naming scheme 'v255'.
Jul 14 23:56:29.158552 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 23:56:29.158740 augenrules[1370]: No rules
Jul 14 23:56:29.162172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 23:56:29.166507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 23:56:29.168156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 23:56:29.168404 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 14 23:56:29.171375 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 14 23:56:29.172633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:56:29.174406 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 14 23:56:29.174655 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 14 23:56:29.176446 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 14 23:56:29.178510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 23:56:29.178731 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 23:56:29.180538 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 14 23:56:29.182443 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 23:56:29.182659 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 23:56:29.187557 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 23:56:29.187795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 23:56:29.189678 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 14 23:56:29.199313 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 23:56:29.203660 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:56:29.210480 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 14 23:56:29.211558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 23:56:29.214475 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 23:56:29.217658 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 23:56:29.220182 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 23:56:29.222675 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 23:56:29.223870 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 23:56:29.224193 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 14 23:56:29.230290 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 23:56:29.231314 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:56:29.233931 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 14 23:56:29.235748 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 23:56:29.235956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 23:56:29.237641 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 23:56:29.237856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 23:56:29.250645 systemd[1]: Finished ensure-sysext.service.
Jul 14 23:56:29.254269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 23:56:29.254496 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 23:56:29.264803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 23:56:29.265034 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 23:56:29.269264 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1396)
Jul 14 23:56:29.269712 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 23:56:29.269785 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 23:56:29.270424 augenrules[1395]: /sbin/augenrules: No change
Jul 14 23:56:29.280559 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 14 23:56:29.282288 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 23:56:29.283798 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 14 23:56:29.285564 augenrules[1440]: No rules
Jul 14 23:56:29.288481 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 14 23:56:29.289194 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 14 23:56:29.293382 systemd-resolved[1340]: Positive Trust Anchors:
Jul 14 23:56:29.293767 systemd-resolved[1340]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 23:56:29.293902 systemd-resolved[1340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 23:56:29.298261 systemd-resolved[1340]: Defaulting to hostname 'linux'.
Jul 14 23:56:29.301631 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 23:56:29.303352 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 23:56:29.339550 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 23:56:29.348426 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 14 23:56:29.354573 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 14 23:56:29.362271 kernel: ACPI: button: Power Button [PWRF]
Jul 14 23:56:29.362639 systemd-networkd[1414]: lo: Link UP
Jul 14 23:56:29.362649 systemd-networkd[1414]: lo: Gained carrier
Jul 14 23:56:29.363149 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 14 23:56:29.364553 systemd-networkd[1414]: Enumeration completed
Jul 14 23:56:29.364684 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 23:56:29.364926 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 23:56:29.364931 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 23:56:29.365889 systemd-networkd[1414]: eth0: Link UP
Jul 14 23:56:29.365897 systemd-networkd[1414]: eth0: Gained carrier
Jul 14 23:56:29.365910 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 23:56:29.366423 systemd[1]: Reached target network.target - Network.
Jul 14 23:56:29.373402 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 14 23:56:29.375998 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 14 23:56:29.378223 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 14 23:56:29.379655 systemd[1]: Reached target time-set.target - System Time Set.
Jul 14 23:56:29.380319 systemd-networkd[1414]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 23:56:29.381103 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection.
Jul 14 23:56:29.830641 systemd-resolved[1340]: Clock change detected. Flushing caches.
Jul 14 23:56:29.830741 systemd-timesyncd[1434]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 14 23:56:29.830797 systemd-timesyncd[1434]: Initial clock synchronization to Mon 2025-07-14 23:56:29.830559 UTC.
Jul 14 23:56:29.838534 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 14 23:56:29.839774 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 14 23:56:29.840029 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 14 23:56:29.843544 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 14 23:56:29.862030 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 14 23:56:29.926598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 23:56:29.949052 kernel: mousedev: PS/2 mouse device common for all mice
Jul 14 23:56:29.955322 kernel: kvm_amd: TSC scaling supported
Jul 14 23:56:29.955369 kernel: kvm_amd: Nested Virtualization enabled
Jul 14 23:56:29.955382 kernel: kvm_amd: Nested Paging enabled
Jul 14 23:56:29.956514 kernel: kvm_amd: LBR virtualization supported
Jul 14 23:56:29.956535 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 14 23:56:29.957102 kernel: kvm_amd: Virtual GIF supported
Jul 14 23:56:29.979048 kernel: EDAC MC: Ver: 3.0.0
Jul 14 23:56:30.009524 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 14 23:56:30.038275 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 14 23:56:30.040334 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 23:56:30.046576 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 23:56:30.084460 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 14 23:56:30.086174 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 23:56:30.087304 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 23:56:30.088464 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 14 23:56:30.089680 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 14 23:56:30.091128 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 14 23:56:30.092495 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 14 23:56:30.093750 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 14 23:56:30.094947 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 14 23:56:30.094976 systemd[1]: Reached target paths.target - Path Units.
Jul 14 23:56:30.095855 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 23:56:30.097634 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 14 23:56:30.100459 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 14 23:56:30.103978 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 14 23:56:30.105415 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 14 23:56:30.106657 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 14 23:56:30.111461 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 14 23:56:30.113201 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 14 23:56:30.115866 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 14 23:56:30.117515 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 14 23:56:30.118671 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 23:56:30.119607 systemd[1]: Reached target basic.target - Basic System.
Jul 14 23:56:30.120558 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 14 23:56:30.120587 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 14 23:56:30.121683 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 14 23:56:30.123878 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 14 23:56:30.127071 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 23:56:30.127507 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 14 23:56:30.132223 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 14 23:56:30.133287 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 14 23:56:30.135776 jq[1480]: false
Jul 14 23:56:30.136271 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 14 23:56:30.140757 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 14 23:56:30.146191 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 14 23:56:30.149873 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 14 23:56:30.157266 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 14 23:56:30.159171 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 14 23:56:30.159741 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 14 23:56:30.162349 dbus-daemon[1479]: [system] SELinux support is enabled
Jul 14 23:56:30.163054 extend-filesystems[1481]: Found loop3
Jul 14 23:56:30.164111 extend-filesystems[1481]: Found loop4
Jul 14 23:56:30.164111 extend-filesystems[1481]: Found loop5
Jul 14 23:56:30.164111 extend-filesystems[1481]: Found sr0
Jul 14 23:56:30.164111 extend-filesystems[1481]: Found vda
Jul 14 23:56:30.164111 extend-filesystems[1481]: Found vda1
Jul 14 23:56:30.164111 extend-filesystems[1481]: Found vda2
Jul 14 23:56:30.164111 extend-filesystems[1481]: Found vda3
Jul 14 23:56:30.164111 extend-filesystems[1481]: Found usr
Jul 14 23:56:30.164111 extend-filesystems[1481]: Found vda4
Jul 14 23:56:30.164111 extend-filesystems[1481]: Found vda6
Jul 14 23:56:30.164111 extend-filesystems[1481]: Found vda7
Jul 14 23:56:30.164111 extend-filesystems[1481]: Found vda9
Jul 14 23:56:30.164111 extend-filesystems[1481]: Checking size of /dev/vda9
Jul 14 23:56:30.189735 extend-filesystems[1481]: Resized partition /dev/vda9
Jul 14 23:56:30.164238 systemd[1]: Starting update-engine.service - Update Engine...
Jul 14 23:56:30.171263 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 14 23:56:30.172651 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 14 23:56:30.177653 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 14 23:56:30.180396 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 14 23:56:30.180674 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 14 23:56:30.181037 systemd[1]: motdgen.service: Deactivated successfully.
Jul 14 23:56:30.181544 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 14 23:56:30.184886 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 14 23:56:30.185151 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 14 23:56:30.193952 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 14 23:56:30.193986 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 14 23:56:30.196810 jq[1497]: true
Jul 14 23:56:30.197323 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 14 23:56:30.197345 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 14 23:56:30.204091 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024)
Jul 14 23:56:30.215393 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 14 23:56:30.221303 update_engine[1493]: I20250714 23:56:30.221210 1493 main.cc:92] Flatcar Update Engine starting
Jul 14 23:56:30.223076 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 14 23:56:30.223115 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1406)
Jul 14 23:56:30.224547 update_engine[1493]: I20250714 23:56:30.224514 1493 update_check_scheduler.cc:74] Next update check in 3m16s
Jul 14 23:56:30.225984 systemd[1]: Started update-engine.service - Update Engine.
Jul 14 23:56:30.230742 tar[1502]: linux-amd64/helm
Jul 14 23:56:30.241337 jq[1511]: true
Jul 14 23:56:30.244216 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 14 23:56:30.258349 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 14 23:56:30.284500 systemd-logind[1492]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 14 23:56:30.284532 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 14 23:56:30.286133 systemd-logind[1492]: New seat seat0.
Jul 14 23:56:30.286559 extend-filesystems[1507]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 14 23:56:30.286559 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 14 23:56:30.286559 extend-filesystems[1507]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 14 23:56:30.296930 extend-filesystems[1481]: Resized filesystem in /dev/vda9
Jul 14 23:56:30.287623 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 14 23:56:30.288807 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 14 23:56:30.290257 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 14 23:56:30.298329 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 14 23:56:30.307226 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Jul 14 23:56:30.309315 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 14 23:56:30.311748 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 14 23:56:30.413085 containerd[1512]: time="2025-07-14T23:56:30.412753907Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 14 23:56:30.437468 containerd[1512]: time="2025-07-14T23:56:30.437348510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 14 23:56:30.439295 containerd[1512]: time="2025-07-14T23:56:30.439247853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 14 23:56:30.439295 containerd[1512]: time="2025-07-14T23:56:30.439279201Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 14 23:56:30.439295 containerd[1512]: time="2025-07-14T23:56:30.439294550Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 14 23:56:30.439500 containerd[1512]: time="2025-07-14T23:56:30.439478265Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 14 23:56:30.439500 containerd[1512]: time="2025-07-14T23:56:30.439497701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 14 23:56:30.439579 containerd[1512]: time="2025-07-14T23:56:30.439563805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 23:56:30.439601 containerd[1512]: time="2025-07-14T23:56:30.439578002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 14 23:56:30.439864 containerd[1512]: time="2025-07-14T23:56:30.439838130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 23:56:30.439864 containerd[1512]: time="2025-07-14T23:56:30.439857015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 14 23:56:30.439915 containerd[1512]: time="2025-07-14T23:56:30.439869709Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 23:56:30.439915 containerd[1512]: time="2025-07-14T23:56:30.439878996Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 14 23:56:30.440000 containerd[1512]: time="2025-07-14T23:56:30.439980076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 14 23:56:30.440256 containerd[1512]: time="2025-07-14T23:56:30.440234684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 14 23:56:30.440407 containerd[1512]: time="2025-07-14T23:56:30.440387440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 23:56:30.440407 containerd[1512]: time="2025-07-14T23:56:30.440402178Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 14 23:56:30.440507 containerd[1512]: time="2025-07-14T23:56:30.440493589Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 14 23:56:30.440560 containerd[1512]: time="2025-07-14T23:56:30.440547691Z" level=info msg="metadata content store policy set" policy=shared
Jul 14 23:56:30.446865 containerd[1512]: time="2025-07-14T23:56:30.446798716Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 14 23:56:30.447036 containerd[1512]: time="2025-07-14T23:56:30.446894365Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 14 23:56:30.447036 containerd[1512]: time="2025-07-14T23:56:30.446913221Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 14 23:56:30.447036 containerd[1512]: time="2025-07-14T23:56:30.446932877Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 14 23:56:30.447036 containerd[1512]: time="2025-07-14T23:56:30.446950110Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 14 23:56:30.447674 containerd[1512]: time="2025-07-14T23:56:30.447206901Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 14 23:56:30.447674 containerd[1512]: time="2025-07-14T23:56:30.447482458Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 14 23:56:30.447674 containerd[1512]: time="2025-07-14T23:56:30.447609707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 14 23:56:30.447674 containerd[1512]: time="2025-07-14T23:56:30.447628222Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 14 23:56:30.447674 containerd[1512]: time="2025-07-14T23:56:30.447644953Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 14 23:56:30.447674 containerd[1512]: time="2025-07-14T23:56:30.447663778Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447680069Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447695177Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447713241Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447733088Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447751313Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447766461Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447783773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447827015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447847092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447863944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447879543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447895323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447911433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.447947 containerd[1512]: time="2025-07-14T23:56:30.447928435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.448465 containerd[1512]: time="2025-07-14T23:56:30.447946278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.448465 containerd[1512]: time="2025-07-14T23:56:30.447962799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.448465 containerd[1512]: time="2025-07-14T23:56:30.447985642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.448465 containerd[1512]: time="2025-07-14T23:56:30.448001712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.448465 containerd[1512]: time="2025-07-14T23:56:30.448032540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.448465 containerd[1512]: time="2025-07-14T23:56:30.448049692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.448465 containerd[1512]: time="2025-07-14T23:56:30.448067706Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 14 23:56:30.448465 containerd[1512]: time="2025-07-14T23:56:30.448099385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.448465 containerd[1512]: time="2025-07-14T23:56:30.448118221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.448465 containerd[1512]: time="2025-07-14T23:56:30.448132538Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 14 23:56:30.449084 containerd[1512]: time="2025-07-14T23:56:30.448947396Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 14 23:56:30.449084 containerd[1512]: time="2025-07-14T23:56:30.448979236Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 14 23:56:30.449084 containerd[1512]: time="2025-07-14T23:56:30.448993212Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 14 23:56:30.449084 containerd[1512]: time="2025-07-14T23:56:30.449007669Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 14 23:56:30.449084 containerd[1512]: time="2025-07-14T23:56:30.449038457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.449084 containerd[1512]: time="2025-07-14T23:56:30.449054968Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 14 23:56:30.449084 containerd[1512]: time="2025-07-14T23:56:30.449069285Z" level=info msg="NRI interface is disabled by configuration."
Jul 14 23:56:30.449084 containerd[1512]: time="2025-07-14T23:56:30.449081498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 14 23:56:30.449505 containerd[1512]: time="2025-07-14T23:56:30.449432676Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 23:56:30.449505 containerd[1512]: time="2025-07-14T23:56:30.449494562Z" level=info msg="Connect containerd service" Jul 14 23:56:30.449731 containerd[1512]: time="2025-07-14T23:56:30.449521142Z" level=info msg="using legacy CRI server" Jul 14 23:56:30.449731 containerd[1512]: time="2025-07-14T23:56:30.449529197Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 23:56:30.449731 containerd[1512]: time="2025-07-14T23:56:30.449650645Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 23:56:30.450431 containerd[1512]: time="2025-07-14T23:56:30.450391495Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 23:56:30.452255 containerd[1512]: time="2025-07-14T23:56:30.450619753Z" level=info msg="Start subscribing containerd event" Jul 14 23:56:30.452255 containerd[1512]: time="2025-07-14T23:56:30.451080928Z" level=info msg="Start recovering state" Jul 14 23:56:30.452255 containerd[1512]: time="2025-07-14T23:56:30.450993975Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 14 23:56:30.452255 containerd[1512]: time="2025-07-14T23:56:30.451165316Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 23:56:30.452770 containerd[1512]: time="2025-07-14T23:56:30.452736984Z" level=info msg="Start event monitor" Jul 14 23:56:30.452823 containerd[1512]: time="2025-07-14T23:56:30.452783461Z" level=info msg="Start snapshots syncer" Jul 14 23:56:30.452823 containerd[1512]: time="2025-07-14T23:56:30.452799552Z" level=info msg="Start cni network conf syncer for default" Jul 14 23:56:30.452873 containerd[1512]: time="2025-07-14T23:56:30.452809390Z" level=info msg="Start streaming server" Jul 14 23:56:30.453043 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 23:56:30.453238 containerd[1512]: time="2025-07-14T23:56:30.453212997Z" level=info msg="containerd successfully booted in 0.041561s" Jul 14 23:56:30.639383 tar[1502]: linux-amd64/LICENSE Jul 14 23:56:30.639495 tar[1502]: linux-amd64/README.md Jul 14 23:56:30.651848 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 23:56:30.781704 sshd_keygen[1510]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 23:56:30.806943 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 23:56:30.819463 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 23:56:30.827682 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 23:56:30.827980 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 23:56:30.831030 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 23:56:30.858580 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 23:56:30.873391 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 23:56:30.875758 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jul 14 23:56:30.877031 systemd[1]: Reached target getty.target - Login Prompts.
Jul 14 23:56:31.280201 systemd-networkd[1414]: eth0: Gained IPv6LL
Jul 14 23:56:31.283352 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 14 23:56:31.285243 systemd[1]: Reached target network-online.target - Network is Online.
Jul 14 23:56:31.293232 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 14 23:56:31.295512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 23:56:31.297581 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 14 23:56:31.315684 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 14 23:56:31.316436 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 14 23:56:31.317944 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 14 23:56:31.321868 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 14 23:56:31.989255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 23:56:31.990889 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 14 23:56:31.992091 systemd[1]: Startup finished in 665ms (kernel) + 5.749s (initrd) + 4.143s (userspace) = 10.559s.
Jul 14 23:56:31.993193 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 23:56:32.394636 kubelet[1593]: E0714 23:56:32.394505 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 23:56:32.398791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 23:56:32.398990 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 23:56:32.399377 systemd[1]: kubelet.service: Consumed 982ms CPU time, 266.2M memory peak.
Jul 14 23:56:35.151371 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 14 23:56:35.152666 systemd[1]: Started sshd@0-10.0.0.18:22-10.0.0.1:50690.service - OpenSSH per-connection server daemon (10.0.0.1:50690).
Jul 14 23:56:35.201248 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 50690 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:56:35.203090 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:56:35.215107 systemd-logind[1492]: New session 1 of user core.
Jul 14 23:56:35.216304 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 14 23:56:35.225229 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 14 23:56:35.236087 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 14 23:56:35.238632 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 14 23:56:35.246512 (systemd)[1610]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 14 23:56:35.248915 systemd-logind[1492]: New session c1 of user core.
Jul 14 23:56:35.393065 systemd[1610]: Queued start job for default target default.target.
Jul 14 23:56:35.405321 systemd[1610]: Created slice app.slice - User Application Slice.
Jul 14 23:56:35.405346 systemd[1610]: Reached target paths.target - Paths.
Jul 14 23:56:35.405388 systemd[1610]: Reached target timers.target - Timers.
Jul 14 23:56:35.406961 systemd[1610]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 14 23:56:35.418932 systemd[1610]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 14 23:56:35.419076 systemd[1610]: Reached target sockets.target - Sockets.
Jul 14 23:56:35.419119 systemd[1610]: Reached target basic.target - Basic System.
Jul 14 23:56:35.419163 systemd[1610]: Reached target default.target - Main User Target.
Jul 14 23:56:35.419196 systemd[1610]: Startup finished in 163ms.
Jul 14 23:56:35.419674 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 14 23:56:35.421461 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 14 23:56:35.492124 systemd[1]: Started sshd@1-10.0.0.18:22-10.0.0.1:50702.service - OpenSSH per-connection server daemon (10.0.0.1:50702).
Jul 14 23:56:35.532126 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 50702 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:56:35.533680 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:56:35.537912 systemd-logind[1492]: New session 2 of user core.
Jul 14 23:56:35.548147 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 14 23:56:35.601613 sshd[1623]: Connection closed by 10.0.0.1 port 50702
Jul 14 23:56:35.601973 sshd-session[1621]: pam_unix(sshd:session): session closed for user core
Jul 14 23:56:35.616774 systemd[1]: sshd@1-10.0.0.18:22-10.0.0.1:50702.service: Deactivated successfully.
Jul 14 23:56:35.619337 systemd[1]: session-2.scope: Deactivated successfully.
Jul 14 23:56:35.621360 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit.
Jul 14 23:56:35.632318 systemd[1]: Started sshd@2-10.0.0.18:22-10.0.0.1:50716.service - OpenSSH per-connection server daemon (10.0.0.1:50716).
Jul 14 23:56:35.633331 systemd-logind[1492]: Removed session 2.
Jul 14 23:56:35.665981 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 50716 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:56:35.667628 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:56:35.671855 systemd-logind[1492]: New session 3 of user core.
Jul 14 23:56:35.678118 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 14 23:56:35.727208 sshd[1631]: Connection closed by 10.0.0.1 port 50716
Jul 14 23:56:35.727609 sshd-session[1628]: pam_unix(sshd:session): session closed for user core
Jul 14 23:56:35.736115 systemd[1]: sshd@2-10.0.0.18:22-10.0.0.1:50716.service: Deactivated successfully.
Jul 14 23:56:35.737874 systemd[1]: session-3.scope: Deactivated successfully.
Jul 14 23:56:35.739603 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit.
Jul 14 23:56:35.766349 systemd[1]: Started sshd@3-10.0.0.18:22-10.0.0.1:50720.service - OpenSSH per-connection server daemon (10.0.0.1:50720).
Jul 14 23:56:35.767344 systemd-logind[1492]: Removed session 3.
Jul 14 23:56:35.801972 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 50720 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:56:35.803451 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:56:35.807569 systemd-logind[1492]: New session 4 of user core.
Jul 14 23:56:35.817159 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 14 23:56:35.869845 sshd[1639]: Connection closed by 10.0.0.1 port 50720
Jul 14 23:56:35.870279 sshd-session[1636]: pam_unix(sshd:session): session closed for user core
Jul 14 23:56:35.882527 systemd[1]: sshd@3-10.0.0.18:22-10.0.0.1:50720.service: Deactivated successfully.
Jul 14 23:56:35.884138 systemd[1]: session-4.scope: Deactivated successfully.
Jul 14 23:56:35.885713 systemd-logind[1492]: Session 4 logged out. Waiting for processes to exit.
Jul 14 23:56:35.898311 systemd[1]: Started sshd@4-10.0.0.18:22-10.0.0.1:50724.service - OpenSSH per-connection server daemon (10.0.0.1:50724).
Jul 14 23:56:35.899326 systemd-logind[1492]: Removed session 4.
Jul 14 23:56:35.932020 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 50724 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:56:35.933416 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:56:35.937490 systemd-logind[1492]: New session 5 of user core.
Jul 14 23:56:35.949145 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 14 23:56:36.007587 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 14 23:56:36.007916 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 23:56:36.027971 sudo[1648]: pam_unix(sudo:session): session closed for user root
Jul 14 23:56:36.029462 sshd[1647]: Connection closed by 10.0.0.1 port 50724
Jul 14 23:56:36.029821 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
Jul 14 23:56:36.054786 systemd[1]: sshd@4-10.0.0.18:22-10.0.0.1:50724.service: Deactivated successfully.
Jul 14 23:56:36.056486 systemd[1]: session-5.scope: Deactivated successfully.
Jul 14 23:56:36.058094 systemd-logind[1492]: Session 5 logged out. Waiting for processes to exit.
Jul 14 23:56:36.059493 systemd[1]: Started sshd@5-10.0.0.18:22-10.0.0.1:50736.service - OpenSSH per-connection server daemon (10.0.0.1:50736).
Jul 14 23:56:36.060245 systemd-logind[1492]: Removed session 5.
Jul 14 23:56:36.096128 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 50736 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:56:36.097646 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:56:36.101608 systemd-logind[1492]: New session 6 of user core.
Jul 14 23:56:36.110205 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 14 23:56:36.163746 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 14 23:56:36.164091 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 23:56:36.167930 sudo[1658]: pam_unix(sudo:session): session closed for user root
Jul 14 23:56:36.174381 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 14 23:56:36.174740 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 23:56:36.193262 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 14 23:56:36.223032 augenrules[1680]: No rules
Jul 14 23:56:36.224738 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 14 23:56:36.224999 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 14 23:56:36.226140 sudo[1657]: pam_unix(sudo:session): session closed for user root
Jul 14 23:56:36.227642 sshd[1656]: Connection closed by 10.0.0.1 port 50736
Jul 14 23:56:36.227991 sshd-session[1653]: pam_unix(sshd:session): session closed for user core
Jul 14 23:56:36.245701 systemd[1]: sshd@5-10.0.0.18:22-10.0.0.1:50736.service: Deactivated successfully.
Jul 14 23:56:36.247515 systemd[1]: session-6.scope: Deactivated successfully.
Jul 14 23:56:36.249116 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit.
Jul 14 23:56:36.260254 systemd[1]: Started sshd@6-10.0.0.18:22-10.0.0.1:50748.service - OpenSSH per-connection server daemon (10.0.0.1:50748).
Jul 14 23:56:36.261208 systemd-logind[1492]: Removed session 6.
Jul 14 23:56:36.293379 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 50748 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:56:36.295067 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:56:36.299703 systemd-logind[1492]: New session 7 of user core.
Jul 14 23:56:36.313142 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 14 23:56:36.366482 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 14 23:56:36.366839 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 23:56:36.673375 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 14 23:56:36.673411 (dockerd)[1713]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 14 23:56:36.946768 dockerd[1713]: time="2025-07-14T23:56:36.946604183Z" level=info msg="Starting up"
Jul 14 23:56:37.333474 dockerd[1713]: time="2025-07-14T23:56:37.333273604Z" level=info msg="Loading containers: start."
Jul 14 23:56:37.505046 kernel: Initializing XFRM netlink socket
Jul 14 23:56:37.592560 systemd-networkd[1414]: docker0: Link UP
Jul 14 23:56:37.630605 dockerd[1713]: time="2025-07-14T23:56:37.630544449Z" level=info msg="Loading containers: done."
Jul 14 23:56:37.648527 dockerd[1713]: time="2025-07-14T23:56:37.648471525Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 14 23:56:37.648692 dockerd[1713]: time="2025-07-14T23:56:37.648589667Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jul 14 23:56:37.648764 dockerd[1713]: time="2025-07-14T23:56:37.648739568Z" level=info msg="Daemon has completed initialization"
Jul 14 23:56:37.687330 dockerd[1713]: time="2025-07-14T23:56:37.687244873Z" level=info msg="API listen on /run/docker.sock"
Jul 14 23:56:37.687441 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 14 23:56:38.490106 containerd[1512]: time="2025-07-14T23:56:38.490061999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 14 23:56:39.149661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249832754.mount: Deactivated successfully.
Jul 14 23:56:40.205956 containerd[1512]: time="2025-07-14T23:56:40.205893849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:40.206704 containerd[1512]: time="2025-07-14T23:56:40.206644557Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jul 14 23:56:40.207935 containerd[1512]: time="2025-07-14T23:56:40.207905021Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:40.210813 containerd[1512]: time="2025-07-14T23:56:40.210776848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:40.211723 containerd[1512]: time="2025-07-14T23:56:40.211669152Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.721566106s"
Jul 14 23:56:40.211723 containerd[1512]: time="2025-07-14T23:56:40.211713625Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 14 23:56:40.212308 containerd[1512]: time="2025-07-14T23:56:40.212278595Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 14 23:56:41.408271 containerd[1512]: time="2025-07-14T23:56:41.408205973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:41.409269 containerd[1512]: time="2025-07-14T23:56:41.409206580Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jul 14 23:56:41.410445 containerd[1512]: time="2025-07-14T23:56:41.410412141Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:41.415220 containerd[1512]: time="2025-07-14T23:56:41.415180515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:41.416606 containerd[1512]: time="2025-07-14T23:56:41.416550955Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.204228889s"
Jul 14 23:56:41.416657 containerd[1512]: time="2025-07-14T23:56:41.416608373Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 14 23:56:41.417220 containerd[1512]: time="2025-07-14T23:56:41.417187780Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 14 23:56:42.649468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 14 23:56:42.662206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 23:56:42.677100 containerd[1512]: time="2025-07-14T23:56:42.677045142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:42.677989 containerd[1512]: time="2025-07-14T23:56:42.677915865Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jul 14 23:56:42.678969 containerd[1512]: time="2025-07-14T23:56:42.678938293Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:42.681988 containerd[1512]: time="2025-07-14T23:56:42.681943650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:42.683003 containerd[1512]: time="2025-07-14T23:56:42.682977900Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.265584094s"
Jul 14 23:56:42.683055 containerd[1512]: time="2025-07-14T23:56:42.683007406Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 14 23:56:42.683566 containerd[1512]: time="2025-07-14T23:56:42.683528493Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 14 23:56:42.836154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 23:56:42.840828 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 23:56:43.281056 kubelet[1981]: E0714 23:56:43.280970 1981 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 23:56:43.287475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 23:56:43.287692 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 23:56:43.288063 systemd[1]: kubelet.service: Consumed 230ms CPU time, 111.2M memory peak.
Jul 14 23:56:45.808865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419398170.mount: Deactivated successfully.
Jul 14 23:56:46.314936 containerd[1512]: time="2025-07-14T23:56:46.314874090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:46.315729 containerd[1512]: time="2025-07-14T23:56:46.315682225Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943"
Jul 14 23:56:46.317027 containerd[1512]: time="2025-07-14T23:56:46.316931078Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:46.318937 containerd[1512]: time="2025-07-14T23:56:46.318897506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:46.319569 containerd[1512]: time="2025-07-14T23:56:46.319532558Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 3.635976623s"
Jul 14 23:56:46.319569 containerd[1512]: time="2025-07-14T23:56:46.319560760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 14 23:56:46.320030 containerd[1512]: time="2025-07-14T23:56:46.319988693Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 14 23:56:46.838974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1588055299.mount: Deactivated successfully.
Jul 14 23:56:47.505195 containerd[1512]: time="2025-07-14T23:56:47.505138445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:47.505940 containerd[1512]: time="2025-07-14T23:56:47.505901797Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 14 23:56:47.507290 containerd[1512]: time="2025-07-14T23:56:47.507246499Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:47.509897 containerd[1512]: time="2025-07-14T23:56:47.509859040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:47.511077 containerd[1512]: time="2025-07-14T23:56:47.511050765Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.19101812s"
Jul 14 23:56:47.511115 containerd[1512]: time="2025-07-14T23:56:47.511081743Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 14 23:56:47.511564 containerd[1512]: time="2025-07-14T23:56:47.511543439Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 14 23:56:47.998459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount10646645.mount: Deactivated successfully.
Jul 14 23:56:48.004649 containerd[1512]: time="2025-07-14T23:56:48.004619198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:48.005404 containerd[1512]: time="2025-07-14T23:56:48.005319823Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 14 23:56:48.006322 containerd[1512]: time="2025-07-14T23:56:48.006279202Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:48.008423 containerd[1512]: time="2025-07-14T23:56:48.008388719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:48.009125 containerd[1512]: time="2025-07-14T23:56:48.009083713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 497.51721ms"
Jul 14 23:56:48.009125 containerd[1512]: time="2025-07-14T23:56:48.009110954Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 14 23:56:48.009574 containerd[1512]: time="2025-07-14T23:56:48.009551961Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 14 23:56:48.485448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823301497.mount: Deactivated successfully.
Jul 14 23:56:50.283663 containerd[1512]: time="2025-07-14T23:56:50.283572564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:50.284608 containerd[1512]: time="2025-07-14T23:56:50.284559114Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Jul 14 23:56:50.286248 containerd[1512]: time="2025-07-14T23:56:50.286184092Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:50.290050 containerd[1512]: time="2025-07-14T23:56:50.289998126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:56:50.291591 containerd[1512]: time="2025-07-14T23:56:50.291520612Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.28193067s"
Jul 14 23:56:50.291591 containerd[1512]: time="2025-07-14T23:56:50.291583730Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 14 23:56:52.916199 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 23:56:52.916439 systemd[1]: kubelet.service: Consumed 230ms CPU time, 111.2M memory peak.
Jul 14 23:56:52.929272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 23:56:52.955873 systemd[1]: Reload requested from client PID 2137 ('systemctl') (unit session-7.scope)...
Jul 14 23:56:52.955888 systemd[1]: Reloading... Jul 14 23:56:53.049230 zram_generator::config[2181]: No configuration found. Jul 14 23:56:53.214561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 23:56:53.326085 systemd[1]: Reloading finished in 369 ms. Jul 14 23:56:53.381963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:56:53.387720 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 23:56:53.390006 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:56:53.391630 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 23:56:53.391917 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:56:53.391963 systemd[1]: kubelet.service: Consumed 154ms CPU time, 99.3M memory peak. Jul 14 23:56:53.393832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:56:53.560686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:56:53.564966 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 23:56:53.600100 kubelet[2232]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 23:56:53.600100 kubelet[2232]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 14 23:56:53.600100 kubelet[2232]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 23:56:53.600485 kubelet[2232]: I0714 23:56:53.600153 2232 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 23:56:54.053883 kubelet[2232]: I0714 23:56:54.053764 2232 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 23:56:54.053883 kubelet[2232]: I0714 23:56:54.053794 2232 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 23:56:54.054095 kubelet[2232]: I0714 23:56:54.054071 2232 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 23:56:54.073467 kubelet[2232]: E0714 23:56:54.073420 2232 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:54.074021 kubelet[2232]: I0714 23:56:54.073992 2232 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 23:56:54.079740 kubelet[2232]: E0714 23:56:54.079704 2232 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 23:56:54.079740 kubelet[2232]: I0714 23:56:54.079729 2232 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jul 14 23:56:54.086452 kubelet[2232]: I0714 23:56:54.086429 2232 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 23:56:54.087051 kubelet[2232]: I0714 23:56:54.087022 2232 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 23:56:54.087200 kubelet[2232]: I0714 23:56:54.087163 2232 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 23:56:54.087386 kubelet[2232]: I0714 23:56:54.087191 2232 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerRe
servedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 23:56:54.087386 kubelet[2232]: I0714 23:56:54.087385 2232 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 23:56:54.087526 kubelet[2232]: I0714 23:56:54.087394 2232 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 23:56:54.087526 kubelet[2232]: I0714 23:56:54.087507 2232 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:56:54.089312 kubelet[2232]: I0714 23:56:54.089280 2232 kubelet.go:408] "Attempting to sync node with API server" Jul 14 23:56:54.089312 kubelet[2232]: I0714 23:56:54.089317 2232 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 23:56:54.089469 kubelet[2232]: I0714 23:56:54.089354 2232 kubelet.go:314] "Adding apiserver pod source" Jul 14 23:56:54.089469 kubelet[2232]: I0714 23:56:54.089374 2232 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 23:56:54.092647 kubelet[2232]: I0714 23:56:54.092622 2232 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 14 23:56:54.093544 kubelet[2232]: I0714 23:56:54.092978 2232 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 23:56:54.094324 kubelet[2232]: W0714 23:56:54.094132 2232 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 14 23:56:54.094324 kubelet[2232]: W0714 23:56:54.094176 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jul 14 23:56:54.094324 kubelet[2232]: E0714 23:56:54.094239 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:54.094324 kubelet[2232]: W0714 23:56:54.094194 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jul 14 23:56:54.094324 kubelet[2232]: E0714 23:56:54.094285 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:54.095988 kubelet[2232]: I0714 23:56:54.095961 2232 server.go:1274] "Started kubelet" Jul 14 23:56:54.096843 kubelet[2232]: I0714 23:56:54.096362 2232 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 23:56:54.096843 kubelet[2232]: I0714 23:56:54.096474 2232 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 23:56:54.096843 kubelet[2232]: I0714 23:56:54.096674 2232 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" 
Jul 14 23:56:54.097472 kubelet[2232]: I0714 23:56:54.097445 2232 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 23:56:54.098645 kubelet[2232]: I0714 23:56:54.097824 2232 server.go:449] "Adding debug handlers to kubelet server" Jul 14 23:56:54.099621 kubelet[2232]: I0714 23:56:54.099373 2232 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 23:56:54.100693 kubelet[2232]: E0714 23:56:54.098520 2232 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.18:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.18:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1852437e26c515e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 23:56:54.095934946 +0000 UTC m=+0.527156496,LastTimestamp:2025-07-14 23:56:54.095934946 +0000 UTC m=+0.527156496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 23:56:54.100871 kubelet[2232]: E0714 23:56:54.100746 2232 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:56:54.100871 kubelet[2232]: I0714 23:56:54.100778 2232 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 23:56:54.100871 kubelet[2232]: I0714 23:56:54.100844 2232 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 23:56:54.101556 kubelet[2232]: I0714 23:56:54.100889 2232 reconciler.go:26] "Reconciler: start to sync state" Jul 14 23:56:54.101556 kubelet[2232]: W0714 23:56:54.101081 2232 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jul 14 23:56:54.101556 kubelet[2232]: E0714 23:56:54.101115 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:54.101556 kubelet[2232]: E0714 23:56:54.101149 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="200ms" Jul 14 23:56:54.101979 kubelet[2232]: I0714 23:56:54.101779 2232 factory.go:221] Registration of the systemd container factory successfully Jul 14 23:56:54.101979 kubelet[2232]: I0714 23:56:54.101862 2232 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 23:56:54.102078 kubelet[2232]: E0714 23:56:54.101993 2232 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 23:56:54.102594 kubelet[2232]: I0714 23:56:54.102575 2232 factory.go:221] Registration of the containerd container factory successfully Jul 14 23:56:54.117780 kubelet[2232]: I0714 23:56:54.117681 2232 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 14 23:56:54.118953 kubelet[2232]: I0714 23:56:54.118123 2232 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 23:56:54.118953 kubelet[2232]: I0714 23:56:54.118144 2232 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 23:56:54.118953 kubelet[2232]: I0714 23:56:54.118160 2232 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:56:54.119607 kubelet[2232]: I0714 23:56:54.119567 2232 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 23:56:54.119607 kubelet[2232]: I0714 23:56:54.119595 2232 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 23:56:54.119607 kubelet[2232]: I0714 23:56:54.119611 2232 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 23:56:54.119713 kubelet[2232]: E0714 23:56:54.119650 2232 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 23:56:54.120427 kubelet[2232]: W0714 23:56:54.120255 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jul 14 23:56:54.120427 kubelet[2232]: E0714 23:56:54.120291 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:54.201699 kubelet[2232]: E0714 23:56:54.201664 2232 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:56:54.219949 kubelet[2232]: E0714 23:56:54.219913 2232 kubelet.go:2345] "Skipping pod synchronization" err="container 
runtime status check may not have completed yet" Jul 14 23:56:54.273274 kubelet[2232]: I0714 23:56:54.273239 2232 policy_none.go:49] "None policy: Start" Jul 14 23:56:54.273970 kubelet[2232]: I0714 23:56:54.273955 2232 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 23:56:54.274062 kubelet[2232]: I0714 23:56:54.273974 2232 state_mem.go:35] "Initializing new in-memory state store" Jul 14 23:56:54.280684 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 14 23:56:54.298314 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 14 23:56:54.301636 kubelet[2232]: E0714 23:56:54.301596 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="400ms" Jul 14 23:56:54.301690 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 14 23:56:54.301799 kubelet[2232]: E0714 23:56:54.301779 2232 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:56:54.318413 kubelet[2232]: I0714 23:56:54.317895 2232 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 23:56:54.318413 kubelet[2232]: I0714 23:56:54.318122 2232 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 23:56:54.318413 kubelet[2232]: I0714 23:56:54.318131 2232 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 23:56:54.318413 kubelet[2232]: I0714 23:56:54.318316 2232 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 23:56:54.320191 kubelet[2232]: E0714 23:56:54.320158 2232 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 23:56:54.419845 kubelet[2232]: I0714 23:56:54.419804 2232 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 23:56:54.420148 kubelet[2232]: E0714 23:56:54.420124 2232 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jul 14 23:56:54.427818 systemd[1]: Created slice kubepods-burstable-pod154612774223ef9aa3ff3f2a6d949658.slice - libcontainer container kubepods-burstable-pod154612774223ef9aa3ff3f2a6d949658.slice. Jul 14 23:56:54.448484 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 14 23:56:54.460907 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. 
Jul 14 23:56:54.602589 kubelet[2232]: I0714 23:56:54.602461 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:56:54.602589 kubelet[2232]: I0714 23:56:54.602510 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:56:54.602589 kubelet[2232]: I0714 23:56:54.602536 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 23:56:54.602589 kubelet[2232]: I0714 23:56:54.602554 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:56:54.602589 kubelet[2232]: I0714 23:56:54.602569 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 14 23:56:54.603168 kubelet[2232]: I0714 23:56:54.602586 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:56:54.603168 kubelet[2232]: I0714 23:56:54.602604 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/154612774223ef9aa3ff3f2a6d949658-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"154612774223ef9aa3ff3f2a6d949658\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:56:54.603168 kubelet[2232]: I0714 23:56:54.602662 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/154612774223ef9aa3ff3f2a6d949658-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"154612774223ef9aa3ff3f2a6d949658\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:56:54.603168 kubelet[2232]: I0714 23:56:54.602700 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/154612774223ef9aa3ff3f2a6d949658-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"154612774223ef9aa3ff3f2a6d949658\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:56:54.621577 kubelet[2232]: I0714 23:56:54.621559 2232 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 23:56:54.621839 kubelet[2232]: E0714 23:56:54.621808 2232 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection 
refused" node="localhost" Jul 14 23:56:54.702606 kubelet[2232]: E0714 23:56:54.702564 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="800ms" Jul 14 23:56:54.745929 kubelet[2232]: E0714 23:56:54.745888 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:56:54.746538 containerd[1512]: time="2025-07-14T23:56:54.746491975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:154612774223ef9aa3ff3f2a6d949658,Namespace:kube-system,Attempt:0,}" Jul 14 23:56:54.759745 kubelet[2232]: E0714 23:56:54.759713 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:56:54.760054 containerd[1512]: time="2025-07-14T23:56:54.760004390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 14 23:56:54.763311 kubelet[2232]: E0714 23:56:54.763277 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:56:54.763667 containerd[1512]: time="2025-07-14T23:56:54.763638156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 14 23:56:54.922390 kubelet[2232]: W0714 23:56:54.922251 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jul 14 23:56:54.922390 kubelet[2232]: E0714 23:56:54.922323 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:54.963118 kubelet[2232]: W0714 23:56:54.963070 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jul 14 23:56:54.963214 kubelet[2232]: E0714 23:56:54.963133 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:55.023081 kubelet[2232]: I0714 23:56:55.023051 2232 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 23:56:55.023482 kubelet[2232]: E0714 23:56:55.023434 2232 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jul 14 23:56:55.146843 kubelet[2232]: W0714 23:56:55.146769 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jul 14 23:56:55.146981 kubelet[2232]: E0714 
23:56:55.146860 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:55.471601 kubelet[2232]: W0714 23:56:55.471522 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jul 14 23:56:55.471601 kubelet[2232]: E0714 23:56:55.471599 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:55.503575 kubelet[2232]: E0714 23:56:55.503493 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="1.6s" Jul 14 23:56:55.825388 kubelet[2232]: I0714 23:56:55.825260 2232 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 23:56:55.825824 kubelet[2232]: E0714 23:56:55.825617 2232 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jul 14 23:56:56.090119 kubelet[2232]: E0714 23:56:56.089964 2232 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control 
plane: cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:56.654515 kubelet[2232]: W0714 23:56:56.654472 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jul 14 23:56:56.654641 kubelet[2232]: E0714 23:56:56.654519 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:56.837728 kubelet[2232]: W0714 23:56:56.837652 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jul 14 23:56:56.837728 kubelet[2232]: E0714 23:56:56.837723 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:57.105116 kubelet[2232]: E0714 23:56:57.104938 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="3.2s" Jul 14 23:56:57.363966 kubelet[2232]: W0714 23:56:57.363830 2232 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jul 14 23:56:57.363966 kubelet[2232]: E0714 23:56:57.363875 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:56:57.427986 kubelet[2232]: I0714 23:56:57.427936 2232 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 23:56:57.428349 kubelet[2232]: E0714 23:56:57.428312 2232 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jul 14 23:56:57.429610 kubelet[2232]: W0714 23:56:57.429569 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Jul 14 23:56:57.429654 kubelet[2232]: E0714 23:56:57.429609 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:57:00.306143 kubelet[2232]: E0714 23:57:00.306092 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.18:6443: connect: connection refused" interval="6.4s" Jul 14 23:57:00.323815 kubelet[2232]: E0714 23:57:00.323780 2232 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:57:00.468926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681284419.mount: Deactivated successfully. Jul 14 23:57:00.476535 containerd[1512]: time="2025-07-14T23:57:00.476479534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:57:00.477614 containerd[1512]: time="2025-07-14T23:57:00.477576471Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 14 23:57:00.480962 containerd[1512]: time="2025-07-14T23:57:00.480933308Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:57:00.483222 containerd[1512]: time="2025-07-14T23:57:00.483160625Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:57:00.484185 containerd[1512]: time="2025-07-14T23:57:00.484122219Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 23:57:00.485065 containerd[1512]: time="2025-07-14T23:57:00.485028279Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:57:00.485867 containerd[1512]: time="2025-07-14T23:57:00.485839831Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 5.739261213s" Jul 14 23:57:00.486416 containerd[1512]: time="2025-07-14T23:57:00.486378792Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:57:00.486985 containerd[1512]: time="2025-07-14T23:57:00.486952438Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 23:57:00.493671 containerd[1512]: time="2025-07-14T23:57:00.493636355Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 5.729948776s" Jul 14 23:57:00.494086 containerd[1512]: time="2025-07-14T23:57:00.494059398Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 5.733974877s" Jul 14 23:57:00.631100 kubelet[2232]: I0714 23:57:00.630219 2232 kubelet_node_status.go:72] 
"Attempting to register node" node="localhost" Jul 14 23:57:00.631100 kubelet[2232]: E0714 23:57:00.630547 2232 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jul 14 23:57:00.681080 containerd[1512]: time="2025-07-14T23:57:00.678559017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:57:00.681080 containerd[1512]: time="2025-07-14T23:57:00.680845486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:57:00.681080 containerd[1512]: time="2025-07-14T23:57:00.680865794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:57:00.681080 containerd[1512]: time="2025-07-14T23:57:00.680983935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:57:00.707796 containerd[1512]: time="2025-07-14T23:57:00.707414251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:57:00.707796 containerd[1512]: time="2025-07-14T23:57:00.707497818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:57:00.707796 containerd[1512]: time="2025-07-14T23:57:00.707517685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:57:00.707796 containerd[1512]: time="2025-07-14T23:57:00.707610549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:57:00.711458 containerd[1512]: time="2025-07-14T23:57:00.711170797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:57:00.711458 containerd[1512]: time="2025-07-14T23:57:00.711242852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:57:00.711458 containerd[1512]: time="2025-07-14T23:57:00.711256758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:57:00.711458 containerd[1512]: time="2025-07-14T23:57:00.711360924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:57:00.731290 systemd[1]: Started cri-containerd-9ce1f4bd3d3b8a09cd4f83f7f63baa716673462d3c951d6403f485b5c97d1567.scope - libcontainer container 9ce1f4bd3d3b8a09cd4f83f7f63baa716673462d3c951d6403f485b5c97d1567. Jul 14 23:57:00.736850 systemd[1]: Started cri-containerd-0750a8308465ff74ce8c32ed8304122534e8ccb48c252f7b7eb416adc4d106b7.scope - libcontainer container 0750a8308465ff74ce8c32ed8304122534e8ccb48c252f7b7eb416adc4d106b7. Jul 14 23:57:00.764528 systemd[1]: Started cri-containerd-3538a1f9fa0e5173d6814ab20b770de29c182e564a42e240cb1657413c76a033.scope - libcontainer container 3538a1f9fa0e5173d6814ab20b770de29c182e564a42e240cb1657413c76a033. 
Jul 14 23:57:00.841346 containerd[1512]: time="2025-07-14T23:57:00.841296346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ce1f4bd3d3b8a09cd4f83f7f63baa716673462d3c951d6403f485b5c97d1567\"" Jul 14 23:57:00.842590 kubelet[2232]: E0714 23:57:00.842561 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:00.845920 containerd[1512]: time="2025-07-14T23:57:00.844796270Z" level=info msg="CreateContainer within sandbox \"9ce1f4bd3d3b8a09cd4f83f7f63baa716673462d3c951d6403f485b5c97d1567\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 23:57:00.845920 containerd[1512]: time="2025-07-14T23:57:00.845271211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"3538a1f9fa0e5173d6814ab20b770de29c182e564a42e240cb1657413c76a033\"" Jul 14 23:57:00.846033 kubelet[2232]: E0714 23:57:00.845660 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:00.848182 containerd[1512]: time="2025-07-14T23:57:00.848148027Z" level=info msg="CreateContainer within sandbox \"3538a1f9fa0e5173d6814ab20b770de29c182e564a42e240cb1657413c76a033\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 23:57:00.848927 containerd[1512]: time="2025-07-14T23:57:00.848888266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:154612774223ef9aa3ff3f2a6d949658,Namespace:kube-system,Attempt:0,} returns sandbox id \"0750a8308465ff74ce8c32ed8304122534e8ccb48c252f7b7eb416adc4d106b7\"" Jul 14 
23:57:00.849930 kubelet[2232]: E0714 23:57:00.849903 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:00.854299 containerd[1512]: time="2025-07-14T23:57:00.853353341Z" level=info msg="CreateContainer within sandbox \"0750a8308465ff74ce8c32ed8304122534e8ccb48c252f7b7eb416adc4d106b7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 23:57:00.872885 containerd[1512]: time="2025-07-14T23:57:00.872833560Z" level=info msg="CreateContainer within sandbox \"9ce1f4bd3d3b8a09cd4f83f7f63baa716673462d3c951d6403f485b5c97d1567\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b301070df5e8b2e078fd870253f47c5372d5c90636bee890df44c5d4ccd90b84\"" Jul 14 23:57:00.873634 containerd[1512]: time="2025-07-14T23:57:00.873604817Z" level=info msg="StartContainer for \"b301070df5e8b2e078fd870253f47c5372d5c90636bee890df44c5d4ccd90b84\"" Jul 14 23:57:00.877934 containerd[1512]: time="2025-07-14T23:57:00.877888853Z" level=info msg="CreateContainer within sandbox \"3538a1f9fa0e5173d6814ab20b770de29c182e564a42e240cb1657413c76a033\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"25e2d6cd6d79f627b6d4f6120b3150c66dbd0b61ddfe9ddc6da617d32db727a8\"" Jul 14 23:57:00.878428 containerd[1512]: time="2025-07-14T23:57:00.878401424Z" level=info msg="StartContainer for \"25e2d6cd6d79f627b6d4f6120b3150c66dbd0b61ddfe9ddc6da617d32db727a8\"" Jul 14 23:57:00.882277 containerd[1512]: time="2025-07-14T23:57:00.882166096Z" level=info msg="CreateContainer within sandbox \"0750a8308465ff74ce8c32ed8304122534e8ccb48c252f7b7eb416adc4d106b7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1456457c0fe2dacdf316cd5b6ac59b50f7e71789de5b8803c6df80f903622f1e\"" Jul 14 23:57:00.884131 containerd[1512]: time="2025-07-14T23:57:00.884080156Z" level=info 
msg="StartContainer for \"1456457c0fe2dacdf316cd5b6ac59b50f7e71789de5b8803c6df80f903622f1e\"" Jul 14 23:57:00.952296 systemd[1]: Started cri-containerd-b301070df5e8b2e078fd870253f47c5372d5c90636bee890df44c5d4ccd90b84.scope - libcontainer container b301070df5e8b2e078fd870253f47c5372d5c90636bee890df44c5d4ccd90b84. Jul 14 23:57:00.957398 systemd[1]: Started cri-containerd-1456457c0fe2dacdf316cd5b6ac59b50f7e71789de5b8803c6df80f903622f1e.scope - libcontainer container 1456457c0fe2dacdf316cd5b6ac59b50f7e71789de5b8803c6df80f903622f1e. Jul 14 23:57:00.959391 systemd[1]: Started cri-containerd-25e2d6cd6d79f627b6d4f6120b3150c66dbd0b61ddfe9ddc6da617d32db727a8.scope - libcontainer container 25e2d6cd6d79f627b6d4f6120b3150c66dbd0b61ddfe9ddc6da617d32db727a8. Jul 14 23:57:01.021594 containerd[1512]: time="2025-07-14T23:57:01.021534902Z" level=info msg="StartContainer for \"b301070df5e8b2e078fd870253f47c5372d5c90636bee890df44c5d4ccd90b84\" returns successfully" Jul 14 23:57:01.021740 containerd[1512]: time="2025-07-14T23:57:01.021680625Z" level=info msg="StartContainer for \"25e2d6cd6d79f627b6d4f6120b3150c66dbd0b61ddfe9ddc6da617d32db727a8\" returns successfully" Jul 14 23:57:01.021740 containerd[1512]: time="2025-07-14T23:57:01.021711613Z" level=info msg="StartContainer for \"1456457c0fe2dacdf316cd5b6ac59b50f7e71789de5b8803c6df80f903622f1e\" returns successfully" Jul 14 23:57:01.133445 kubelet[2232]: E0714 23:57:01.133311 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:01.135968 kubelet[2232]: E0714 23:57:01.135939 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:01.137461 kubelet[2232]: E0714 23:57:01.137404 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:02.143046 kubelet[2232]: E0714 23:57:02.142980 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:02.872926 kubelet[2232]: E0714 23:57:02.872885 2232 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 14 23:57:03.096057 kubelet[2232]: I0714 23:57:03.096001 2232 apiserver.go:52] "Watching apiserver" Jul 14 23:57:03.101916 kubelet[2232]: I0714 23:57:03.101889 2232 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 23:57:03.230493 kubelet[2232]: E0714 23:57:03.230374 2232 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 14 23:57:03.779988 kubelet[2232]: E0714 23:57:03.779923 2232 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 14 23:57:04.320280 kubelet[2232]: E0714 23:57:04.320245 2232 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 23:57:04.681971 kubelet[2232]: E0714 23:57:04.681719 2232 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 14 23:57:04.854286 kubelet[2232]: E0714 23:57:04.854247 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:05.357328 kubelet[2232]: E0714 23:57:05.357289 2232 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:06.708965 kubelet[2232]: E0714 23:57:06.708925 2232 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 23:57:07.032752 kubelet[2232]: I0714 23:57:07.032625 2232 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 23:57:07.037721 kubelet[2232]: I0714 23:57:07.037685 2232 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 23:57:07.175716 systemd[1]: Reload requested from client PID 2513 ('systemctl') (unit session-7.scope)... Jul 14 23:57:07.175734 systemd[1]: Reloading... Jul 14 23:57:07.280077 zram_generator::config[2561]: No configuration found. Jul 14 23:57:07.396723 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 23:57:07.521458 systemd[1]: Reloading finished in 345 ms. Jul 14 23:57:07.548533 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:57:07.562273 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 23:57:07.562595 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:57:07.562649 systemd[1]: kubelet.service: Consumed 1.116s CPU time, 133.1M memory peak. Jul 14 23:57:07.569223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:57:07.744224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 23:57:07.749317 (kubelet)[2602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 23:57:07.799001 kubelet[2602]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 23:57:07.799001 kubelet[2602]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 23:57:07.799001 kubelet[2602]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 23:57:07.799451 kubelet[2602]: I0714 23:57:07.799102 2602 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 23:57:07.805122 kubelet[2602]: I0714 23:57:07.805069 2602 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 23:57:07.805122 kubelet[2602]: I0714 23:57:07.805106 2602 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 23:57:07.805407 kubelet[2602]: I0714 23:57:07.805382 2602 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 23:57:07.807917 kubelet[2602]: I0714 23:57:07.807898 2602 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 14 23:57:07.809841 kubelet[2602]: I0714 23:57:07.809799 2602 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 23:57:07.814195 kubelet[2602]: E0714 23:57:07.813497 2602 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 23:57:07.814195 kubelet[2602]: I0714 23:57:07.813530 2602 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 23:57:07.818513 kubelet[2602]: I0714 23:57:07.818494 2602 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 23:57:07.818629 kubelet[2602]: I0714 23:57:07.818602 2602 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 23:57:07.818770 kubelet[2602]: I0714 23:57:07.818724 2602 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 23:57:07.818908 kubelet[2602]: I0714 23:57:07.818747 2602 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 23:57:07.819026 kubelet[2602]: I0714 23:57:07.818928 2602 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 23:57:07.819026 kubelet[2602]: I0714 23:57:07.818938 2602 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 23:57:07.819026 kubelet[2602]: I0714 23:57:07.818963 2602 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:57:07.819154 kubelet[2602]: I0714 23:57:07.819078 2602 kubelet.go:408] "Attempting 
to sync node with API server" Jul 14 23:57:07.819154 kubelet[2602]: I0714 23:57:07.819092 2602 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 23:57:07.819154 kubelet[2602]: I0714 23:57:07.819122 2602 kubelet.go:314] "Adding apiserver pod source" Jul 14 23:57:07.819154 kubelet[2602]: I0714 23:57:07.819138 2602 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 23:57:07.820163 kubelet[2602]: I0714 23:57:07.819903 2602 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 14 23:57:07.820292 kubelet[2602]: I0714 23:57:07.820270 2602 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 23:57:07.820705 kubelet[2602]: I0714 23:57:07.820664 2602 server.go:1274] "Started kubelet" Jul 14 23:57:07.826496 kubelet[2602]: I0714 23:57:07.826452 2602 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 23:57:07.826886 kubelet[2602]: I0714 23:57:07.826577 2602 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 23:57:07.826886 kubelet[2602]: I0714 23:57:07.826823 2602 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 23:57:07.827859 kubelet[2602]: I0714 23:57:07.827324 2602 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 23:57:07.828554 kubelet[2602]: I0714 23:57:07.828216 2602 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 23:57:07.828628 kubelet[2602]: I0714 23:57:07.828603 2602 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 23:57:07.829080 kubelet[2602]: E0714 23:57:07.828822 2602 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 
23:57:07.829080 kubelet[2602]: I0714 23:57:07.828889 2602 server.go:449] "Adding debug handlers to kubelet server" Jul 14 23:57:07.828917 sudo[2618]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 14 23:57:07.829405 sudo[2618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 14 23:57:07.829484 kubelet[2602]: I0714 23:57:07.829253 2602 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 23:57:07.829484 kubelet[2602]: I0714 23:57:07.829371 2602 reconciler.go:26] "Reconciler: start to sync state" Jul 14 23:57:07.830753 kubelet[2602]: I0714 23:57:07.830727 2602 factory.go:221] Registration of the systemd container factory successfully Jul 14 23:57:07.833038 kubelet[2602]: I0714 23:57:07.830808 2602 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 23:57:07.840372 kubelet[2602]: I0714 23:57:07.840343 2602 factory.go:221] Registration of the containerd container factory successfully Jul 14 23:57:07.843666 kubelet[2602]: E0714 23:57:07.842779 2602 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 23:57:07.854603 kubelet[2602]: I0714 23:57:07.854480 2602 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 23:57:07.857196 kubelet[2602]: I0714 23:57:07.857171 2602 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 23:57:07.857196 kubelet[2602]: I0714 23:57:07.857197 2602 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 23:57:07.857286 kubelet[2602]: I0714 23:57:07.857223 2602 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 23:57:07.857286 kubelet[2602]: E0714 23:57:07.857271 2602 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 23:57:07.888115 kubelet[2602]: I0714 23:57:07.888065 2602 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 23:57:07.888115 kubelet[2602]: I0714 23:57:07.888097 2602 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 23:57:07.888115 kubelet[2602]: I0714 23:57:07.888115 2602 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:57:07.888290 kubelet[2602]: I0714 23:57:07.888245 2602 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 23:57:07.888290 kubelet[2602]: I0714 23:57:07.888255 2602 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 23:57:07.888290 kubelet[2602]: I0714 23:57:07.888274 2602 policy_none.go:49] "None policy: Start" Jul 14 23:57:07.888960 kubelet[2602]: I0714 23:57:07.888931 2602 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 23:57:07.889003 kubelet[2602]: I0714 23:57:07.888971 2602 state_mem.go:35] "Initializing new in-memory state store" Jul 14 23:57:07.889233 kubelet[2602]: I0714 23:57:07.889215 2602 state_mem.go:75] "Updated machine memory state" Jul 14 23:57:07.894618 kubelet[2602]: I0714 23:57:07.894338 2602 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 23:57:07.894618 kubelet[2602]: I0714 23:57:07.894509 2602 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 23:57:07.894618 kubelet[2602]: I0714 23:57:07.894519 2602 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 23:57:07.894726 kubelet[2602]: I0714 23:57:07.894704 2602 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 23:57:08.003719 kubelet[2602]: I0714 23:57:08.001778 2602 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 23:57:08.037576 kubelet[2602]: I0714 23:57:08.037522 2602 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 23:57:08.037761 kubelet[2602]: I0714 23:57:08.037607 2602 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 23:57:08.129771 kubelet[2602]: I0714 23:57:08.129705 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/154612774223ef9aa3ff3f2a6d949658-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"154612774223ef9aa3ff3f2a6d949658\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:57:08.129771 kubelet[2602]: I0714 23:57:08.129755 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:57:08.129771 kubelet[2602]: I0714 23:57:08.129783 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:57:08.129996 kubelet[2602]: I0714 23:57:08.129821 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:57:08.129996 kubelet[2602]: I0714 23:57:08.129867 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:57:08.129996 kubelet[2602]: I0714 23:57:08.129885 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 23:57:08.129996 kubelet[2602]: I0714 23:57:08.129903 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/154612774223ef9aa3ff3f2a6d949658-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"154612774223ef9aa3ff3f2a6d949658\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:57:08.129996 kubelet[2602]: I0714 23:57:08.129921 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/154612774223ef9aa3ff3f2a6d949658-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"154612774223ef9aa3ff3f2a6d949658\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:57:08.130187 kubelet[2602]: I0714 23:57:08.129937 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:57:08.302363 kubelet[2602]: E0714 23:57:08.302237 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:08.304816 kubelet[2602]: E0714 23:57:08.304774 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:08.304816 kubelet[2602]: E0714 23:57:08.304817 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:08.365579 sudo[2618]: pam_unix(sudo:session): session closed for user root Jul 14 23:57:08.819934 kubelet[2602]: I0714 23:57:08.819884 2602 apiserver.go:52] "Watching apiserver" Jul 14 23:57:08.830324 kubelet[2602]: I0714 23:57:08.830284 2602 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 23:57:08.875034 kubelet[2602]: E0714 23:57:08.874950 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:08.875034 kubelet[2602]: E0714 23:57:08.874981 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:08.875198 kubelet[2602]: E0714 23:57:08.875180 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jul 14 23:57:09.112873 kubelet[2602]: I0714 23:57:09.112701 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.112680264 podStartE2EDuration="2.112680264s" podCreationTimestamp="2025-07-14 23:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:57:09.11076564 +0000 UTC m=+1.356665052" watchObservedRunningTime="2025-07-14 23:57:09.112680264 +0000 UTC m=+1.358579666" Jul 14 23:57:09.112873 kubelet[2602]: I0714 23:57:09.112838 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.112834098 podStartE2EDuration="2.112834098s" podCreationTimestamp="2025-07-14 23:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:57:09.041944955 +0000 UTC m=+1.287844358" watchObservedRunningTime="2025-07-14 23:57:09.112834098 +0000 UTC m=+1.358733510" Jul 14 23:57:09.134229 kubelet[2602]: I0714 23:57:09.134159 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.134140849 podStartE2EDuration="2.134140849s" podCreationTimestamp="2025-07-14 23:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:57:09.120231034 +0000 UTC m=+1.366130436" watchObservedRunningTime="2025-07-14 23:57:09.134140849 +0000 UTC m=+1.380040251" Jul 14 23:57:09.290581 kubelet[2602]: I0714 23:57:09.290537 2602 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 23:57:09.290909 containerd[1512]: time="2025-07-14T23:57:09.290857953Z" level=info msg="No cni config template is specified, wait for other 
system components to drop the config." Jul 14 23:57:09.291259 kubelet[2602]: I0714 23:57:09.291052 2602 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 23:57:09.536998 sudo[1692]: pam_unix(sudo:session): session closed for user root Jul 14 23:57:09.538655 sshd[1691]: Connection closed by 10.0.0.1 port 50748 Jul 14 23:57:09.539076 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Jul 14 23:57:09.543372 systemd[1]: sshd@6-10.0.0.18:22-10.0.0.1:50748.service: Deactivated successfully. Jul 14 23:57:09.545798 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 23:57:09.546031 systemd[1]: session-7.scope: Consumed 4.516s CPU time, 249.3M memory peak. Jul 14 23:57:09.547216 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit. Jul 14 23:57:09.548075 systemd-logind[1492]: Removed session 7. Jul 14 23:57:09.877238 kubelet[2602]: E0714 23:57:09.877108 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:09.891043 kubelet[2602]: E0714 23:57:09.887413 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:09.896006 systemd[1]: Created slice kubepods-besteffort-podbb83513d_4c87_4ac0_b4f7_c937f52a089e.slice - libcontainer container kubepods-besteffort-podbb83513d_4c87_4ac0_b4f7_c937f52a089e.slice. Jul 14 23:57:09.909782 systemd[1]: Created slice kubepods-burstable-podbf58066f_1fec_434e_8e1c_e19982e73c96.slice - libcontainer container kubepods-burstable-podbf58066f_1fec_434e_8e1c_e19982e73c96.slice. 
Jul 14 23:57:09.942395 kubelet[2602]: I0714 23:57:09.942321 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bb83513d-4c87-4ac0-b4f7-c937f52a089e-kube-proxy\") pod \"kube-proxy-vtdbm\" (UID: \"bb83513d-4c87-4ac0-b4f7-c937f52a089e\") " pod="kube-system/kube-proxy-vtdbm" Jul 14 23:57:09.942395 kubelet[2602]: I0714 23:57:09.942370 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-cni-path\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942395 kubelet[2602]: I0714 23:57:09.942391 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-host-proc-sys-kernel\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942604 kubelet[2602]: I0714 23:57:09.942414 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb83513d-4c87-4ac0-b4f7-c937f52a089e-lib-modules\") pod \"kube-proxy-vtdbm\" (UID: \"bb83513d-4c87-4ac0-b4f7-c937f52a089e\") " pod="kube-system/kube-proxy-vtdbm" Jul 14 23:57:09.942604 kubelet[2602]: I0714 23:57:09.942434 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-cilium-run\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942604 kubelet[2602]: I0714 23:57:09.942515 2602 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-etc-cni-netd\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942604 kubelet[2602]: I0714 23:57:09.942556 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-bpf-maps\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942604 kubelet[2602]: I0714 23:57:09.942579 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-lib-modules\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942714 kubelet[2602]: I0714 23:57:09.942612 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf58066f-1fec-434e-8e1c-e19982e73c96-cilium-config-path\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942714 kubelet[2602]: I0714 23:57:09.942630 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf58066f-1fec-434e-8e1c-e19982e73c96-clustermesh-secrets\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942714 kubelet[2602]: I0714 23:57:09.942645 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/bf58066f-1fec-434e-8e1c-e19982e73c96-hubble-tls\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942714 kubelet[2602]: I0714 23:57:09.942659 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5sl6\" (UniqueName: \"kubernetes.io/projected/bf58066f-1fec-434e-8e1c-e19982e73c96-kube-api-access-l5sl6\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942714 kubelet[2602]: I0714 23:57:09.942692 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcfgc\" (UniqueName: \"kubernetes.io/projected/bb83513d-4c87-4ac0-b4f7-c937f52a089e-kube-api-access-qcfgc\") pod \"kube-proxy-vtdbm\" (UID: \"bb83513d-4c87-4ac0-b4f7-c937f52a089e\") " pod="kube-system/kube-proxy-vtdbm" Jul 14 23:57:09.942816 kubelet[2602]: I0714 23:57:09.942709 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-cilium-cgroup\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942816 kubelet[2602]: I0714 23:57:09.942729 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-xtables-lock\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942816 kubelet[2602]: I0714 23:57:09.942743 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb83513d-4c87-4ac0-b4f7-c937f52a089e-xtables-lock\") pod 
\"kube-proxy-vtdbm\" (UID: \"bb83513d-4c87-4ac0-b4f7-c937f52a089e\") " pod="kube-system/kube-proxy-vtdbm" Jul 14 23:57:09.942816 kubelet[2602]: I0714 23:57:09.942759 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-host-proc-sys-net\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:09.942816 kubelet[2602]: I0714 23:57:09.942779 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-hostproc\") pod \"cilium-c42xt\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") " pod="kube-system/cilium-c42xt" Jul 14 23:57:10.207954 kubelet[2602]: E0714 23:57:10.207792 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:10.209049 containerd[1512]: time="2025-07-14T23:57:10.208426099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vtdbm,Uid:bb83513d-4c87-4ac0-b4f7-c937f52a089e,Namespace:kube-system,Attempt:0,}" Jul 14 23:57:10.212571 kubelet[2602]: E0714 23:57:10.212538 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:10.212901 containerd[1512]: time="2025-07-14T23:57:10.212869891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c42xt,Uid:bf58066f-1fec-434e-8e1c-e19982e73c96,Namespace:kube-system,Attempt:0,}" Jul 14 23:57:10.407925 systemd[1]: Created slice kubepods-besteffort-podf3a1483d_7643_4fb7_bfeb_ede147797f61.slice - libcontainer container kubepods-besteffort-podf3a1483d_7643_4fb7_bfeb_ede147797f61.slice. 
Jul 14 23:57:10.437462 containerd[1512]: time="2025-07-14T23:57:10.436844370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:57:10.437462 containerd[1512]: time="2025-07-14T23:57:10.436905967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:57:10.437462 containerd[1512]: time="2025-07-14T23:57:10.436916177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:57:10.437462 containerd[1512]: time="2025-07-14T23:57:10.437001329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:57:10.445909 containerd[1512]: time="2025-07-14T23:57:10.445282265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:57:10.445909 containerd[1512]: time="2025-07-14T23:57:10.445353260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:57:10.445909 containerd[1512]: time="2025-07-14T23:57:10.445366396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:57:10.445909 containerd[1512]: time="2025-07-14T23:57:10.445460565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:57:10.447425 kubelet[2602]: I0714 23:57:10.447328 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6llx\" (UniqueName: \"kubernetes.io/projected/f3a1483d-7643-4fb7-bfeb-ede147797f61-kube-api-access-m6llx\") pod \"cilium-operator-5d85765b45-f7gnt\" (UID: \"f3a1483d-7643-4fb7-bfeb-ede147797f61\") " pod="kube-system/cilium-operator-5d85765b45-f7gnt" Jul 14 23:57:10.447491 kubelet[2602]: I0714 23:57:10.447471 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3a1483d-7643-4fb7-bfeb-ede147797f61-cilium-config-path\") pod \"cilium-operator-5d85765b45-f7gnt\" (UID: \"f3a1483d-7643-4fb7-bfeb-ede147797f61\") " pod="kube-system/cilium-operator-5d85765b45-f7gnt" Jul 14 23:57:10.457160 systemd[1]: Started cri-containerd-a91cc796a85e326dd84c8716f978df3015cf103365b06dbd3d70e1196d2a6d27.scope - libcontainer container a91cc796a85e326dd84c8716f978df3015cf103365b06dbd3d70e1196d2a6d27. Jul 14 23:57:10.460842 systemd[1]: Started cri-containerd-3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a.scope - libcontainer container 3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a. 
Jul 14 23:57:10.483671 containerd[1512]: time="2025-07-14T23:57:10.483590553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vtdbm,Uid:bb83513d-4c87-4ac0-b4f7-c937f52a089e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a91cc796a85e326dd84c8716f978df3015cf103365b06dbd3d70e1196d2a6d27\"" Jul 14 23:57:10.484522 kubelet[2602]: E0714 23:57:10.484498 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:10.485439 containerd[1512]: time="2025-07-14T23:57:10.485227965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c42xt,Uid:bf58066f-1fec-434e-8e1c-e19982e73c96,Namespace:kube-system,Attempt:0,} returns sandbox id \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\"" Jul 14 23:57:10.486210 kubelet[2602]: E0714 23:57:10.486190 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:10.487801 containerd[1512]: time="2025-07-14T23:57:10.487713463Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 14 23:57:10.488347 containerd[1512]: time="2025-07-14T23:57:10.488315280Z" level=info msg="CreateContainer within sandbox \"a91cc796a85e326dd84c8716f978df3015cf103365b06dbd3d70e1196d2a6d27\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 23:57:10.508455 containerd[1512]: time="2025-07-14T23:57:10.508386933Z" level=info msg="CreateContainer within sandbox \"a91cc796a85e326dd84c8716f978df3015cf103365b06dbd3d70e1196d2a6d27\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b9879033459be6c55f314b3dde2ce2c7471596e756aecf65c9e2e7340f58de04\"" Jul 14 23:57:10.509006 containerd[1512]: time="2025-07-14T23:57:10.508957350Z" 
level=info msg="StartContainer for \"b9879033459be6c55f314b3dde2ce2c7471596e756aecf65c9e2e7340f58de04\"" Jul 14 23:57:10.553276 systemd[1]: Started cri-containerd-b9879033459be6c55f314b3dde2ce2c7471596e756aecf65c9e2e7340f58de04.scope - libcontainer container b9879033459be6c55f314b3dde2ce2c7471596e756aecf65c9e2e7340f58de04. Jul 14 23:57:10.592471 containerd[1512]: time="2025-07-14T23:57:10.592425198Z" level=info msg="StartContainer for \"b9879033459be6c55f314b3dde2ce2c7471596e756aecf65c9e2e7340f58de04\" returns successfully" Jul 14 23:57:10.712900 kubelet[2602]: E0714 23:57:10.712156 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:10.713183 containerd[1512]: time="2025-07-14T23:57:10.712695242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-f7gnt,Uid:f3a1483d-7643-4fb7-bfeb-ede147797f61,Namespace:kube-system,Attempt:0,}" Jul 14 23:57:10.738946 containerd[1512]: time="2025-07-14T23:57:10.738838119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:57:10.738946 containerd[1512]: time="2025-07-14T23:57:10.738894506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:57:10.738946 containerd[1512]: time="2025-07-14T23:57:10.738906549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:57:10.739258 containerd[1512]: time="2025-07-14T23:57:10.739210700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:57:10.760203 systemd[1]: Started cri-containerd-431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b.scope - libcontainer container 431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b. Jul 14 23:57:10.798224 containerd[1512]: time="2025-07-14T23:57:10.798181057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-f7gnt,Uid:f3a1483d-7643-4fb7-bfeb-ede147797f61,Namespace:kube-system,Attempt:0,} returns sandbox id \"431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b\"" Jul 14 23:57:10.799424 kubelet[2602]: E0714 23:57:10.799315 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:10.881501 kubelet[2602]: E0714 23:57:10.881430 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:11.734467 kubelet[2602]: E0714 23:57:11.734185 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:11.746899 kubelet[2602]: I0714 23:57:11.746842 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vtdbm" podStartSLOduration=2.746824451 podStartE2EDuration="2.746824451s" podCreationTimestamp="2025-07-14 23:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:57:10.889997789 +0000 UTC m=+3.135897191" watchObservedRunningTime="2025-07-14 23:57:11.746824451 +0000 UTC m=+3.992723853" Jul 14 23:57:11.884633 kubelet[2602]: E0714 23:57:11.884594 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:12.885860 kubelet[2602]: E0714 23:57:12.885818 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:15.434882 update_engine[1493]: I20250714 23:57:15.434808 1493 update_attempter.cc:509] Updating boot flags... Jul 14 23:57:15.489080 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2985) Jul 14 23:57:15.601049 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2989) Jul 14 23:57:15.657045 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2989) Jul 14 23:57:16.872520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount502541085.mount: Deactivated successfully. Jul 14 23:57:17.036236 kubelet[2602]: E0714 23:57:17.036195 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:17.894866 kubelet[2602]: E0714 23:57:17.894815 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:19.891001 kubelet[2602]: E0714 23:57:19.890964 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:27.082465 containerd[1512]: time="2025-07-14T23:57:27.082404734Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:57:27.083094 
containerd[1512]: time="2025-07-14T23:57:27.083064218Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 14 23:57:27.084299 containerd[1512]: time="2025-07-14T23:57:27.084241758Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:57:27.085769 containerd[1512]: time="2025-07-14T23:57:27.085724695Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.597981836s" Jul 14 23:57:27.085769 containerd[1512]: time="2025-07-14T23:57:27.085754902Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 14 23:57:27.087005 containerd[1512]: time="2025-07-14T23:57:27.086954794Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 14 23:57:27.088638 containerd[1512]: time="2025-07-14T23:57:27.088604355Z" level=info msg="CreateContainer within sandbox \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 23:57:27.102301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801006447.mount: Deactivated successfully. 
Jul 14 23:57:27.103222 containerd[1512]: time="2025-07-14T23:57:27.103178582Z" level=info msg="CreateContainer within sandbox \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5\"" Jul 14 23:57:27.103754 containerd[1512]: time="2025-07-14T23:57:27.103688494Z" level=info msg="StartContainer for \"6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5\"" Jul 14 23:57:27.140198 systemd[1]: Started cri-containerd-6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5.scope - libcontainer container 6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5. Jul 14 23:57:27.168519 containerd[1512]: time="2025-07-14T23:57:27.168476659Z" level=info msg="StartContainer for \"6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5\" returns successfully" Jul 14 23:57:27.182282 systemd[1]: cri-containerd-6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5.scope: Deactivated successfully. 
Jul 14 23:57:27.452504 containerd[1512]: time="2025-07-14T23:57:27.452364563Z" level=info msg="shim disconnected" id=6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5 namespace=k8s.io Jul 14 23:57:27.452504 containerd[1512]: time="2025-07-14T23:57:27.452416792Z" level=warning msg="cleaning up after shim disconnected" id=6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5 namespace=k8s.io Jul 14 23:57:27.452504 containerd[1512]: time="2025-07-14T23:57:27.452425629Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:57:27.910743 kubelet[2602]: E0714 23:57:27.910701 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:57:27.912558 containerd[1512]: time="2025-07-14T23:57:27.912458200Z" level=info msg="CreateContainer within sandbox \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 23:57:27.930538 containerd[1512]: time="2025-07-14T23:57:27.930489586Z" level=info msg="CreateContainer within sandbox \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942\"" Jul 14 23:57:27.931083 containerd[1512]: time="2025-07-14T23:57:27.931050544Z" level=info msg="StartContainer for \"731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942\"" Jul 14 23:57:27.973212 systemd[1]: Started cri-containerd-731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942.scope - libcontainer container 731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942. 
Jul 14 23:57:27.998505 containerd[1512]: time="2025-07-14T23:57:27.998454283Z" level=info msg="StartContainer for \"731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942\" returns successfully" Jul 14 23:57:28.012540 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 23:57:28.013109 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 23:57:28.013302 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 14 23:57:28.019328 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 23:57:28.019544 systemd[1]: cri-containerd-731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942.scope: Deactivated successfully. Jul 14 23:57:28.039792 containerd[1512]: time="2025-07-14T23:57:28.039734050Z" level=info msg="shim disconnected" id=731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942 namespace=k8s.io Jul 14 23:57:28.039792 containerd[1512]: time="2025-07-14T23:57:28.039781388Z" level=warning msg="cleaning up after shim disconnected" id=731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942 namespace=k8s.io Jul 14 23:57:28.039792 containerd[1512]: time="2025-07-14T23:57:28.039791898Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:57:28.045686 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 23:57:28.099260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5-rootfs.mount: Deactivated successfully. Jul 14 23:57:28.484843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3486051992.mount: Deactivated successfully. 
Jul 14 23:57:28.914814 kubelet[2602]: E0714 23:57:28.914779 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:28.916856 containerd[1512]: time="2025-07-14T23:57:28.916711391Z" level=info msg="CreateContainer within sandbox \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 23:57:29.305084 containerd[1512]: time="2025-07-14T23:57:29.305029533Z" level=info msg="CreateContainer within sandbox \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440\""
Jul 14 23:57:29.305749 containerd[1512]: time="2025-07-14T23:57:29.305688655Z" level=info msg="StartContainer for \"9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440\""
Jul 14 23:57:29.313615 containerd[1512]: time="2025-07-14T23:57:29.313361827Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:57:29.314236 containerd[1512]: time="2025-07-14T23:57:29.314190939Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jul 14 23:57:29.315671 containerd[1512]: time="2025-07-14T23:57:29.315634560Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 23:57:29.317928 containerd[1512]: time="2025-07-14T23:57:29.317761999Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.23076745s"
Jul 14 23:57:29.317928 containerd[1512]: time="2025-07-14T23:57:29.317801294Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 14 23:57:29.323890 containerd[1512]: time="2025-07-14T23:57:29.323735768Z" level=info msg="CreateContainer within sandbox \"431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 14 23:57:29.339054 containerd[1512]: time="2025-07-14T23:57:29.338998814Z" level=info msg="CreateContainer within sandbox \"431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\""
Jul 14 23:57:29.339463 containerd[1512]: time="2025-07-14T23:57:29.339418144Z" level=info msg="StartContainer for \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\""
Jul 14 23:57:29.344253 systemd[1]: Started cri-containerd-9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440.scope - libcontainer container 9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440.
Jul 14 23:57:29.377186 systemd[1]: Started cri-containerd-8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc.scope - libcontainer container 8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc.
Jul 14 23:57:29.377421 systemd[1]: cri-containerd-9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440.scope: Deactivated successfully.
Jul 14 23:57:29.382751 containerd[1512]: time="2025-07-14T23:57:29.382700629Z" level=info msg="StartContainer for \"9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440\" returns successfully"
Jul 14 23:57:29.445838 containerd[1512]: time="2025-07-14T23:57:29.445785855Z" level=info msg="StartContainer for \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\" returns successfully"
Jul 14 23:57:29.472027 containerd[1512]: time="2025-07-14T23:57:29.471944998Z" level=info msg="shim disconnected" id=9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440 namespace=k8s.io
Jul 14 23:57:29.472027 containerd[1512]: time="2025-07-14T23:57:29.472000672Z" level=warning msg="cleaning up after shim disconnected" id=9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440 namespace=k8s.io
Jul 14 23:57:29.472027 containerd[1512]: time="2025-07-14T23:57:29.472024017Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 23:57:29.917327 kubelet[2602]: E0714 23:57:29.917213 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:29.919226 kubelet[2602]: E0714 23:57:29.919209 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:29.920514 containerd[1512]: time="2025-07-14T23:57:29.920473067Z" level=info msg="CreateContainer within sandbox \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 14 23:57:30.295821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440-rootfs.mount: Deactivated successfully.
Jul 14 23:57:30.598560 containerd[1512]: time="2025-07-14T23:57:30.598433921Z" level=info msg="CreateContainer within sandbox \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d\""
Jul 14 23:57:30.598849 containerd[1512]: time="2025-07-14T23:57:30.598816452Z" level=info msg="StartContainer for \"391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d\""
Jul 14 23:57:30.640145 systemd[1]: Started cri-containerd-391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d.scope - libcontainer container 391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d.
Jul 14 23:57:30.665350 systemd[1]: cri-containerd-391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d.scope: Deactivated successfully.
Jul 14 23:57:30.734944 containerd[1512]: time="2025-07-14T23:57:30.734881557Z" level=info msg="StartContainer for \"391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d\" returns successfully"
Jul 14 23:57:30.817377 kubelet[2602]: I0714 23:57:30.817322 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-f7gnt" podStartSLOduration=2.296740073 podStartE2EDuration="20.817303371s" podCreationTimestamp="2025-07-14 23:57:10 +0000 UTC" firstStartedPulling="2025-07-14 23:57:10.799851942 +0000 UTC m=+3.045751344" lastFinishedPulling="2025-07-14 23:57:29.32041524 +0000 UTC m=+21.566314642" observedRunningTime="2025-07-14 23:57:30.652641716 +0000 UTC m=+22.898541118" watchObservedRunningTime="2025-07-14 23:57:30.817303371 +0000 UTC m=+23.063202773"
Jul 14 23:57:30.868579 containerd[1512]: time="2025-07-14T23:57:30.868426795Z" level=info msg="shim disconnected" id=391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d namespace=k8s.io
Jul 14 23:57:30.868579 containerd[1512]: time="2025-07-14T23:57:30.868487519Z" level=warning msg="cleaning up after shim disconnected" id=391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d namespace=k8s.io
Jul 14 23:57:30.868579 containerd[1512]: time="2025-07-14T23:57:30.868495994Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 23:57:30.968766 kubelet[2602]: E0714 23:57:30.968462 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:30.968766 kubelet[2602]: E0714 23:57:30.968730 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:30.970423 containerd[1512]: time="2025-07-14T23:57:30.970378094Z" level=info msg="CreateContainer within sandbox \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 14 23:57:31.295080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d-rootfs.mount: Deactivated successfully.
Jul 14 23:57:31.511355 containerd[1512]: time="2025-07-14T23:57:31.511313211Z" level=info msg="CreateContainer within sandbox \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\""
Jul 14 23:57:31.511734 containerd[1512]: time="2025-07-14T23:57:31.511710810Z" level=info msg="StartContainer for \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\""
Jul 14 23:57:31.541163 systemd[1]: Started cri-containerd-b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026.scope - libcontainer container b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026.
Jul 14 23:57:31.722077 systemd[1]: Started sshd@7-10.0.0.18:22-10.0.0.1:45994.service - OpenSSH per-connection server daemon (10.0.0.1:45994).
Jul 14 23:57:31.750583 containerd[1512]: time="2025-07-14T23:57:31.750550831Z" level=info msg="StartContainer for \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\" returns successfully"
Jul 14 23:57:31.787717 sshd[3357]: Accepted publickey for core from 10.0.0.1 port 45994 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:57:31.788435 sshd-session[3357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:57:31.795706 systemd-logind[1492]: New session 8 of user core.
Jul 14 23:57:31.802300 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 14 23:57:32.055087 kubelet[2602]: I0714 23:57:32.055034 2602 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 14 23:57:32.086637 sshd[3362]: Connection closed by 10.0.0.1 port 45994
Jul 14 23:57:32.087926 sshd-session[3357]: pam_unix(sshd:session): session closed for user core
Jul 14 23:57:32.089465 systemd[1]: Created slice kubepods-burstable-pod63b4949c_6044_4de0_84bc_11fbee3ff5f1.slice - libcontainer container kubepods-burstable-pod63b4949c_6044_4de0_84bc_11fbee3ff5f1.slice.
Jul 14 23:57:32.094715 systemd[1]: sshd@7-10.0.0.18:22-10.0.0.1:45994.service: Deactivated successfully.
Jul 14 23:57:32.097718 systemd[1]: session-8.scope: Deactivated successfully.
Jul 14 23:57:32.100899 kubelet[2602]: I0714 23:57:32.100476 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wghqx\" (UniqueName: \"kubernetes.io/projected/278201f6-b9af-4d42-8696-eef5c6038113-kube-api-access-wghqx\") pod \"coredns-7c65d6cfc9-dxlp8\" (UID: \"278201f6-b9af-4d42-8696-eef5c6038113\") " pod="kube-system/coredns-7c65d6cfc9-dxlp8"
Jul 14 23:57:32.100899 kubelet[2602]: I0714 23:57:32.100512 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/278201f6-b9af-4d42-8696-eef5c6038113-config-volume\") pod \"coredns-7c65d6cfc9-dxlp8\" (UID: \"278201f6-b9af-4d42-8696-eef5c6038113\") " pod="kube-system/coredns-7c65d6cfc9-dxlp8"
Jul 14 23:57:32.100899 kubelet[2602]: I0714 23:57:32.100530 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63b4949c-6044-4de0-84bc-11fbee3ff5f1-config-volume\") pod \"coredns-7c65d6cfc9-24bfb\" (UID: \"63b4949c-6044-4de0-84bc-11fbee3ff5f1\") " pod="kube-system/coredns-7c65d6cfc9-24bfb"
Jul 14 23:57:32.100899 kubelet[2602]: I0714 23:57:32.100549 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4m9h\" (UniqueName: \"kubernetes.io/projected/63b4949c-6044-4de0-84bc-11fbee3ff5f1-kube-api-access-f4m9h\") pod \"coredns-7c65d6cfc9-24bfb\" (UID: \"63b4949c-6044-4de0-84bc-11fbee3ff5f1\") " pod="kube-system/coredns-7c65d6cfc9-24bfb"
Jul 14 23:57:32.103254 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit.
Jul 14 23:57:32.106794 systemd[1]: Created slice kubepods-burstable-pod278201f6_b9af_4d42_8696_eef5c6038113.slice - libcontainer container kubepods-burstable-pod278201f6_b9af_4d42_8696_eef5c6038113.slice.
Jul 14 23:57:32.107341 systemd-logind[1492]: Removed session 8.
Jul 14 23:57:32.397496 kubelet[2602]: E0714 23:57:32.397379 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:32.398132 containerd[1512]: time="2025-07-14T23:57:32.398062054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-24bfb,Uid:63b4949c-6044-4de0-84bc-11fbee3ff5f1,Namespace:kube-system,Attempt:0,}"
Jul 14 23:57:32.411576 kubelet[2602]: E0714 23:57:32.411554 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:32.412060 containerd[1512]: time="2025-07-14T23:57:32.412009859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dxlp8,Uid:278201f6-b9af-4d42-8696-eef5c6038113,Namespace:kube-system,Attempt:0,}"
Jul 14 23:57:32.979082 kubelet[2602]: E0714 23:57:32.979048 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:32.992419 kubelet[2602]: I0714 23:57:32.992356 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c42xt" podStartSLOduration=7.392523866 podStartE2EDuration="23.992340528s" podCreationTimestamp="2025-07-14 23:57:09 +0000 UTC" firstStartedPulling="2025-07-14 23:57:10.486948334 +0000 UTC m=+2.732847736" lastFinishedPulling="2025-07-14 23:57:27.086764986 +0000 UTC m=+19.332664398" observedRunningTime="2025-07-14 23:57:32.991429734 +0000 UTC m=+25.237329136" watchObservedRunningTime="2025-07-14 23:57:32.992340528 +0000 UTC m=+25.238239940"
Jul 14 23:57:33.805979 systemd-networkd[1414]: cilium_host: Link UP
Jul 14 23:57:33.806702 systemd-networkd[1414]: cilium_net: Link UP
Jul 14 23:57:33.807037 systemd-networkd[1414]: cilium_net: Gained carrier
Jul 14 23:57:33.807326 systemd-networkd[1414]: cilium_host: Gained carrier
Jul 14 23:57:33.823240 systemd-networkd[1414]: cilium_net: Gained IPv6LL
Jul 14 23:57:33.909512 systemd-networkd[1414]: cilium_vxlan: Link UP
Jul 14 23:57:33.909524 systemd-networkd[1414]: cilium_vxlan: Gained carrier
Jul 14 23:57:33.922244 systemd-networkd[1414]: cilium_host: Gained IPv6LL
Jul 14 23:57:34.112047 kernel: NET: Registered PF_ALG protocol family
Jul 14 23:57:34.214220 kubelet[2602]: E0714 23:57:34.214147 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:34.749791 systemd-networkd[1414]: lxc_health: Link UP
Jul 14 23:57:34.752407 systemd-networkd[1414]: lxc_health: Gained carrier
Jul 14 23:57:35.169049 kernel: eth0: renamed from tmp11c58
Jul 14 23:57:35.183048 kernel: eth0: renamed from tmpf452c
Jul 14 23:57:35.190343 systemd-networkd[1414]: lxc164001df85f4: Link UP
Jul 14 23:57:35.193665 systemd-networkd[1414]: lxc164001df85f4: Gained carrier
Jul 14 23:57:35.193842 systemd-networkd[1414]: lxcc21a0b000814: Link UP
Jul 14 23:57:35.196268 systemd-networkd[1414]: lxcc21a0b000814: Gained carrier
Jul 14 23:57:35.728376 systemd-networkd[1414]: cilium_vxlan: Gained IPv6LL
Jul 14 23:57:35.856185 systemd-networkd[1414]: lxc_health: Gained IPv6LL
Jul 14 23:57:36.214358 kubelet[2602]: E0714 23:57:36.214327 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:36.627064 systemd-networkd[1414]: lxcc21a0b000814: Gained IPv6LL
Jul 14 23:57:36.816586 systemd-networkd[1414]: lxc164001df85f4: Gained IPv6LL
Jul 14 23:57:36.985108 kubelet[2602]: E0714 23:57:36.984968 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:37.107313 systemd[1]: Started sshd@8-10.0.0.18:22-10.0.0.1:46010.service - OpenSSH per-connection server daemon (10.0.0.1:46010).
Jul 14 23:57:37.145531 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 46010 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:57:37.147083 sshd-session[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:57:37.151456 systemd-logind[1492]: New session 9 of user core.
Jul 14 23:57:37.161167 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 14 23:57:37.383676 sshd[3851]: Connection closed by 10.0.0.1 port 46010
Jul 14 23:57:37.384046 sshd-session[3849]: pam_unix(sshd:session): session closed for user core
Jul 14 23:57:37.387593 systemd[1]: sshd@8-10.0.0.18:22-10.0.0.1:46010.service: Deactivated successfully.
Jul 14 23:57:37.389706 systemd[1]: session-9.scope: Deactivated successfully.
Jul 14 23:57:37.390413 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit.
Jul 14 23:57:37.391306 systemd-logind[1492]: Removed session 9.
Jul 14 23:57:37.986748 kubelet[2602]: E0714 23:57:37.986716 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:38.470631 containerd[1512]: time="2025-07-14T23:57:38.470540209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 23:57:38.470631 containerd[1512]: time="2025-07-14T23:57:38.470596635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 23:57:38.470631 containerd[1512]: time="2025-07-14T23:57:38.470607745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 23:57:38.471109 containerd[1512]: time="2025-07-14T23:57:38.470685562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 23:57:38.484741 containerd[1512]: time="2025-07-14T23:57:38.484484244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 23:57:38.484741 containerd[1512]: time="2025-07-14T23:57:38.484596926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 23:57:38.484741 containerd[1512]: time="2025-07-14T23:57:38.484611944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 23:57:38.485598 containerd[1512]: time="2025-07-14T23:57:38.485483433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 23:57:38.492164 systemd[1]: Started cri-containerd-11c58c3e73ccf7f4810e7038e5c6244662ec5d3c0b9680da2566b6626adb1ec7.scope - libcontainer container 11c58c3e73ccf7f4810e7038e5c6244662ec5d3c0b9680da2566b6626adb1ec7.
Jul 14 23:57:38.504291 systemd[1]: run-containerd-runc-k8s.io-f452cb4f31fa883b461b6e2c62f70b3338eb4b1e680a2108e1ecbf61593918ae-runc.ygupTj.mount: Deactivated successfully.
Jul 14 23:57:38.516155 systemd[1]: Started cri-containerd-f452cb4f31fa883b461b6e2c62f70b3338eb4b1e680a2108e1ecbf61593918ae.scope - libcontainer container f452cb4f31fa883b461b6e2c62f70b3338eb4b1e680a2108e1ecbf61593918ae.
Jul 14 23:57:38.520395 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 23:57:38.528689 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 23:57:38.547175 containerd[1512]: time="2025-07-14T23:57:38.547104776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-24bfb,Uid:63b4949c-6044-4de0-84bc-11fbee3ff5f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"11c58c3e73ccf7f4810e7038e5c6244662ec5d3c0b9680da2566b6626adb1ec7\""
Jul 14 23:57:38.547857 kubelet[2602]: E0714 23:57:38.547820 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:38.550114 containerd[1512]: time="2025-07-14T23:57:38.550071225Z" level=info msg="CreateContainer within sandbox \"11c58c3e73ccf7f4810e7038e5c6244662ec5d3c0b9680da2566b6626adb1ec7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 23:57:38.557946 containerd[1512]: time="2025-07-14T23:57:38.557895652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dxlp8,Uid:278201f6-b9af-4d42-8696-eef5c6038113,Namespace:kube-system,Attempt:0,} returns sandbox id \"f452cb4f31fa883b461b6e2c62f70b3338eb4b1e680a2108e1ecbf61593918ae\""
Jul 14 23:57:38.558804 kubelet[2602]: E0714 23:57:38.558766 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:38.560870 containerd[1512]: time="2025-07-14T23:57:38.560837815Z" level=info msg="CreateContainer within sandbox \"f452cb4f31fa883b461b6e2c62f70b3338eb4b1e680a2108e1ecbf61593918ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 23:57:38.569839 containerd[1512]: time="2025-07-14T23:57:38.569791214Z" level=info msg="CreateContainer within sandbox \"11c58c3e73ccf7f4810e7038e5c6244662ec5d3c0b9680da2566b6626adb1ec7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d39d58e4659923ae1d74c678bd75773e37561102bbc4421f5d39c6f0eec9d12f\""
Jul 14 23:57:38.570198 containerd[1512]: time="2025-07-14T23:57:38.570165809Z" level=info msg="StartContainer for \"d39d58e4659923ae1d74c678bd75773e37561102bbc4421f5d39c6f0eec9d12f\""
Jul 14 23:57:38.581874 containerd[1512]: time="2025-07-14T23:57:38.581846937Z" level=info msg="CreateContainer within sandbox \"f452cb4f31fa883b461b6e2c62f70b3338eb4b1e680a2108e1ecbf61593918ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21669b14e07eadb935254c8c1b97695a753364fe130f1a62dd5dd7985c802918\""
Jul 14 23:57:38.582548 containerd[1512]: time="2025-07-14T23:57:38.582509123Z" level=info msg="StartContainer for \"21669b14e07eadb935254c8c1b97695a753364fe130f1a62dd5dd7985c802918\""
Jul 14 23:57:38.599186 systemd[1]: Started cri-containerd-d39d58e4659923ae1d74c678bd75773e37561102bbc4421f5d39c6f0eec9d12f.scope - libcontainer container d39d58e4659923ae1d74c678bd75773e37561102bbc4421f5d39c6f0eec9d12f.
Jul 14 23:57:38.616187 systemd[1]: Started cri-containerd-21669b14e07eadb935254c8c1b97695a753364fe130f1a62dd5dd7985c802918.scope - libcontainer container 21669b14e07eadb935254c8c1b97695a753364fe130f1a62dd5dd7985c802918.
Jul 14 23:57:38.641370 containerd[1512]: time="2025-07-14T23:57:38.641333246Z" level=info msg="StartContainer for \"d39d58e4659923ae1d74c678bd75773e37561102bbc4421f5d39c6f0eec9d12f\" returns successfully"
Jul 14 23:57:38.649299 containerd[1512]: time="2025-07-14T23:57:38.649212816Z" level=info msg="StartContainer for \"21669b14e07eadb935254c8c1b97695a753364fe130f1a62dd5dd7985c802918\" returns successfully"
Jul 14 23:57:38.992041 kubelet[2602]: E0714 23:57:38.990148 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:38.993403 kubelet[2602]: E0714 23:57:38.993378 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:39.001387 kubelet[2602]: I0714 23:57:39.001326 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dxlp8" podStartSLOduration=29.001309487 podStartE2EDuration="29.001309487s" podCreationTimestamp="2025-07-14 23:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:57:39.000637283 +0000 UTC m=+31.246536695" watchObservedRunningTime="2025-07-14 23:57:39.001309487 +0000 UTC m=+31.247208889"
Jul 14 23:57:39.020303 kubelet[2602]: I0714 23:57:39.020228 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-24bfb" podStartSLOduration=29.020207193 podStartE2EDuration="29.020207193s" podCreationTimestamp="2025-07-14 23:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:57:39.010132919 +0000 UTC m=+31.256032331" watchObservedRunningTime="2025-07-14 23:57:39.020207193 +0000 UTC m=+31.266106605"
Jul 14 23:57:39.995129 kubelet[2602]: E0714 23:57:39.995100 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:39.995623 kubelet[2602]: E0714 23:57:39.995100 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:40.997093 kubelet[2602]: E0714 23:57:40.997061 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:40.997579 kubelet[2602]: E0714 23:57:40.997224 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:57:42.399231 systemd[1]: Started sshd@9-10.0.0.18:22-10.0.0.1:35614.service - OpenSSH per-connection server daemon (10.0.0.1:35614).
Jul 14 23:57:42.439653 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 35614 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:57:42.441348 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:57:42.445597 systemd-logind[1492]: New session 10 of user core.
Jul 14 23:57:42.455147 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 14 23:57:42.574759 sshd[4049]: Connection closed by 10.0.0.1 port 35614
Jul 14 23:57:42.575193 sshd-session[4047]: pam_unix(sshd:session): session closed for user core
Jul 14 23:57:42.579763 systemd[1]: sshd@9-10.0.0.18:22-10.0.0.1:35614.service: Deactivated successfully.
Jul 14 23:57:42.582366 systemd[1]: session-10.scope: Deactivated successfully.
Jul 14 23:57:42.583129 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit.
Jul 14 23:57:42.583954 systemd-logind[1492]: Removed session 10.
Jul 14 23:57:47.589731 systemd[1]: Started sshd@10-10.0.0.18:22-10.0.0.1:35630.service - OpenSSH per-connection server daemon (10.0.0.1:35630).
Jul 14 23:57:47.626174 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 35630 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:57:47.627593 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:57:47.631620 systemd-logind[1492]: New session 11 of user core.
Jul 14 23:57:47.645162 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 14 23:57:47.754742 sshd[4066]: Connection closed by 10.0.0.1 port 35630
Jul 14 23:57:47.755133 sshd-session[4064]: pam_unix(sshd:session): session closed for user core
Jul 14 23:57:47.769821 systemd[1]: sshd@10-10.0.0.18:22-10.0.0.1:35630.service: Deactivated successfully.
Jul 14 23:57:47.771676 systemd[1]: session-11.scope: Deactivated successfully.
Jul 14 23:57:47.773151 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit.
Jul 14 23:57:47.774423 systemd[1]: Started sshd@11-10.0.0.18:22-10.0.0.1:35644.service - OpenSSH per-connection server daemon (10.0.0.1:35644).
Jul 14 23:57:47.775381 systemd-logind[1492]: Removed session 11.
Jul 14 23:57:47.810558 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 35644 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:57:47.811952 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:57:47.816160 systemd-logind[1492]: New session 12 of user core.
Jul 14 23:57:47.821130 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 14 23:57:47.966705 sshd[4083]: Connection closed by 10.0.0.1 port 35644
Jul 14 23:57:47.967512 sshd-session[4080]: pam_unix(sshd:session): session closed for user core
Jul 14 23:57:47.978008 systemd[1]: sshd@11-10.0.0.18:22-10.0.0.1:35644.service: Deactivated successfully.
Jul 14 23:57:47.981371 systemd[1]: session-12.scope: Deactivated successfully.
Jul 14 23:57:47.983235 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit.
Jul 14 23:57:47.986996 systemd-logind[1492]: Removed session 12.
Jul 14 23:57:48.000343 systemd[1]: Started sshd@12-10.0.0.18:22-10.0.0.1:35656.service - OpenSSH per-connection server daemon (10.0.0.1:35656).
Jul 14 23:57:48.038721 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 35656 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:57:48.040240 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:57:48.044725 systemd-logind[1492]: New session 13 of user core.
Jul 14 23:57:48.054150 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 14 23:57:48.172293 sshd[4097]: Connection closed by 10.0.0.1 port 35656
Jul 14 23:57:48.172790 sshd-session[4094]: pam_unix(sshd:session): session closed for user core
Jul 14 23:57:48.177753 systemd[1]: sshd@12-10.0.0.18:22-10.0.0.1:35656.service: Deactivated successfully.
Jul 14 23:57:48.180176 systemd[1]: session-13.scope: Deactivated successfully.
Jul 14 23:57:48.181001 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit.
Jul 14 23:57:48.181954 systemd-logind[1492]: Removed session 13.
Jul 14 23:57:53.186635 systemd[1]: Started sshd@13-10.0.0.18:22-10.0.0.1:39950.service - OpenSSH per-connection server daemon (10.0.0.1:39950).
Jul 14 23:57:53.225072 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 39950 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:57:53.226633 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:57:53.231993 systemd-logind[1492]: New session 14 of user core.
Jul 14 23:57:53.242200 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 14 23:57:53.357382 sshd[4112]: Connection closed by 10.0.0.1 port 39950
Jul 14 23:57:53.357813 sshd-session[4110]: pam_unix(sshd:session): session closed for user core
Jul 14 23:57:53.361904 systemd[1]: sshd@13-10.0.0.18:22-10.0.0.1:39950.service: Deactivated successfully.
Jul 14 23:57:53.364589 systemd[1]: session-14.scope: Deactivated successfully.
Jul 14 23:57:53.365385 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit.
Jul 14 23:57:53.366290 systemd-logind[1492]: Removed session 14.
Jul 14 23:57:58.369697 systemd[1]: Started sshd@14-10.0.0.18:22-10.0.0.1:39952.service - OpenSSH per-connection server daemon (10.0.0.1:39952).
Jul 14 23:57:58.405567 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 39952 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:57:58.406935 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:57:58.411032 systemd-logind[1492]: New session 15 of user core.
Jul 14 23:57:58.423142 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 14 23:57:58.534257 sshd[4127]: Connection closed by 10.0.0.1 port 39952 Jul 14 23:57:58.534668 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Jul 14 23:57:58.551959 systemd[1]: sshd@14-10.0.0.18:22-10.0.0.1:39952.service: Deactivated successfully. Jul 14 23:57:58.554590 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 23:57:58.556575 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit. Jul 14 23:57:58.564407 systemd[1]: Started sshd@15-10.0.0.18:22-10.0.0.1:39968.service - OpenSSH per-connection server daemon (10.0.0.1:39968). Jul 14 23:57:58.565387 systemd-logind[1492]: Removed session 15. Jul 14 23:57:58.601945 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 39968 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM Jul 14 23:57:58.603718 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:57:58.608460 systemd-logind[1492]: New session 16 of user core. Jul 14 23:57:58.619145 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 14 23:57:58.970290 sshd[4143]: Connection closed by 10.0.0.1 port 39968 Jul 14 23:57:58.971039 sshd-session[4140]: pam_unix(sshd:session): session closed for user core Jul 14 23:57:58.986738 systemd[1]: sshd@15-10.0.0.18:22-10.0.0.1:39968.service: Deactivated successfully. Jul 14 23:57:58.988811 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 23:57:58.990370 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit. Jul 14 23:57:58.998266 systemd[1]: Started sshd@16-10.0.0.18:22-10.0.0.1:58842.service - OpenSSH per-connection server daemon (10.0.0.1:58842). Jul 14 23:57:58.999451 systemd-logind[1492]: Removed session 16. 
Jul 14 23:57:59.040559 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 58842 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM Jul 14 23:57:59.041838 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:57:59.045854 systemd-logind[1492]: New session 17 of user core. Jul 14 23:57:59.055124 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 14 23:58:00.243822 sshd[4156]: Connection closed by 10.0.0.1 port 58842 Jul 14 23:58:00.244974 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Jul 14 23:58:00.256721 systemd[1]: sshd@16-10.0.0.18:22-10.0.0.1:58842.service: Deactivated successfully. Jul 14 23:58:00.260304 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 23:58:00.262650 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit. Jul 14 23:58:00.271303 systemd[1]: Started sshd@17-10.0.0.18:22-10.0.0.1:58856.service - OpenSSH per-connection server daemon (10.0.0.1:58856). Jul 14 23:58:00.271855 systemd-logind[1492]: Removed session 17. Jul 14 23:58:00.305266 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 58856 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM Jul 14 23:58:00.306836 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:58:00.311305 systemd-logind[1492]: New session 18 of user core. Jul 14 23:58:00.323156 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 14 23:58:00.749920 sshd[4178]: Connection closed by 10.0.0.1 port 58856 Jul 14 23:58:00.750394 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Jul 14 23:58:00.761112 systemd[1]: sshd@17-10.0.0.18:22-10.0.0.1:58856.service: Deactivated successfully. Jul 14 23:58:00.763345 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 23:58:00.764971 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit. 
Jul 14 23:58:00.773447 systemd[1]: Started sshd@18-10.0.0.18:22-10.0.0.1:58870.service - OpenSSH per-connection server daemon (10.0.0.1:58870). Jul 14 23:58:00.774740 systemd-logind[1492]: Removed session 18. Jul 14 23:58:00.806035 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 58870 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM Jul 14 23:58:00.807680 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:58:00.812353 systemd-logind[1492]: New session 19 of user core. Jul 14 23:58:00.824273 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 14 23:58:00.950465 sshd[4192]: Connection closed by 10.0.0.1 port 58870 Jul 14 23:58:00.950841 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Jul 14 23:58:00.955533 systemd[1]: sshd@18-10.0.0.18:22-10.0.0.1:58870.service: Deactivated successfully. Jul 14 23:58:00.957963 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 23:58:00.958788 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit. Jul 14 23:58:00.959926 systemd-logind[1492]: Removed session 19. Jul 14 23:58:05.963133 systemd[1]: Started sshd@19-10.0.0.18:22-10.0.0.1:58872.service - OpenSSH per-connection server daemon (10.0.0.1:58872). Jul 14 23:58:06.000707 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 58872 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM Jul 14 23:58:06.002189 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:58:06.006296 systemd-logind[1492]: New session 20 of user core. Jul 14 23:58:06.016162 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 14 23:58:06.125600 sshd[4207]: Connection closed by 10.0.0.1 port 58872 Jul 14 23:58:06.125969 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Jul 14 23:58:06.129409 systemd[1]: sshd@19-10.0.0.18:22-10.0.0.1:58872.service: Deactivated successfully. Jul 14 23:58:06.131392 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 23:58:06.132194 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit. Jul 14 23:58:06.133097 systemd-logind[1492]: Removed session 20. Jul 14 23:58:11.138039 systemd[1]: Started sshd@20-10.0.0.18:22-10.0.0.1:35454.service - OpenSSH per-connection server daemon (10.0.0.1:35454). Jul 14 23:58:11.174802 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 35454 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM Jul 14 23:58:11.176212 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:58:11.180392 systemd-logind[1492]: New session 21 of user core. Jul 14 23:58:11.195153 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 14 23:58:11.307700 sshd[4230]: Connection closed by 10.0.0.1 port 35454 Jul 14 23:58:11.308090 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Jul 14 23:58:11.312428 systemd[1]: sshd@20-10.0.0.18:22-10.0.0.1:35454.service: Deactivated successfully. Jul 14 23:58:11.314584 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 23:58:11.315310 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit. Jul 14 23:58:11.316165 systemd-logind[1492]: Removed session 21. Jul 14 23:58:13.858310 kubelet[2602]: E0714 23:58:13.858270 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:58:16.320057 systemd[1]: Started sshd@21-10.0.0.18:22-10.0.0.1:35468.service - OpenSSH per-connection server daemon (10.0.0.1:35468). 
Jul 14 23:58:16.356315 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 35468 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM Jul 14 23:58:16.357612 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:58:16.361692 systemd-logind[1492]: New session 22 of user core. Jul 14 23:58:16.371134 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 14 23:58:16.477736 sshd[4246]: Connection closed by 10.0.0.1 port 35468 Jul 14 23:58:16.478100 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Jul 14 23:58:16.481708 systemd[1]: sshd@21-10.0.0.18:22-10.0.0.1:35468.service: Deactivated successfully. Jul 14 23:58:16.483862 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 23:58:16.484621 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit. Jul 14 23:58:16.485634 systemd-logind[1492]: Removed session 22. Jul 14 23:58:17.858620 kubelet[2602]: E0714 23:58:17.858587 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:58:21.489996 systemd[1]: Started sshd@22-10.0.0.18:22-10.0.0.1:57184.service - OpenSSH per-connection server daemon (10.0.0.1:57184). Jul 14 23:58:21.527620 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 57184 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM Jul 14 23:58:21.529316 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:58:21.533920 systemd-logind[1492]: New session 23 of user core. Jul 14 23:58:21.545143 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 14 23:58:21.652724 sshd[4262]: Connection closed by 10.0.0.1 port 57184 Jul 14 23:58:21.653250 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Jul 14 23:58:21.661987 systemd[1]: sshd@22-10.0.0.18:22-10.0.0.1:57184.service: Deactivated successfully. Jul 14 23:58:21.664095 systemd[1]: session-23.scope: Deactivated successfully. Jul 14 23:58:21.665702 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit. Jul 14 23:58:21.673291 systemd[1]: Started sshd@23-10.0.0.18:22-10.0.0.1:57200.service - OpenSSH per-connection server daemon (10.0.0.1:57200). Jul 14 23:58:21.674188 systemd-logind[1492]: Removed session 23. Jul 14 23:58:21.707115 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 57200 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM Jul 14 23:58:21.708574 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:58:21.712858 systemd-logind[1492]: New session 24 of user core. Jul 14 23:58:21.723144 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 14 23:58:23.046074 containerd[1512]: time="2025-07-14T23:58:23.046031545Z" level=info msg="StopContainer for \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\" with timeout 30 (s)" Jul 14 23:58:23.060625 containerd[1512]: time="2025-07-14T23:58:23.060586097Z" level=info msg="Stop container \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\" with signal terminated" Jul 14 23:58:23.072200 systemd[1]: cri-containerd-8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc.scope: Deactivated successfully. 
Jul 14 23:58:23.079653 containerd[1512]: time="2025-07-14T23:58:23.078793083Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 23:58:23.081135 containerd[1512]: time="2025-07-14T23:58:23.080952001Z" level=info msg="StopContainer for \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\" with timeout 2 (s)" Jul 14 23:58:23.081343 containerd[1512]: time="2025-07-14T23:58:23.081295679Z" level=info msg="Stop container \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\" with signal terminated" Jul 14 23:58:23.088577 systemd-networkd[1414]: lxc_health: Link DOWN Jul 14 23:58:23.088586 systemd-networkd[1414]: lxc_health: Lost carrier Jul 14 23:58:23.097232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc-rootfs.mount: Deactivated successfully. Jul 14 23:58:23.110879 systemd[1]: cri-containerd-b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026.scope: Deactivated successfully. Jul 14 23:58:23.111406 systemd[1]: cri-containerd-b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026.scope: Consumed 6.636s CPU time, 121.1M memory peak, 424K read from disk, 13.3M written to disk. 
Jul 14 23:58:23.113816 containerd[1512]: time="2025-07-14T23:58:23.113729059Z" level=info msg="shim disconnected" id=8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc namespace=k8s.io
Jul 14 23:58:23.113816 containerd[1512]: time="2025-07-14T23:58:23.113800475Z" level=warning msg="cleaning up after shim disconnected" id=8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc namespace=k8s.io
Jul 14 23:58:23.113816 containerd[1512]: time="2025-07-14T23:58:23.113812188Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 23:58:23.131123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026-rootfs.mount: Deactivated successfully.
Jul 14 23:58:23.136441 containerd[1512]: time="2025-07-14T23:58:23.136388570Z" level=info msg="StopContainer for \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\" returns successfully"
Jul 14 23:58:23.138454 containerd[1512]: time="2025-07-14T23:58:23.138381971Z" level=info msg="shim disconnected" id=b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026 namespace=k8s.io
Jul 14 23:58:23.138454 containerd[1512]: time="2025-07-14T23:58:23.138428140Z" level=warning msg="cleaning up after shim disconnected" id=b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026 namespace=k8s.io
Jul 14 23:58:23.138454 containerd[1512]: time="2025-07-14T23:58:23.138437277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 23:58:23.140530 containerd[1512]: time="2025-07-14T23:58:23.140492727Z" level=info msg="StopPodSandbox for \"431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b\""
Jul 14 23:58:23.150245 containerd[1512]: time="2025-07-14T23:58:23.140535610Z" level=info msg="Container to stop \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 23:58:23.153217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b-shm.mount: Deactivated successfully.
Jul 14 23:58:23.157773 containerd[1512]: time="2025-07-14T23:58:23.157731322Z" level=info msg="StopContainer for \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\" returns successfully"
Jul 14 23:58:23.158276 systemd[1]: cri-containerd-431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b.scope: Deactivated successfully.
Jul 14 23:58:23.158839 containerd[1512]: time="2025-07-14T23:58:23.158346369Z" level=info msg="StopPodSandbox for \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\""
Jul 14 23:58:23.158839 containerd[1512]: time="2025-07-14T23:58:23.158380824Z" level=info msg="Container to stop \"731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 23:58:23.158839 containerd[1512]: time="2025-07-14T23:58:23.158417554Z" level=info msg="Container to stop \"9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 23:58:23.158839 containerd[1512]: time="2025-07-14T23:58:23.158429778Z" level=info msg="Container to stop \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 23:58:23.158839 containerd[1512]: time="2025-07-14T23:58:23.158443103Z" level=info msg="Container to stop \"6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 23:58:23.158839 containerd[1512]: time="2025-07-14T23:58:23.158455898Z" level=info msg="Container to stop \"391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 23:58:23.163751 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a-shm.mount: Deactivated successfully.
Jul 14 23:58:23.170520 systemd[1]: cri-containerd-3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a.scope: Deactivated successfully.
Jul 14 23:58:23.191810 containerd[1512]: time="2025-07-14T23:58:23.191556514Z" level=info msg="shim disconnected" id=3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a namespace=k8s.io
Jul 14 23:58:23.191810 containerd[1512]: time="2025-07-14T23:58:23.191612762Z" level=warning msg="cleaning up after shim disconnected" id=3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a namespace=k8s.io
Jul 14 23:58:23.191810 containerd[1512]: time="2025-07-14T23:58:23.191621418Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 23:58:23.191810 containerd[1512]: time="2025-07-14T23:58:23.191681873Z" level=info msg="shim disconnected" id=431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b namespace=k8s.io
Jul 14 23:58:23.191810 containerd[1512]: time="2025-07-14T23:58:23.191751457Z" level=warning msg="cleaning up after shim disconnected" id=431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b namespace=k8s.io
Jul 14 23:58:23.191810 containerd[1512]: time="2025-07-14T23:58:23.191761676Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 23:58:23.208314 containerd[1512]: time="2025-07-14T23:58:23.208206823Z" level=info msg="TearDown network for sandbox \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\" successfully"
Jul 14 23:58:23.208314 containerd[1512]: time="2025-07-14T23:58:23.208246469Z" level=info msg="StopPodSandbox for \"3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a\" returns successfully"
Jul 14 23:58:23.209242 containerd[1512]: time="2025-07-14T23:58:23.209205012Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:58:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 14 23:58:23.211133 containerd[1512]: time="2025-07-14T23:58:23.211045011Z" level=info msg="TearDown network for sandbox \"431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b\" successfully"
Jul 14 23:58:23.211133 containerd[1512]: time="2025-07-14T23:58:23.211068656Z" level=info msg="StopPodSandbox for \"431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b\" returns successfully"
Jul 14 23:58:23.364328 kubelet[2602]: I0714 23:58:23.364170 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-cni-path\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.364328 kubelet[2602]: I0714 23:58:23.364219 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-lib-modules\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.364328 kubelet[2602]: I0714 23:58:23.364244 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6llx\" (UniqueName: \"kubernetes.io/projected/f3a1483d-7643-4fb7-bfeb-ede147797f61-kube-api-access-m6llx\") pod \"f3a1483d-7643-4fb7-bfeb-ede147797f61\" (UID: \"f3a1483d-7643-4fb7-bfeb-ede147797f61\") "
Jul 14 23:58:23.364328 kubelet[2602]: I0714 23:58:23.364262 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-cilium-run\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.364328 kubelet[2602]: I0714 23:58:23.364275 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-xtables-lock\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.364328 kubelet[2602]: I0714 23:58:23.364292 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-host-proc-sys-kernel\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.364890 kubelet[2602]: I0714 23:58:23.364309 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-etc-cni-netd\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.364890 kubelet[2602]: I0714 23:58:23.364293 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-cni-path" (OuterVolumeSpecName: "cni-path") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 23:58:23.364890 kubelet[2602]: I0714 23:58:23.364327 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf58066f-1fec-434e-8e1c-e19982e73c96-cilium-config-path\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.364890 kubelet[2602]: I0714 23:58:23.364395 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf58066f-1fec-434e-8e1c-e19982e73c96-hubble-tls\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.364890 kubelet[2602]: I0714 23:58:23.364414 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-host-proc-sys-net\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.364890 kubelet[2602]: I0714 23:58:23.364434 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf58066f-1fec-434e-8e1c-e19982e73c96-clustermesh-secrets\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.365072 kubelet[2602]: I0714 23:58:23.364449 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-cilium-cgroup\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.365072 kubelet[2602]: I0714 23:58:23.364435 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 23:58:23.365072 kubelet[2602]: I0714 23:58:23.364488 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-hostproc" (OuterVolumeSpecName: "hostproc") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 23:58:23.365072 kubelet[2602]: I0714 23:58:23.364464 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-hostproc\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.365072 kubelet[2602]: I0714 23:58:23.364544 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-bpf-maps\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.365072 kubelet[2602]: I0714 23:58:23.364569 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5sl6\" (UniqueName: \"kubernetes.io/projected/bf58066f-1fec-434e-8e1c-e19982e73c96-kube-api-access-l5sl6\") pod \"bf58066f-1fec-434e-8e1c-e19982e73c96\" (UID: \"bf58066f-1fec-434e-8e1c-e19982e73c96\") "
Jul 14 23:58:23.365217 kubelet[2602]: I0714 23:58:23.364587 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3a1483d-7643-4fb7-bfeb-ede147797f61-cilium-config-path\") pod \"f3a1483d-7643-4fb7-bfeb-ede147797f61\" (UID: \"f3a1483d-7643-4fb7-bfeb-ede147797f61\") "
Jul 14 23:58:23.365217 kubelet[2602]: I0714 23:58:23.364631 2602 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.365217 kubelet[2602]: I0714 23:58:23.364643 2602 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.365217 kubelet[2602]: I0714 23:58:23.364652 2602 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.365437 kubelet[2602]: I0714 23:58:23.365407 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 23:58:23.365479 kubelet[2602]: I0714 23:58:23.365440 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 23:58:23.365479 kubelet[2602]: I0714 23:58:23.365456 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 23:58:23.365543 kubelet[2602]: I0714 23:58:23.365488 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 23:58:23.366102 kubelet[2602]: I0714 23:58:23.365803 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 23:58:23.368480 kubelet[2602]: I0714 23:58:23.368357 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf58066f-1fec-434e-8e1c-e19982e73c96-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 14 23:58:23.368480 kubelet[2602]: I0714 23:58:23.368396 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 23:58:23.368480 kubelet[2602]: I0714 23:58:23.368453 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf58066f-1fec-434e-8e1c-e19982e73c96-kube-api-access-l5sl6" (OuterVolumeSpecName: "kube-api-access-l5sl6") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "kube-api-access-l5sl6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 14 23:58:23.368666 kubelet[2602]: I0714 23:58:23.368632 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 14 23:58:23.369082 kubelet[2602]: I0714 23:58:23.369064 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf58066f-1fec-434e-8e1c-e19982e73c96-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 14 23:58:23.369528 kubelet[2602]: I0714 23:58:23.369499 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a1483d-7643-4fb7-bfeb-ede147797f61-kube-api-access-m6llx" (OuterVolumeSpecName: "kube-api-access-m6llx") pod "f3a1483d-7643-4fb7-bfeb-ede147797f61" (UID: "f3a1483d-7643-4fb7-bfeb-ede147797f61"). InnerVolumeSpecName "kube-api-access-m6llx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 14 23:58:23.371312 kubelet[2602]: I0714 23:58:23.371290 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf58066f-1fec-434e-8e1c-e19982e73c96-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bf58066f-1fec-434e-8e1c-e19982e73c96" (UID: "bf58066f-1fec-434e-8e1c-e19982e73c96"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 14 23:58:23.371424 kubelet[2602]: I0714 23:58:23.371400 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3a1483d-7643-4fb7-bfeb-ede147797f61-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f3a1483d-7643-4fb7-bfeb-ede147797f61" (UID: "f3a1483d-7643-4fb7-bfeb-ede147797f61"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 14 23:58:23.464848 kubelet[2602]: I0714 23:58:23.464816 2602 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf58066f-1fec-434e-8e1c-e19982e73c96-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.464905 kubelet[2602]: I0714 23:58:23.464857 2602 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.464905 kubelet[2602]: I0714 23:58:23.464867 2602 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.464905 kubelet[2602]: I0714 23:58:23.464876 2602 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5sl6\" (UniqueName: \"kubernetes.io/projected/bf58066f-1fec-434e-8e1c-e19982e73c96-kube-api-access-l5sl6\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.464905 kubelet[2602]: I0714 23:58:23.464884 2602 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3a1483d-7643-4fb7-bfeb-ede147797f61-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.464905 kubelet[2602]: I0714 23:58:23.464893 2602 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6llx\" (UniqueName: \"kubernetes.io/projected/f3a1483d-7643-4fb7-bfeb-ede147797f61-kube-api-access-m6llx\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.464905 kubelet[2602]: I0714 23:58:23.464900 2602 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.464905 kubelet[2602]: I0714 23:58:23.464908 2602 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.465101 kubelet[2602]: I0714 23:58:23.464916 2602 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.465101 kubelet[2602]: I0714 23:58:23.464923 2602 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.465101 kubelet[2602]: I0714 23:58:23.464933 2602 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf58066f-1fec-434e-8e1c-e19982e73c96-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.465101 kubelet[2602]: I0714 23:58:23.464940 2602 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf58066f-1fec-434e-8e1c-e19982e73c96-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.465101 kubelet[2602]: I0714 23:58:23.464948 2602 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf58066f-1fec-434e-8e1c-e19982e73c96-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 14 23:58:23.864929 systemd[1]: Removed slice kubepods-besteffort-podf3a1483d_7643_4fb7_bfeb_ede147797f61.slice - libcontainer container kubepods-besteffort-podf3a1483d_7643_4fb7_bfeb_ede147797f61.slice.
Jul 14 23:58:23.866117 systemd[1]: Removed slice kubepods-burstable-podbf58066f_1fec_434e_8e1c_e19982e73c96.slice - libcontainer container kubepods-burstable-podbf58066f_1fec_434e_8e1c_e19982e73c96.slice.
Jul 14 23:58:23.866326 systemd[1]: kubepods-burstable-podbf58066f_1fec_434e_8e1c_e19982e73c96.slice: Consumed 6.741s CPU time, 121.4M memory peak, 440K read from disk, 13.3M written to disk.
Jul 14 23:58:24.054785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-431543f6eb7e81c31fdaf1cf9a39a01bfb5421dfc45710b81ea3afac750d8e8b-rootfs.mount: Deactivated successfully.
Jul 14 23:58:24.054900 systemd[1]: var-lib-kubelet-pods-f3a1483d\x2d7643\x2d4fb7\x2dbfeb\x2dede147797f61-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm6llx.mount: Deactivated successfully.
Jul 14 23:58:24.054982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3514f777f20b75d8d72c5b9b26834b4d2c70245f6740342ac633ac2ce7e9c66a-rootfs.mount: Deactivated successfully.
Jul 14 23:58:24.055096 systemd[1]: var-lib-kubelet-pods-bf58066f\x2d1fec\x2d434e\x2d8e1c\x2de19982e73c96-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl5sl6.mount: Deactivated successfully.
Jul 14 23:58:24.055186 systemd[1]: var-lib-kubelet-pods-bf58066f\x2d1fec\x2d434e\x2d8e1c\x2de19982e73c96-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 14 23:58:24.055274 systemd[1]: var-lib-kubelet-pods-bf58066f\x2d1fec\x2d434e\x2d8e1c\x2de19982e73c96-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 14 23:58:24.064878 kubelet[2602]: I0714 23:58:24.064795 2602 scope.go:117] "RemoveContainer" containerID="8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc"
Jul 14 23:58:24.071121 containerd[1512]: time="2025-07-14T23:58:24.071077926Z" level=info msg="RemoveContainer for \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\""
Jul 14 23:58:24.079035 containerd[1512]: time="2025-07-14T23:58:24.078100357Z" level=info msg="RemoveContainer for \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\" returns successfully"
Jul 14 23:58:24.079886 kubelet[2602]: I0714 23:58:24.079844 2602 scope.go:117] "RemoveContainer" containerID="8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc"
Jul 14 23:58:24.081058 containerd[1512]: time="2025-07-14T23:58:24.081006843Z" level=error msg="ContainerStatus for \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\": not found"
Jul 14 23:58:24.081174 kubelet[2602]: E0714 23:58:24.081155 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\": not found" containerID="8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc"
Jul 14 23:58:24.081286 kubelet[2602]: I0714 23:58:24.081197 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc"} err="failed to get container status \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\": rpc error: code = NotFound desc = an error occurred when try to find container \"8983fc7c839a49f6559eb9a850911497e236d7a98cd45b187630a3018f57ebfc\": not found"
Jul 14 23:58:24.081286 kubelet[2602]: I0714 23:58:24.081272 2602 scope.go:117] "RemoveContainer" containerID="b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026"
Jul 14 23:58:24.082266 containerd[1512]: time="2025-07-14T23:58:24.082240511Z" level=info msg="RemoveContainer for \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\""
Jul 14 23:58:24.085873 containerd[1512]: time="2025-07-14T23:58:24.085838327Z" level=info msg="RemoveContainer for \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\" returns successfully"
Jul 14 23:58:24.086232 kubelet[2602]: I0714 23:58:24.085975 2602 scope.go:117] "RemoveContainer" containerID="391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d"
Jul 14 23:58:24.087086 containerd[1512]: time="2025-07-14T23:58:24.087058428Z" level=info msg="RemoveContainer for \"391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d\""
Jul 14 23:58:24.091474 containerd[1512]: time="2025-07-14T23:58:24.091440054Z" level=info msg="RemoveContainer for \"391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d\" returns successfully"
Jul 14 23:58:24.091681 kubelet[2602]: I0714 23:58:24.091655 2602 scope.go:117] "RemoveContainer" containerID="9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440"
Jul 14 23:58:24.092579 containerd[1512]: time="2025-07-14T23:58:24.092541709Z" level=info msg="RemoveContainer for \"9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440\""
Jul 14 23:58:24.095645 containerd[1512]: time="2025-07-14T23:58:24.095616737Z" level=info msg="RemoveContainer for \"9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440\" returns successfully"
Jul 14 23:58:24.095761 kubelet[2602]: I0714 23:58:24.095728 2602 scope.go:117] "RemoveContainer" containerID="731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942"
Jul 14 23:58:24.096471 containerd[1512]: time="2025-07-14T23:58:24.096444248Z" level=info msg="RemoveContainer for \"731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942\""
Jul 14 23:58:24.099709 containerd[1512]: time="2025-07-14T23:58:24.099674101Z" level=info msg="RemoveContainer for \"731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942\" returns successfully"
Jul 14 23:58:24.099808 kubelet[2602]: I0714 23:58:24.099785 2602 scope.go:117] "RemoveContainer" containerID="6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5"
Jul 14 23:58:24.100472 containerd[1512]: time="2025-07-14T23:58:24.100442701Z" level=info msg="RemoveContainer for \"6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5\""
Jul 14 23:58:24.103477 containerd[1512]: time="2025-07-14T23:58:24.103445981Z" level=info msg="RemoveContainer for \"6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5\" returns successfully"
Jul 14 23:58:24.103580 kubelet[2602]: I0714 23:58:24.103562 2602 scope.go:117] "RemoveContainer" containerID="b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026"
Jul 14 23:58:24.103711 containerd[1512]: time="2025-07-14T23:58:24.103681500Z" level=error msg="ContainerStatus for \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\": not found"
Jul 14 23:58:24.103816 kubelet[2602]: E0714 23:58:24.103792 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\": not found" containerID="b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026"
Jul 14 23:58:24.103859 kubelet[2602]: I0714 23:58:24.103820 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026"} err="failed to get container status \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\": rpc error: code = NotFound desc = an error occurred when try to find container \"b38fb4507788c3a965247691cee45196a3e149263ba9989ede79336c4ddc9026\": not found"
Jul 14 23:58:24.103859 kubelet[2602]: I0714 23:58:24.103843 2602 scope.go:117] "RemoveContainer" containerID="391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d"
Jul 14 23:58:24.103986 containerd[1512]: time="2025-07-14T23:58:24.103961365Z" level=error msg="ContainerStatus for \"391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d\": not found"
Jul 14 23:58:24.104084 kubelet[2602]: E0714 23:58:24.104064 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d\": not found" containerID="391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d"
Jul 14 23:58:24.104122 kubelet[2602]: I0714 23:58:24.104085 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d"} err="failed to get container status \"391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d\": rpc error: code = NotFound desc = an error occurred when try to find container \"391d967b96ffa2b34e379110d40d84de146fc21d19be904fd68f03d518ce929d\": not found"
Jul 14 23:58:24.104122 kubelet[2602]: I0714 23:58:24.104099 2602 scope.go:117] "RemoveContainer" containerID="9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440"
Jul 14 23:58:24.104260 containerd[1512]: time="2025-07-14T23:58:24.104223095Z" level=error msg="ContainerStatus for \"9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440\": not found"
Jul 14 23:58:24.104340 kubelet[2602]: E0714 23:58:24.104320 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440\": not found" containerID="9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440"
Jul 14 23:58:24.104369 kubelet[2602]: I0714 23:58:24.104341 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440"} err="failed to get container status \"9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440\": rpc error: code = NotFound desc = an error occurred when try to find container \"9aa1211fa8d2d97409920bb49ae7fa0c28440bfb7220b7e1c2cba4825ec02440\": not found"
Jul 14 23:58:24.104369 kubelet[2602]: I0714 23:58:24.104354 2602 scope.go:117] "RemoveContainer" containerID="731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942"
Jul 14 23:58:24.104515 containerd[1512]: time="2025-07-14T23:58:24.104494614Z" level=error msg="ContainerStatus for \"731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942\": not found"
Jul 14 23:58:24.104616 kubelet[2602]: E0714 23:58:24.104590 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942\": not found" containerID="731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942"
Jul 14 23:58:24.104653 kubelet[2602]: I0714 23:58:24.104618 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942"} err="failed to get container status \"731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942\": rpc error: code = NotFound desc = an error occurred when try to find container \"731a3ff082766883fedfdd70065fdd5896391c9ed074e031d1d84631c9698942\": not found"
Jul 14 23:58:24.104653 kubelet[2602]: I0714 23:58:24.104634 2602 scope.go:117] "RemoveContainer" containerID="6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5"
Jul 14 23:58:24.104761 containerd[1512]: time="2025-07-14T23:58:24.104738491Z" level=error msg="ContainerStatus for \"6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5\": not found"
Jul 14 23:58:24.104838 kubelet[2602]: E0714 23:58:24.104819 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5\": not found" containerID="6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5"
Jul 14 23:58:24.104877 kubelet[2602]: I0714 23:58:24.104838 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5"} err="failed to get container status \"6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ae375afe8d033b9ea2311f08cc8bf0b3ab3f798749a8edcfa600bb3a19419f5\": not found"
Jul 14 23:58:25.013668 sshd[4278]: Connection closed by 10.0.0.1 port 57200
Jul 14 23:58:25.014359 sshd-session[4275]: pam_unix(sshd:session): session closed for user core
Jul 14 23:58:25.031065 systemd[1]: sshd@23-10.0.0.18:22-10.0.0.1:57200.service: Deactivated successfully.
Jul 14 23:58:25.033076 systemd[1]: session-24.scope: Deactivated successfully.
Jul 14 23:58:25.034420 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit.
Jul 14 23:58:25.035686 systemd[1]: Started sshd@24-10.0.0.18:22-10.0.0.1:57212.service - OpenSSH per-connection server daemon (10.0.0.1:57212).
Jul 14 23:58:25.036354 systemd-logind[1492]: Removed session 24.
Jul 14 23:58:25.075349 sshd[4435]: Accepted publickey for core from 10.0.0.1 port 57212 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:58:25.077041 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:58:25.081499 systemd-logind[1492]: New session 25 of user core.
Jul 14 23:58:25.091153 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 14 23:58:25.482419 sshd[4438]: Connection closed by 10.0.0.1 port 57212
Jul 14 23:58:25.484081 sshd-session[4435]: pam_unix(sshd:session): session closed for user core
Jul 14 23:58:25.494747 kubelet[2602]: E0714 23:58:25.494667 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf58066f-1fec-434e-8e1c-e19982e73c96" containerName="apply-sysctl-overwrites"
Jul 14 23:58:25.494747 kubelet[2602]: E0714 23:58:25.494699 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf58066f-1fec-434e-8e1c-e19982e73c96" containerName="mount-bpf-fs"
Jul 14 23:58:25.494747 kubelet[2602]: E0714 23:58:25.494706 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf58066f-1fec-434e-8e1c-e19982e73c96" containerName="clean-cilium-state"
Jul 14 23:58:25.494747 kubelet[2602]: E0714 23:58:25.494714 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf58066f-1fec-434e-8e1c-e19982e73c96" containerName="mount-cgroup"
Jul 14 23:58:25.494747 kubelet[2602]: E0714 23:58:25.494721 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f3a1483d-7643-4fb7-bfeb-ede147797f61" containerName="cilium-operator"
Jul 14 23:58:25.494747 kubelet[2602]: E0714 23:58:25.494727 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf58066f-1fec-434e-8e1c-e19982e73c96" containerName="cilium-agent"
Jul 14 23:58:25.494747 kubelet[2602]: I0714 23:58:25.494749 2602 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf58066f-1fec-434e-8e1c-e19982e73c96" containerName="cilium-agent"
Jul 14 23:58:25.495264 kubelet[2602]: I0714 23:58:25.494757 2602 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a1483d-7643-4fb7-bfeb-ede147797f61" containerName="cilium-operator"
Jul 14 23:58:25.498352 systemd[1]: sshd@24-10.0.0.18:22-10.0.0.1:57212.service: Deactivated successfully.
Jul 14 23:58:25.503075 systemd[1]: session-25.scope: Deactivated successfully.
Jul 14 23:58:25.506308 systemd-logind[1492]: Session 25 logged out. Waiting for processes to exit.
Jul 14 23:58:25.527720 systemd[1]: Started sshd@25-10.0.0.18:22-10.0.0.1:57224.service - OpenSSH per-connection server daemon (10.0.0.1:57224).
Jul 14 23:58:25.529762 systemd-logind[1492]: Removed session 25.
Jul 14 23:58:25.534241 systemd[1]: Created slice kubepods-burstable-pod51af879e_b915_4547_9958_4917d1c4105d.slice - libcontainer container kubepods-burstable-pod51af879e_b915_4547_9958_4917d1c4105d.slice.
Jul 14 23:58:25.561357 sshd[4450]: Accepted publickey for core from 10.0.0.1 port 57224 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:58:25.562744 sshd-session[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:58:25.566660 systemd-logind[1492]: New session 26 of user core.
Jul 14 23:58:25.579126 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 14 23:58:25.629169 sshd[4453]: Connection closed by 10.0.0.1 port 57224
Jul 14 23:58:25.629482 sshd-session[4450]: pam_unix(sshd:session): session closed for user core
Jul 14 23:58:25.641809 systemd[1]: sshd@25-10.0.0.18:22-10.0.0.1:57224.service: Deactivated successfully.
Jul 14 23:58:25.643811 systemd[1]: session-26.scope: Deactivated successfully.
Jul 14 23:58:25.645346 systemd-logind[1492]: Session 26 logged out. Waiting for processes to exit.
Jul 14 23:58:25.652259 systemd[1]: Started sshd@26-10.0.0.18:22-10.0.0.1:57236.service - OpenSSH per-connection server daemon (10.0.0.1:57236).
Jul 14 23:58:25.653124 systemd-logind[1492]: Removed session 26.
Jul 14 23:58:25.675182 kubelet[2602]: I0714 23:58:25.675141 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/51af879e-b915-4547-9958-4917d1c4105d-cilium-ipsec-secrets\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675258 kubelet[2602]: I0714 23:58:25.675185 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51af879e-b915-4547-9958-4917d1c4105d-cilium-run\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675258 kubelet[2602]: I0714 23:58:25.675215 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51af879e-b915-4547-9958-4917d1c4105d-hostproc\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675258 kubelet[2602]: I0714 23:58:25.675240 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51af879e-b915-4547-9958-4917d1c4105d-cilium-cgroup\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675348 kubelet[2602]: I0714 23:58:25.675266 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51af879e-b915-4547-9958-4917d1c4105d-host-proc-sys-kernel\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675348 kubelet[2602]: I0714 23:58:25.675287 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjl4\" (UniqueName: \"kubernetes.io/projected/51af879e-b915-4547-9958-4917d1c4105d-kube-api-access-wrjl4\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675348 kubelet[2602]: I0714 23:58:25.675310 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51af879e-b915-4547-9958-4917d1c4105d-cilium-config-path\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675409 kubelet[2602]: I0714 23:58:25.675365 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51af879e-b915-4547-9958-4917d1c4105d-etc-cni-netd\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675435 kubelet[2602]: I0714 23:58:25.675405 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51af879e-b915-4547-9958-4917d1c4105d-xtables-lock\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675456 kubelet[2602]: I0714 23:58:25.675436 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51af879e-b915-4547-9958-4917d1c4105d-host-proc-sys-net\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675478 kubelet[2602]: I0714 23:58:25.675453 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51af879e-b915-4547-9958-4917d1c4105d-clustermesh-secrets\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675478 kubelet[2602]: I0714 23:58:25.675471 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51af879e-b915-4547-9958-4917d1c4105d-hubble-tls\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675542 kubelet[2602]: I0714 23:58:25.675491 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51af879e-b915-4547-9958-4917d1c4105d-bpf-maps\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675542 kubelet[2602]: I0714 23:58:25.675523 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51af879e-b915-4547-9958-4917d1c4105d-cni-path\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.675587 kubelet[2602]: I0714 23:58:25.675546 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51af879e-b915-4547-9958-4917d1c4105d-lib-modules\") pod \"cilium-5xn7w\" (UID: \"51af879e-b915-4547-9958-4917d1c4105d\") " pod="kube-system/cilium-5xn7w"
Jul 14 23:58:25.685580 sshd[4459]: Accepted publickey for core from 10.0.0.1 port 57236 ssh2: RSA SHA256:kxjHYs60kUl2l1qGxlWdltpVh6qgPEBQ2zCfME9ibHM
Jul 14 23:58:25.686990 sshd-session[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 23:58:25.691129 systemd-logind[1492]: New session 27 of user core.
Jul 14 23:58:25.701140 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 14 23:58:25.836877 kubelet[2602]: E0714 23:58:25.836832 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:58:25.837414 containerd[1512]: time="2025-07-14T23:58:25.837370827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5xn7w,Uid:51af879e-b915-4547-9958-4917d1c4105d,Namespace:kube-system,Attempt:0,}"
Jul 14 23:58:25.857602 containerd[1512]: time="2025-07-14T23:58:25.857494000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 23:58:25.857602 containerd[1512]: time="2025-07-14T23:58:25.857555046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 23:58:25.857602 containerd[1512]: time="2025-07-14T23:58:25.857568982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 23:58:25.857799 containerd[1512]: time="2025-07-14T23:58:25.857644867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 23:58:25.860752 kubelet[2602]: I0714 23:58:25.860703 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf58066f-1fec-434e-8e1c-e19982e73c96" path="/var/lib/kubelet/pods/bf58066f-1fec-434e-8e1c-e19982e73c96/volumes"
Jul 14 23:58:25.861566 kubelet[2602]: I0714 23:58:25.861536 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a1483d-7643-4fb7-bfeb-ede147797f61" path="/var/lib/kubelet/pods/f3a1483d-7643-4fb7-bfeb-ede147797f61/volumes"
Jul 14 23:58:25.875174 systemd[1]: Started cri-containerd-6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595.scope - libcontainer container 6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595.
Jul 14 23:58:25.895533 containerd[1512]: time="2025-07-14T23:58:25.895488082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5xn7w,Uid:51af879e-b915-4547-9958-4917d1c4105d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595\""
Jul 14 23:58:25.896461 kubelet[2602]: E0714 23:58:25.896440 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:58:25.898123 containerd[1512]: time="2025-07-14T23:58:25.898079833Z" level=info msg="CreateContainer within sandbox \"6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 14 23:58:25.912298 containerd[1512]: time="2025-07-14T23:58:25.912252950Z" level=info msg="CreateContainer within sandbox \"6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"43ddfbe6bd569240dd6a88d23bc30bc7a2b24bdf08f32845dd093555dabed841\""
Jul 14 23:58:25.912822 containerd[1512]: time="2025-07-14T23:58:25.912795305Z" level=info msg="StartContainer for \"43ddfbe6bd569240dd6a88d23bc30bc7a2b24bdf08f32845dd093555dabed841\""
Jul 14 23:58:25.938251 systemd[1]: Started cri-containerd-43ddfbe6bd569240dd6a88d23bc30bc7a2b24bdf08f32845dd093555dabed841.scope - libcontainer container 43ddfbe6bd569240dd6a88d23bc30bc7a2b24bdf08f32845dd093555dabed841.
Jul 14 23:58:25.965640 containerd[1512]: time="2025-07-14T23:58:25.965598089Z" level=info msg="StartContainer for \"43ddfbe6bd569240dd6a88d23bc30bc7a2b24bdf08f32845dd093555dabed841\" returns successfully"
Jul 14 23:58:25.974431 systemd[1]: cri-containerd-43ddfbe6bd569240dd6a88d23bc30bc7a2b24bdf08f32845dd093555dabed841.scope: Deactivated successfully.
Jul 14 23:58:26.005601 containerd[1512]: time="2025-07-14T23:58:26.005544498Z" level=info msg="shim disconnected" id=43ddfbe6bd569240dd6a88d23bc30bc7a2b24bdf08f32845dd093555dabed841 namespace=k8s.io
Jul 14 23:58:26.005601 containerd[1512]: time="2025-07-14T23:58:26.005594223Z" level=warning msg="cleaning up after shim disconnected" id=43ddfbe6bd569240dd6a88d23bc30bc7a2b24bdf08f32845dd093555dabed841 namespace=k8s.io
Jul 14 23:58:26.005601 containerd[1512]: time="2025-07-14T23:58:26.005602199Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 23:58:26.072258 kubelet[2602]: E0714 23:58:26.072232 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:58:26.078928 containerd[1512]: time="2025-07-14T23:58:26.078887005Z" level=info msg="CreateContainer within sandbox \"6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 14 23:58:26.098875 containerd[1512]: time="2025-07-14T23:58:26.098762026Z" level=info msg="CreateContainer within sandbox \"6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0fd01896b03c1125c89ba14e29357bb49877bc3b3ab1aacb33da4bcc6589e4d7\""
Jul 14 23:58:26.099542 containerd[1512]: time="2025-07-14T23:58:26.099178661Z" level=info msg="StartContainer for \"0fd01896b03c1125c89ba14e29357bb49877bc3b3ab1aacb33da4bcc6589e4d7\""
Jul 14 23:58:26.131147 systemd[1]: Started cri-containerd-0fd01896b03c1125c89ba14e29357bb49877bc3b3ab1aacb33da4bcc6589e4d7.scope - libcontainer container 0fd01896b03c1125c89ba14e29357bb49877bc3b3ab1aacb33da4bcc6589e4d7.
Jul 14 23:58:26.154805 containerd[1512]: time="2025-07-14T23:58:26.154750395Z" level=info msg="StartContainer for \"0fd01896b03c1125c89ba14e29357bb49877bc3b3ab1aacb33da4bcc6589e4d7\" returns successfully"
Jul 14 23:58:26.162776 systemd[1]: cri-containerd-0fd01896b03c1125c89ba14e29357bb49877bc3b3ab1aacb33da4bcc6589e4d7.scope: Deactivated successfully.
Jul 14 23:58:26.184004 containerd[1512]: time="2025-07-14T23:58:26.183944541Z" level=info msg="shim disconnected" id=0fd01896b03c1125c89ba14e29357bb49877bc3b3ab1aacb33da4bcc6589e4d7 namespace=k8s.io
Jul 14 23:58:26.184004 containerd[1512]: time="2025-07-14T23:58:26.183996711Z" level=warning msg="cleaning up after shim disconnected" id=0fd01896b03c1125c89ba14e29357bb49877bc3b3ab1aacb33da4bcc6589e4d7 namespace=k8s.io
Jul 14 23:58:26.184004 containerd[1512]: time="2025-07-14T23:58:26.184005999Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 23:58:27.075499 kubelet[2602]: E0714 23:58:27.075465 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 23:58:27.077550 containerd[1512]: time="2025-07-14T23:58:27.077491527Z" level=info msg="CreateContainer within sandbox \"6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 23:58:27.104880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228519951.mount: Deactivated successfully.
Jul 14 23:58:27.143750 containerd[1512]: time="2025-07-14T23:58:27.143686900Z" level=info msg="CreateContainer within sandbox \"6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"93e0c17bfb3b20f28457222cb4ffb3a27d903577341133827c1927256fff95a7\""
Jul 14 23:58:27.144308 containerd[1512]: time="2025-07-14T23:58:27.144276976Z" level=info msg="StartContainer for \"93e0c17bfb3b20f28457222cb4ffb3a27d903577341133827c1927256fff95a7\""
Jul 14 23:58:27.171145 systemd[1]: Started cri-containerd-93e0c17bfb3b20f28457222cb4ffb3a27d903577341133827c1927256fff95a7.scope - libcontainer container 93e0c17bfb3b20f28457222cb4ffb3a27d903577341133827c1927256fff95a7.
Jul 14 23:58:27.201841 containerd[1512]: time="2025-07-14T23:58:27.201803533Z" level=info msg="StartContainer for \"93e0c17bfb3b20f28457222cb4ffb3a27d903577341133827c1927256fff95a7\" returns successfully"
Jul 14 23:58:27.203706 systemd[1]: cri-containerd-93e0c17bfb3b20f28457222cb4ffb3a27d903577341133827c1927256fff95a7.scope: Deactivated successfully.
Jul 14 23:58:27.227566 containerd[1512]: time="2025-07-14T23:58:27.227499117Z" level=info msg="shim disconnected" id=93e0c17bfb3b20f28457222cb4ffb3a27d903577341133827c1927256fff95a7 namespace=k8s.io
Jul 14 23:58:27.227566 containerd[1512]: time="2025-07-14T23:58:27.227559502Z" level=warning msg="cleaning up after shim disconnected" id=93e0c17bfb3b20f28457222cb4ffb3a27d903577341133827c1927256fff95a7 namespace=k8s.io
Jul 14 23:58:27.227566 containerd[1512]: time="2025-07-14T23:58:27.227568519Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 23:58:27.781800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93e0c17bfb3b20f28457222cb4ffb3a27d903577341133827c1927256fff95a7-rootfs.mount: Deactivated successfully.
Jul 14 23:58:27.916003 kubelet[2602]: E0714 23:58:27.915952 2602 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 23:58:28.079357 kubelet[2602]: E0714 23:58:28.079314 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:58:28.081002 containerd[1512]: time="2025-07-14T23:58:28.080834725Z" level=info msg="CreateContainer within sandbox \"6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 23:58:28.109355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435186224.mount: Deactivated successfully. Jul 14 23:58:28.111969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount844527586.mount: Deactivated successfully. Jul 14 23:58:28.112777 containerd[1512]: time="2025-07-14T23:58:28.112730757Z" level=info msg="CreateContainer within sandbox \"6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cabcb56cfed66e6ecddb9df087cc2468a07f759681303e681bb2ee8c9d549322\"" Jul 14 23:58:28.113486 containerd[1512]: time="2025-07-14T23:58:28.113396176Z" level=info msg="StartContainer for \"cabcb56cfed66e6ecddb9df087cc2468a07f759681303e681bb2ee8c9d549322\"" Jul 14 23:58:28.143143 systemd[1]: Started cri-containerd-cabcb56cfed66e6ecddb9df087cc2468a07f759681303e681bb2ee8c9d549322.scope - libcontainer container cabcb56cfed66e6ecddb9df087cc2468a07f759681303e681bb2ee8c9d549322. Jul 14 23:58:28.168314 systemd[1]: cri-containerd-cabcb56cfed66e6ecddb9df087cc2468a07f759681303e681bb2ee8c9d549322.scope: Deactivated successfully. 
Jul 14 23:58:28.170207 containerd[1512]: time="2025-07-14T23:58:28.170158838Z" level=info msg="StartContainer for \"cabcb56cfed66e6ecddb9df087cc2468a07f759681303e681bb2ee8c9d549322\" returns successfully" Jul 14 23:58:28.191829 containerd[1512]: time="2025-07-14T23:58:28.191760098Z" level=info msg="shim disconnected" id=cabcb56cfed66e6ecddb9df087cc2468a07f759681303e681bb2ee8c9d549322 namespace=k8s.io Jul 14 23:58:28.191829 containerd[1512]: time="2025-07-14T23:58:28.191816336Z" level=warning msg="cleaning up after shim disconnected" id=cabcb56cfed66e6ecddb9df087cc2468a07f759681303e681bb2ee8c9d549322 namespace=k8s.io Jul 14 23:58:28.191829 containerd[1512]: time="2025-07-14T23:58:28.191826204Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:58:28.782130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cabcb56cfed66e6ecddb9df087cc2468a07f759681303e681bb2ee8c9d549322-rootfs.mount: Deactivated successfully. Jul 14 23:58:29.082166 kubelet[2602]: E0714 23:58:29.082146 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:58:29.084282 containerd[1512]: time="2025-07-14T23:58:29.084248593Z" level=info msg="CreateContainer within sandbox \"6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 23:58:29.100584 containerd[1512]: time="2025-07-14T23:58:29.100541554Z" level=info msg="CreateContainer within sandbox \"6c440faa4eb8f160f1028bb8e20921f8c177491b42322d3365028bec65d46595\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9925f545b3a9947c91dd4b975a5a16c8012889631768b388aef0bac4e7048dd6\"" Jul 14 23:58:29.101038 containerd[1512]: time="2025-07-14T23:58:29.100991301Z" level=info msg="StartContainer for \"9925f545b3a9947c91dd4b975a5a16c8012889631768b388aef0bac4e7048dd6\"" Jul 14 23:58:29.128137 
systemd[1]: Started cri-containerd-9925f545b3a9947c91dd4b975a5a16c8012889631768b388aef0bac4e7048dd6.scope - libcontainer container 9925f545b3a9947c91dd4b975a5a16c8012889631768b388aef0bac4e7048dd6. Jul 14 23:58:29.154432 containerd[1512]: time="2025-07-14T23:58:29.154387198Z" level=info msg="StartContainer for \"9925f545b3a9947c91dd4b975a5a16c8012889631768b388aef0bac4e7048dd6\" returns successfully" Jul 14 23:58:29.536041 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 14 23:58:29.832783 kubelet[2602]: I0714 23:58:29.832543 2602 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-14T23:58:29Z","lastTransitionTime":"2025-07-14T23:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 14 23:58:30.086364 kubelet[2602]: E0714 23:58:30.086244 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:58:30.099301 kubelet[2602]: I0714 23:58:30.099230 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5xn7w" podStartSLOduration=5.099207543 podStartE2EDuration="5.099207543s" podCreationTimestamp="2025-07-14 23:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:58:30.098965511 +0000 UTC m=+82.344864913" watchObservedRunningTime="2025-07-14 23:58:30.099207543 +0000 UTC m=+82.345106965" Jul 14 23:58:30.858516 kubelet[2602]: E0714 23:58:30.858415 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:58:31.837524 
kubelet[2602]: E0714 23:58:31.837495 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:58:32.584207 systemd-networkd[1414]: lxc_health: Link UP Jul 14 23:58:32.585193 systemd-networkd[1414]: lxc_health: Gained carrier Jul 14 23:58:33.838438 kubelet[2602]: E0714 23:58:33.838396 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:58:33.841250 systemd-networkd[1414]: lxc_health: Gained IPv6LL Jul 14 23:58:34.095944 kubelet[2602]: E0714 23:58:34.095819 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:58:35.097302 kubelet[2602]: E0714 23:58:35.097262 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 23:58:40.395800 sshd[4462]: Connection closed by 10.0.0.1 port 57236 Jul 14 23:58:40.396285 sshd-session[4459]: pam_unix(sshd:session): session closed for user core Jul 14 23:58:40.400172 systemd[1]: sshd@26-10.0.0.18:22-10.0.0.1:57236.service: Deactivated successfully. Jul 14 23:58:40.402362 systemd[1]: session-27.scope: Deactivated successfully. Jul 14 23:58:40.403107 systemd-logind[1492]: Session 27 logged out. Waiting for processes to exit. Jul 14 23:58:40.404029 systemd-logind[1492]: Removed session 27.