Jul 6 23:45:36.965164 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:45:36.965193 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:45:36.965209 kernel: BIOS-provided physical RAM map:
Jul 6 23:45:36.965218 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 6 23:45:36.965226 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 6 23:45:36.965235 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 6 23:45:36.965245 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 6 23:45:36.965254 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 6 23:45:36.965262 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 6 23:45:36.965275 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 6 23:45:36.965284 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:45:36.965293 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 6 23:45:36.965306 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 6 23:45:36.965316 kernel: NX (Execute Disable) protection: active
Jul 6 23:45:36.965327 kernel: APIC: Static calls initialized
Jul 6 23:45:36.965344 kernel: SMBIOS 2.8 present.
Jul 6 23:45:36.965354 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 6 23:45:36.965363 kernel: Hypervisor detected: KVM
Jul 6 23:45:36.965373 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:45:36.965382 kernel: kvm-clock: using sched offset of 3484763015 cycles
Jul 6 23:45:36.965392 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:45:36.965410 kernel: tsc: Detected 2794.748 MHz processor
Jul 6 23:45:36.965420 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:45:36.965431 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:45:36.965441 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 6 23:45:36.965456 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 6 23:45:36.965465 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:45:36.965475 kernel: Using GB pages for direct mapping
Jul 6 23:45:36.965485 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:45:36.965495 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 6 23:45:36.965504 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:45:36.965514 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:45:36.965524 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:45:36.965538 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 6 23:45:36.965547 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:45:36.965557 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:45:36.965567 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:45:36.965577 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:45:36.965586 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 6 23:45:36.965596 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 6 23:45:36.965612 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 6 23:45:36.965626 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 6 23:45:36.965636 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 6 23:45:36.965646 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 6 23:45:36.965656 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 6 23:45:36.965666 kernel: No NUMA configuration found
Jul 6 23:45:36.965677 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 6 23:45:36.965687 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 6 23:45:36.965701 kernel: Zone ranges:
Jul 6 23:45:36.965711 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:45:36.965721 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 6 23:45:36.965732 kernel: Normal empty
Jul 6 23:45:36.965741 kernel: Movable zone start for each node
Jul 6 23:45:36.965751 kernel: Early memory node ranges
Jul 6 23:45:36.965761 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 6 23:45:36.965771 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 6 23:45:36.965782 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 6 23:45:36.965796 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:45:36.965811 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 6 23:45:36.965821 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 6 23:45:36.965831 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:45:36.965841 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:45:36.965852 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:45:36.965862 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:45:36.965872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:45:36.965882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:45:36.965896 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:45:36.965907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:45:36.965916 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:45:36.965964 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:45:36.965974 kernel: TSC deadline timer available
Jul 6 23:45:36.965983 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 6 23:45:36.965993 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:45:36.966003 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 6 23:45:36.966017 kernel: kvm-guest: setup PV sched yield
Jul 6 23:45:36.966033 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 6 23:45:36.966043 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:45:36.966054 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:45:36.966064 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 6 23:45:36.966075 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 6 23:45:36.966085 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 6 23:45:36.966095 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 6 23:45:36.966105 kernel: kvm-guest: PV spinlocks enabled
Jul 6 23:45:36.966115 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:45:36.966130 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:45:36.966141 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:45:36.966151 kernel: random: crng init done
Jul 6 23:45:36.966162 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:45:36.966172 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:45:36.966182 kernel: Fallback order for Node 0: 0
Jul 6 23:45:36.966192 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 6 23:45:36.966203 kernel: Policy zone: DMA32
Jul 6 23:45:36.966217 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:45:36.966228 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 136900K reserved, 0K cma-reserved)
Jul 6 23:45:36.966238 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:45:36.966248 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:45:36.966259 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:45:36.966269 kernel: Dynamic Preempt: voluntary
Jul 6 23:45:36.966279 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:45:36.966290 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:45:36.966300 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:45:36.966315 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:45:36.966325 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:45:36.966336 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:45:36.966346 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:45:36.966360 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:45:36.966370 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 6 23:45:36.966381 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:45:36.966391 kernel: Console: colour VGA+ 80x25
Jul 6 23:45:36.966410 kernel: printk: console [ttyS0] enabled
Jul 6 23:45:36.966421 kernel: ACPI: Core revision 20230628
Jul 6 23:45:36.966435 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 6 23:45:36.966446 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:45:36.966456 kernel: x2apic enabled
Jul 6 23:45:36.966466 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:45:36.966477 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 6 23:45:36.966487 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 6 23:45:36.966498 kernel: kvm-guest: setup PV IPIs
Jul 6 23:45:36.966522 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:45:36.966533 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 6 23:45:36.966544 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 6 23:45:36.966554 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 6 23:45:36.966569 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 6 23:45:36.966580 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 6 23:45:36.966590 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:45:36.966601 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:45:36.966612 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:45:36.966627 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 6 23:45:36.966637 kernel: RETBleed: Mitigation: untrained return thunk
Jul 6 23:45:36.966653 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:45:36.966664 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:45:36.966675 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 6 23:45:36.966686 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 6 23:45:36.966697 kernel: x86/bugs: return thunk changed
Jul 6 23:45:36.966708 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 6 23:45:36.966723 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:45:36.966734 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:45:36.966745 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:45:36.966756 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:45:36.966767 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 6 23:45:36.966777 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:45:36.966788 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:45:36.966799 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:45:36.966809 kernel: landlock: Up and running.
Jul 6 23:45:36.966824 kernel: SELinux: Initializing.
Jul 6 23:45:36.966835 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:45:36.966845 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:45:36.966856 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 6 23:45:36.966867 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:45:36.966878 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:45:36.966888 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:45:36.966899 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 6 23:45:36.966914 kernel: ... version: 0
Jul 6 23:45:36.966945 kernel: ... bit width: 48
Jul 6 23:45:36.966955 kernel: ... generic registers: 6
Jul 6 23:45:36.966966 kernel: ... value mask: 0000ffffffffffff
Jul 6 23:45:36.966977 kernel: ... max period: 00007fffffffffff
Jul 6 23:45:36.966987 kernel: ... fixed-purpose events: 0
Jul 6 23:45:36.966998 kernel: ... event mask: 000000000000003f
Jul 6 23:45:36.967008 kernel: signal: max sigframe size: 1776
Jul 6 23:45:36.967019 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:45:36.967030 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:45:36.967045 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:45:36.967056 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:45:36.967066 kernel: .... node #0, CPUs: #1 #2 #3
Jul 6 23:45:36.967076 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:45:36.967087 kernel: smpboot: Max logical packages: 1
Jul 6 23:45:36.967097 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 6 23:45:36.967108 kernel: devtmpfs: initialized
Jul 6 23:45:36.967118 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:45:36.967129 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:45:36.967144 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:45:36.967154 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:45:36.967165 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:45:36.967176 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:45:36.967187 kernel: audit: type=2000 audit(1751845535.907:1): state=initialized audit_enabled=0 res=1
Jul 6 23:45:36.967197 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:45:36.967208 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:45:36.967219 kernel: cpuidle: using governor menu
Jul 6 23:45:36.967229 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:45:36.967244 kernel: dca service started, version 1.12.1
Jul 6 23:45:36.967254 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 6 23:45:36.967265 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 6 23:45:36.967276 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:45:36.967287 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:45:36.967298 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:45:36.967308 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:45:36.967319 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:45:36.967330 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:45:36.967344 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:45:36.967354 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:45:36.967365 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:45:36.967376 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:45:36.967386 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:45:36.967397 kernel: ACPI: Interpreter enabled
Jul 6 23:45:36.967531 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 6 23:45:36.967542 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:45:36.967552 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:45:36.967567 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:45:36.967578 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 6 23:45:36.967588 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:45:36.967884 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:45:36.968201 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 6 23:45:36.968365 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 6 23:45:36.968380 kernel: PCI host bridge to bus 0000:00
Jul 6 23:45:36.968598 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:45:36.968769 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:45:36.968938 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:45:36.969093 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 6 23:45:36.969252 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 6 23:45:36.969425 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 6 23:45:36.969587 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:45:36.969824 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 6 23:45:36.970047 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 6 23:45:36.970226 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 6 23:45:36.970392 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 6 23:45:36.970561 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 6 23:45:36.970713 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:45:36.970890 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 6 23:45:36.971092 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 6 23:45:36.971279 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 6 23:45:36.971458 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 6 23:45:36.971650 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:45:36.971815 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 6 23:45:36.972002 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 6 23:45:36.972165 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 6 23:45:36.972357 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:45:36.972534 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 6 23:45:36.972700 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 6 23:45:36.972872 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 6 23:45:36.973059 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 6 23:45:36.973249 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 6 23:45:36.973429 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 6 23:45:36.973622 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 6 23:45:36.973794 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 6 23:45:36.973973 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 6 23:45:36.974160 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 6 23:45:36.974323 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 6 23:45:36.974340 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:45:36.974357 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:45:36.974368 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:45:36.974379 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:45:36.974390 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 6 23:45:36.974410 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 6 23:45:36.974422 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 6 23:45:36.974433 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 6 23:45:36.974444 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 6 23:45:36.974455 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 6 23:45:36.974470 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 6 23:45:36.974481 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 6 23:45:36.974492 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 6 23:45:36.974503 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 6 23:45:36.974514 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 6 23:45:36.974525 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 6 23:45:36.974536 kernel: iommu: Default domain type: Translated
Jul 6 23:45:36.974547 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:45:36.974558 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:45:36.974573 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:45:36.974583 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 6 23:45:36.974594 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 6 23:45:36.974785 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 6 23:45:36.974976 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 6 23:45:36.975139 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:45:36.975154 kernel: vgaarb: loaded
Jul 6 23:45:36.975165 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 6 23:45:36.975182 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 6 23:45:36.975193 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:45:36.975204 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:45:36.975215 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:45:36.975226 kernel: pnp: PnP ACPI init
Jul 6 23:45:36.975456 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 6 23:45:36.975473 kernel: pnp: PnP ACPI: found 6 devices
Jul 6 23:45:36.975484 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:45:36.975500 kernel: NET: Registered PF_INET protocol family
Jul 6 23:45:36.975511 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:45:36.975522 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:45:36.975533 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:45:36.975544 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:45:36.975555 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:45:36.975566 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:45:36.975577 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:45:36.975588 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:45:36.975603 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:45:36.975614 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:45:36.975757 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:45:36.975898 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:45:36.976056 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:45:36.976202 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 6 23:45:36.976343 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 6 23:45:36.976495 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 6 23:45:36.976510 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:45:36.976526 kernel: Initialise system trusted keyrings
Jul 6 23:45:36.976537 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:45:36.976548 kernel: Key type asymmetric registered
Jul 6 23:45:36.976559 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:45:36.976570 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:45:36.976581 kernel: io scheduler mq-deadline registered
Jul 6 23:45:36.976592 kernel: io scheduler kyber registered
Jul 6 23:45:36.976603 kernel: io scheduler bfq registered
Jul 6 23:45:36.976614 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:45:36.976629 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 6 23:45:36.976640 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 6 23:45:36.976651 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 6 23:45:36.976663 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:45:36.976674 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:45:36.976685 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:45:36.976697 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:45:36.976708 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:45:36.976719 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:45:36.976892 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 6 23:45:36.977117 kernel: rtc_cmos 00:04: registered as rtc0
Jul 6 23:45:36.977262 kernel: rtc_cmos 00:04: setting system clock to 2025-07-06T23:45:36 UTC (1751845536)
Jul 6 23:45:36.977414 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 6 23:45:36.977428 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 6 23:45:36.977440 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:45:36.977451 kernel: Segment Routing with IPv6
Jul 6 23:45:36.977463 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:45:36.977479 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:45:36.977490 kernel: Key type dns_resolver registered
Jul 6 23:45:36.977501 kernel: IPI shorthand broadcast: enabled
Jul 6 23:45:36.977513 kernel: sched_clock: Marking stable (979002168, 115498192)->(1121746815, -27246455)
Jul 6 23:45:36.977524 kernel: registered taskstats version 1
Jul 6 23:45:36.977535 kernel: Loading compiled-in X.509 certificates
Jul 6 23:45:36.977546 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:45:36.977557 kernel: Key type .fscrypt registered
Jul 6 23:45:36.977568 kernel: Key type fscrypt-provisioning registered
Jul 6 23:45:36.977582 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:45:36.977593 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:45:36.977604 kernel: ima: No architecture policies found
Jul 6 23:45:36.977615 kernel: clk: Disabling unused clocks
Jul 6 23:45:36.977626 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:45:36.977637 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:45:36.977648 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:45:36.977659 kernel: Run /init as init process
Jul 6 23:45:36.977673 kernel: with arguments:
Jul 6 23:45:36.977684 kernel: /init
Jul 6 23:45:36.977695 kernel: with environment:
Jul 6 23:45:36.977706 kernel: HOME=/
Jul 6 23:45:36.977716 kernel: TERM=linux
Jul 6 23:45:36.977727 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:45:36.977741 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:45:36.977755 systemd[1]: Detected virtualization kvm.
Jul 6 23:45:36.977770 systemd[1]: Detected architecture x86-64.
Jul 6 23:45:36.977781 systemd[1]: Running in initrd.
Jul 6 23:45:36.977792 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:45:36.977804 systemd[1]: Hostname set to .
Jul 6 23:45:36.977816 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:45:36.977828 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:45:36.977839 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:45:36.977851 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:45:36.977867 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:45:36.977879 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:45:36.977904 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:45:36.977919 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:45:36.977947 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:45:36.977964 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:45:36.977984 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:45:36.977999 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:45:36.978011 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:45:36.978037 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:45:36.978054 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:45:36.978066 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:45:36.978101 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:45:36.978139 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:45:36.978152 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:45:36.978167 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:45:36.978179 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:45:36.978192 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:45:36.978204 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:45:36.978217 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:45:36.978229 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:45:36.978242 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:45:36.978262 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:45:36.978283 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:45:36.978320 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:45:36.978353 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:45:36.978366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:45:36.978379 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:45:36.978391 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:45:36.978411 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:45:36.978429 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:45:36.978469 systemd-journald[193]: Collecting audit messages is disabled. Jul 6 23:45:36.978501 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:45:36.978513 systemd-journald[193]: Journal started Jul 6 23:45:36.978545 systemd-journald[193]: Runtime Journal (/run/log/journal/c6379da0756b477f94d24584b99cd3dd) is 6.0M, max 48.4M, 42.3M free. Jul 6 23:45:36.983072 systemd-modules-load[194]: Inserted module 'overlay' Jul 6 23:45:37.012693 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:45:37.012716 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:45:37.012732 kernel: Bridge firewalling registered Jul 6 23:45:37.012582 systemd-modules-load[194]: Inserted module 'br_netfilter' Jul 6 23:45:37.019808 systemd[1]: Started systemd-journald.service - Journal Service. 
Jul 6 23:45:37.020494 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:45:37.023559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:45:37.036300 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:45:37.053100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:45:37.057169 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:45:37.060192 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:45:37.072185 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:45:37.075627 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:45:37.078455 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:45:37.086090 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:45:37.088487 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:45:37.098092 dracut-cmdline[229]: dracut-dracut-053 Jul 6 23:45:37.100969 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:45:37.124278 systemd-resolved[230]: Positive Trust Anchors: Jul 6 23:45:37.124294 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:45:37.124325 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:45:37.126841 systemd-resolved[230]: Defaulting to hostname 'linux'. Jul 6 23:45:37.128073 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:45:37.135583 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:45:37.182956 kernel: SCSI subsystem initialized Jul 6 23:45:37.192945 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:45:37.215963 kernel: iscsi: registered transport (tcp) Jul 6 23:45:37.237950 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:45:37.237983 kernel: QLogic iSCSI HBA Driver Jul 6 23:45:37.286407 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:45:37.301060 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:45:37.330323 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:45:37.330373 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:45:37.331344 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:45:37.373961 kernel: raid6: avx2x4 gen() 17731 MB/s Jul 6 23:45:37.400952 kernel: raid6: avx2x2 gen() 25508 MB/s Jul 6 23:45:37.426208 kernel: raid6: avx2x1 gen() 24123 MB/s Jul 6 23:45:37.426247 kernel: raid6: using algorithm avx2x2 gen() 25508 MB/s Jul 6 23:45:37.447955 kernel: raid6: .... xor() 19883 MB/s, rmw enabled Jul 6 23:45:37.447985 kernel: raid6: using avx2x2 recovery algorithm Jul 6 23:45:37.468961 kernel: xor: automatically using best checksumming function avx Jul 6 23:45:37.638982 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:45:37.654340 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:45:37.666210 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:45:37.680261 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jul 6 23:45:37.685195 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:45:37.717205 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:45:37.732336 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Jul 6 23:45:37.765556 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:45:37.782083 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:45:37.852632 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:45:37.859110 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:45:37.878335 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:45:37.880884 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 6 23:45:37.882330 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:45:37.883893 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:45:37.892950 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 6 23:45:37.894126 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:45:37.900609 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:45:37.900631 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 6 23:45:37.906812 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:45:37.906863 kernel: GPT:9289727 != 19775487 Jul 6 23:45:37.906874 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:45:37.906892 kernel: GPT:9289727 != 19775487 Jul 6 23:45:37.906902 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:45:37.906912 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:45:37.915013 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:45:37.925465 kernel: AVX2 version of gcm_enc/dec engaged. Jul 6 23:45:37.925514 kernel: libata version 3.00 loaded. Jul 6 23:45:37.925526 kernel: AES CTR mode by8 optimization enabled Jul 6 23:45:37.934264 kernel: ahci 0000:00:1f.2: version 3.0 Jul 6 23:45:37.934506 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 6 23:45:37.936960 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 6 23:45:37.937155 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 6 23:45:37.939882 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:45:37.943817 kernel: scsi host0: ahci Jul 6 23:45:37.944969 kernel: scsi host1: ahci Jul 6 23:45:37.945150 kernel: scsi host2: ahci Jul 6 23:45:37.940756 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 6 23:45:37.949936 kernel: scsi host3: ahci Jul 6 23:45:37.950169 kernel: scsi host4: ahci Jul 6 23:45:37.947617 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:45:37.959116 kernel: scsi host5: ahci Jul 6 23:45:37.959480 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 6 23:45:37.959498 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 6 23:45:37.959555 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (460) Jul 6 23:45:37.959579 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 6 23:45:37.959594 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 6 23:45:37.959609 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 6 23:45:37.959624 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (459) Jul 6 23:45:37.959639 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 6 23:45:37.961961 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:45:37.962500 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:45:37.967250 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:45:37.976262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:45:37.988912 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 6 23:45:37.994095 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 6 23:45:38.009462 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jul 6 23:45:38.029183 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 6 23:45:38.034899 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:45:38.036541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:45:38.054102 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:45:38.056086 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:45:38.082734 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:45:38.245888 disk-uuid[556]: Primary Header is updated. Jul 6 23:45:38.245888 disk-uuid[556]: Secondary Entries is updated. Jul 6 23:45:38.245888 disk-uuid[556]: Secondary Header is updated. Jul 6 23:45:38.249954 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:45:38.254949 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:45:38.275958 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 6 23:45:38.276013 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 6 23:45:38.277017 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 6 23:45:38.278982 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 6 23:45:38.280988 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 6 23:45:38.281021 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 6 23:45:38.282949 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 6 23:45:38.282972 kernel: ata3.00: applying bridge limits Jul 6 23:45:38.283946 kernel: ata3.00: configured for UDMA/100 Jul 6 23:45:38.285945 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 6 23:45:38.339961 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 6 23:45:38.340244 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:45:38.354955 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 6 23:45:39.257945 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:45:39.258092 disk-uuid[565]: The operation has completed successfully. Jul 6 23:45:39.365453 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:45:39.365589 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:45:39.370088 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:45:39.375614 sh[592]: Success Jul 6 23:45:39.387955 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 6 23:45:39.424399 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:45:39.441646 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:45:39.444425 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:45:39.457484 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 6 23:45:39.457516 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:45:39.457527 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:45:39.458493 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:45:39.459219 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:45:39.464748 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:45:39.465591 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:45:39.477109 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:45:39.478913 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:45:39.490254 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:45:39.490309 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:45:39.490321 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:45:39.494328 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:45:39.504562 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 6 23:45:39.506964 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:45:39.599246 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:45:39.607138 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:45:39.608501 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:45:39.619326 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:45:39.668212 systemd-networkd[773]: lo: Link UP Jul 6 23:45:39.668225 systemd-networkd[773]: lo: Gained carrier Jul 6 23:45:39.670403 systemd-networkd[773]: Enumeration completed Jul 6 23:45:39.670536 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:45:39.671681 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:45:39.671685 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:45:39.672041 systemd[1]: Reached target network.target - Network. Jul 6 23:45:39.673660 systemd-networkd[773]: eth0: Link UP Jul 6 23:45:39.673664 systemd-networkd[773]: eth0: Gained carrier Jul 6 23:45:39.673672 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 6 23:45:39.732060 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:45:39.748682 ignition[770]: Ignition 2.19.0 Jul 6 23:45:39.748698 ignition[770]: Stage: fetch-offline Jul 6 23:45:39.748744 ignition[770]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:45:39.748754 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:45:39.748893 ignition[770]: parsed url from cmdline: "" Jul 6 23:45:39.748897 ignition[770]: no config URL provided Jul 6 23:45:39.748902 ignition[770]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:45:39.748913 ignition[770]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:45:39.748981 ignition[770]: op(1): [started] loading QEMU firmware config module Jul 6 23:45:39.748987 ignition[770]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 6 23:45:39.769380 ignition[770]: op(1): [finished] loading QEMU firmware config module Jul 6 23:45:39.813572 ignition[770]: parsing config with SHA512: 8e4829c7f402c95b045b518d5b301f4bf68349e3452d40ed3821b5f93adc5ab05736406a453214252af50933a88e1c4aaa0e1332e314734f95572d4bd3ade4ff Jul 6 23:45:39.842264 unknown[770]: fetched base config from "system" Jul 6 23:45:39.842553 unknown[770]: fetched user config from "qemu" Jul 6 23:45:39.843216 ignition[770]: fetch-offline: fetch-offline passed Jul 6 23:45:39.843342 ignition[770]: Ignition finished successfully Jul 6 23:45:39.846174 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:45:39.848645 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:45:39.857080 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 6 23:45:39.877108 ignition[785]: Ignition 2.19.0 Jul 6 23:45:39.877123 ignition[785]: Stage: kargs Jul 6 23:45:39.877308 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:45:39.877321 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:45:39.878265 ignition[785]: kargs: kargs passed Jul 6 23:45:39.878314 ignition[785]: Ignition finished successfully Jul 6 23:45:39.880916 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:45:39.890143 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:45:39.912293 ignition[794]: Ignition 2.19.0 Jul 6 23:45:39.912306 ignition[794]: Stage: disks Jul 6 23:45:39.912515 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:45:39.912527 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:45:39.916158 ignition[794]: disks: disks passed Jul 6 23:45:39.916215 ignition[794]: Ignition finished successfully Jul 6 23:45:39.919738 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:45:39.920979 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:45:39.922706 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:45:39.922906 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:45:39.923395 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:45:39.923716 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:45:39.940186 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:45:39.956110 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 6 23:45:40.039968 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:45:40.052033 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 6 23:45:40.169955 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 6 23:45:40.170940 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:45:40.173093 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:45:40.186995 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:45:40.189428 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:45:40.192022 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 6 23:45:40.192081 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:45:40.206778 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Jul 6 23:45:40.206803 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:45:40.206815 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:45:40.206828 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:45:40.206840 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:45:40.192106 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:45:40.209176 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:45:40.211301 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:45:40.214904 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 6 23:45:40.260792 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:45:40.265602 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:45:40.272095 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:45:40.289635 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:45:40.383524 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:45:40.400064 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:45:40.403708 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:45:40.416955 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:45:40.443749 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:45:40.456576 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:45:40.466089 ignition[926]: INFO : Ignition 2.19.0 Jul 6 23:45:40.466089 ignition[926]: INFO : Stage: mount Jul 6 23:45:40.468044 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:45:40.468044 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:45:40.468044 ignition[926]: INFO : mount: mount passed Jul 6 23:45:40.468044 ignition[926]: INFO : Ignition finished successfully Jul 6 23:45:40.470167 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:45:40.500187 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:45:40.509210 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 6 23:45:40.521957 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939) Jul 6 23:45:40.524309 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:45:40.524351 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:45:40.524363 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:45:40.542954 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:45:40.544664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:45:40.590579 ignition[956]: INFO : Ignition 2.19.0 Jul 6 23:45:40.590579 ignition[956]: INFO : Stage: files Jul 6 23:45:40.592632 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:45:40.592632 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:45:40.592632 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:45:40.596594 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:45:40.596594 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:45:40.596594 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:45:40.596594 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:45:40.596594 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:45:40.596350 unknown[956]: wrote ssh authorized keys file for user: core Jul 6 23:45:40.604690 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:45:40.604690 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 6 23:45:40.653808 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:45:40.823246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:45:40.823246 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:45:40.827161 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 6 23:45:41.321112 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 6 23:45:41.445329 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:45:41.447323 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 6 23:45:41.468093 systemd-networkd[773]: eth0: Gained IPv6LL Jul 6 23:45:42.102692 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 6 23:45:42.756723 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:45:42.756723 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 6 23:45:42.760891 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:45:42.760891 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:45:42.760891 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 6 23:45:42.760891 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 6 23:45:42.760891 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:45:42.760891 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:45:42.760891 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 6 23:45:42.760891 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 6 23:45:42.787390 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:45:42.792310 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:45:42.793849 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 6 23:45:42.793849 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:45:42.793849 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:45:42.793849 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:45:42.793849 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:45:42.793849 ignition[956]: INFO : files: files passed Jul 6 23:45:42.793849 ignition[956]: INFO : Ignition finished successfully Jul 6 23:45:42.795622 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:45:42.810128 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:45:42.812879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:45:42.814781 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:45:42.814891 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:45:42.823342 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Jul 6 23:45:42.826509 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:45:42.826509 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:45:42.830705 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:45:42.829066 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:45:42.830935 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:45:42.843071 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:45:42.868061 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:45:42.868191 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:45:42.870470 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:45:42.872466 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:45:42.874337 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:45:42.876680 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:45:42.897230 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jul 6 23:45:42.899791 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:45:42.914131 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:45:42.916490 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:45:42.917750 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:45:42.919705 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:45:42.919843 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:45:42.922287 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:45:42.923776 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:45:42.925760 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:45:42.927729 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:45:42.929702 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:45:42.932028 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:45:42.933966 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:45:42.936166 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:45:42.938111 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:45:42.940363 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:45:42.942106 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:45:42.942281 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:45:42.944508 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:45:42.945943 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:45:42.947965 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:45:42.948100 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:45:42.950136 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:45:42.950287 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:45:42.952867 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:45:42.953025 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:45:42.954981 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:45:42.956671 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:45:42.959995 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:45:42.961350 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:45:42.963286 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:45:42.965285 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:45:42.965413 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:45:42.967083 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:45:42.967199 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:45:42.969132 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:45:42.969294 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:45:42.971700 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:45:42.971832 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:45:42.981139 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:45:42.982821 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:45:42.983893 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:45:42.984026 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:45:42.986136 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:45:42.986242 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:45:42.991873 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:45:42.992041 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:45:42.998522 ignition[1010]: INFO : Ignition 2.19.0
Jul 6 23:45:42.998522 ignition[1010]: INFO : Stage: umount
Jul 6 23:45:43.000228 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:45:43.000228 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:45:43.000228 ignition[1010]: INFO : umount: umount passed
Jul 6 23:45:43.000228 ignition[1010]: INFO : Ignition finished successfully
Jul 6 23:45:43.001528 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:45:43.001665 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:45:43.004290 systemd[1]: Stopped target network.target - Network.
Jul 6 23:45:43.005164 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:45:43.005234 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:45:43.007044 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:45:43.007098 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:45:43.009082 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:45:43.009136 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:45:43.011016 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:45:43.011068 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:45:43.013070 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:45:43.016825 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:45:43.019180 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:45:43.019759 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:45:43.019891 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:45:43.022007 systemd-networkd[773]: eth0: DHCPv6 lease lost
Jul 6 23:45:43.024353 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:45:43.024469 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:45:43.026748 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:45:43.026908 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:45:43.029504 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:45:43.029588 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:45:43.044056 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:45:43.045187 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:45:43.045278 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:45:43.047485 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:45:43.047542 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:45:43.051246 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:45:43.051334 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:45:43.055399 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:45:43.069283 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:45:43.069454 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:45:43.080807 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:45:43.081072 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:45:43.084490 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:45:43.084550 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:45:43.087584 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:45:43.087633 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:45:43.090473 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:45:43.090533 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:45:43.093487 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:45:43.094388 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:45:43.096467 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:45:43.097410 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:45:43.111134 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:45:43.112295 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:45:43.112373 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:45:43.113526 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 6 23:45:43.113587 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:45:43.113833 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:45:43.113882 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:45:43.114347 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:45:43.114403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:45:43.135532 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:45:43.135666 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:45:43.151094 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:45:43.151227 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:45:43.152307 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:45:43.154740 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:45:43.154799 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:45:43.167057 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:45:43.176415 systemd[1]: Switching root.
Jul 6 23:45:43.213495 systemd-journald[193]: Journal stopped
Jul 6 23:45:44.250210 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:45:44.250295 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:45:44.250327 kernel: SELinux: policy capability open_perms=1
Jul 6 23:45:44.250343 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:45:44.250355 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:45:44.250366 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:45:44.250378 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:45:44.250389 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:45:44.250401 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:45:44.250413 kernel: audit: type=1403 audit(1751845543.494:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:45:44.250431 systemd[1]: Successfully loaded SELinux policy in 43.375ms.
Jul 6 23:45:44.250455 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.418ms.
Jul 6 23:45:44.250468 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:45:44.250481 systemd[1]: Detected virtualization kvm.
Jul 6 23:45:44.250493 systemd[1]: Detected architecture x86-64.
Jul 6 23:45:44.250505 systemd[1]: Detected first boot.
Jul 6 23:45:44.250518 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:45:44.250542 zram_generator::config[1055]: No configuration found.
Jul 6 23:45:44.250555 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:45:44.250571 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:45:44.250583 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:45:44.250596 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:45:44.250609 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:45:44.250621 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:45:44.250633 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:45:44.250646 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:45:44.250658 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:45:44.250671 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:45:44.250686 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:45:44.250698 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:45:44.250710 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:45:44.250722 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:45:44.250734 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:45:44.250747 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:45:44.250759 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:45:44.250771 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:45:44.250784 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 6 23:45:44.250804 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:45:44.250825 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:45:44.250837 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:45:44.250850 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:45:44.250862 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:45:44.250874 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:45:44.250886 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:45:44.250902 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:45:44.250914 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:45:44.250945 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:45:44.250958 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:45:44.250971 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:45:44.250983 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:45:44.250996 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:45:44.251008 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:45:44.251020 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:45:44.251033 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:45:44.251049 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:45:44.251062 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:45:44.251077 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:45:44.251090 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:45:44.251102 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:45:44.251123 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:45:44.251135 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:45:44.251148 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:45:44.251163 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:45:44.251176 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:45:44.251188 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:45:44.251200 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:45:44.251213 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:45:44.251234 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:45:44.251250 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:45:44.251269 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:45:44.251289 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:45:44.251306 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:45:44.251322 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:45:44.251338 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:45:44.251351 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:45:44.251362 kernel: fuse: init (API version 7.39)
Jul 6 23:45:44.251374 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:45:44.251386 kernel: loop: module loaded
Jul 6 23:45:44.251398 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:45:44.251414 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:45:44.251436 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:45:44.251448 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:45:44.251460 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:45:44.251472 systemd[1]: Stopped verity-setup.service.
Jul 6 23:45:44.251485 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:45:44.251497 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:45:44.251509 kernel: ACPI: bus type drm_connector registered
Jul 6 23:45:44.251542 systemd-journald[1125]: Collecting audit messages is disabled.
Jul 6 23:45:44.251568 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:45:44.251581 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:45:44.251593 systemd-journald[1125]: Journal started
Jul 6 23:45:44.251618 systemd-journald[1125]: Runtime Journal (/run/log/journal/c6379da0756b477f94d24584b99cd3dd) is 6.0M, max 48.4M, 42.3M free.
Jul 6 23:45:44.026533 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:45:44.047067 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 6 23:45:44.047562 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:45:44.252987 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:45:44.254317 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:45:44.255496 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:45:44.256679 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:45:44.257899 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:45:44.259356 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:45:44.260860 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:45:44.261096 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:45:44.262575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:45:44.262762 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:45:44.264311 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:45:44.264497 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:45:44.265823 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:45:44.266021 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:45:44.267495 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:45:44.267676 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:45:44.269030 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:45:44.269207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:45:44.270570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:45:44.271980 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:45:44.273496 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:45:44.289821 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:45:44.303042 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:45:44.305373 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:45:44.306476 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:45:44.306504 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:45:44.308525 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 6 23:45:44.310941 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:45:44.314124 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:45:44.315323 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:45:44.318097 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:45:44.325188 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:45:44.326462 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:45:44.328032 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:45:44.329471 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:45:44.335127 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:45:44.338161 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:45:44.340585 systemd-journald[1125]: Time spent on flushing to /var/log/journal/c6379da0756b477f94d24584b99cd3dd is 22.962ms for 954 entries.
Jul 6 23:45:44.340585 systemd-journald[1125]: System Journal (/var/log/journal/c6379da0756b477f94d24584b99cd3dd) is 8.0M, max 195.6M, 187.6M free.
Jul 6 23:45:44.370640 systemd-journald[1125]: Received client request to flush runtime journal.
Jul 6 23:45:44.343098 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:45:44.346875 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:45:44.348192 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:45:44.349668 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:45:44.360843 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:45:44.363567 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:45:44.365956 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:45:44.377176 kernel: loop0: detected capacity change from 0 to 229808
Jul 6 23:45:44.378125 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 6 23:45:44.382175 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:45:44.384659 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:45:44.387061 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:45:44.387912 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jul 6 23:45:44.387946 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jul 6 23:45:44.394756 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:45:44.406727 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:45:44.407009 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:45:44.411054 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:45:44.412580 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 6 23:45:44.415395 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 6 23:45:44.436771 kernel: loop1: detected capacity change from 0 to 142488
Jul 6 23:45:44.436568 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:45:44.446165 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:45:44.464243 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jul 6 23:45:44.464621 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jul 6 23:45:44.471201 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:45:44.480948 kernel: loop2: detected capacity change from 0 to 140768
Jul 6 23:45:44.523125 kernel: loop3: detected capacity change from 0 to 229808
Jul 6 23:45:44.532943 kernel: loop4: detected capacity change from 0 to 142488
Jul 6 23:45:44.543949 kernel: loop5: detected capacity change from 0 to 140768
Jul 6 23:45:44.552060 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 6 23:45:44.552680 (sd-merge)[1199]: Merged extensions into '/usr'.
Jul 6 23:45:44.556960 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:45:44.557157 systemd[1]: Reloading...
Jul 6 23:45:44.617051 zram_generator::config[1224]: No configuration found.
Jul 6 23:45:44.664884 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:45:44.743426 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:45:44.793484 systemd[1]: Reloading finished in 235 ms.
Jul 6 23:45:44.824977 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:45:44.826510 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:45:44.842118 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:45:44.844287 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:45:44.851953 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:45:44.851969 systemd[1]: Reloading...
Jul 6 23:45:44.869115 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:45:44.869510 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:45:44.870546 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:45:44.870852 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Jul 6 23:45:44.870959 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Jul 6 23:45:44.890473 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:45:44.890489 systemd-tmpfiles[1263]: Skipping /boot
Jul 6 23:45:44.904894 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:45:44.905051 systemd-tmpfiles[1263]: Skipping /boot
Jul 6 23:45:44.911948 zram_generator::config[1290]: No configuration found.
Jul 6 23:45:45.027580 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:45:45.077731 systemd[1]: Reloading finished in 225 ms.
Jul 6 23:45:45.095676 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:45:45.108404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:45:45.117500 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 6 23:45:45.120050 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:45:45.122612 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:45:45.127273 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:45:45.134787 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:45:45.140147 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:45:45.143583 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:45:45.143766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:45:45.145081 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:45:45.148613 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:45:45.152119 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:45:45.153270 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:45:45.155116 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:45:45.157360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:45:45.162741 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:45:45.162961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:45:45.163169 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:45:45.163309 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:45:45.164900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:45:45.165145 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:45:45.168316 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:45:45.170397 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:45:45.170572 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:45:45.172594 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:45:45.173297 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:45:45.177886 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
Jul 6 23:45:45.181041 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:45:45.186381 augenrules[1357]: No rules
Jul 6 23:45:45.188433 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 6 23:45:45.196121 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:45:45.196337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:45:45.205131 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:45:45.210165 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:45:45.216169 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:45:45.220788 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:45:45.222552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:45:45.226167 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:45:45.229994 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:45:45.231080 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:45:45.232489 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:45:45.235346 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:45:45.239378 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:45:45.239572 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:45:45.241384 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:45:45.241600 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:45:45.243642 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:45:45.243819 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:45:45.249017 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:45:45.250371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:45:45.250564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:45:45.256542 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:45:45.278156 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:45:45.279249 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:45:45.279320 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:45:45.281360 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:45:45.282480 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:45:45.282632 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 6 23:45:45.291949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1386)
Jul 6 23:45:45.309268 systemd-resolved[1333]: Positive Trust Anchors:
Jul 6 23:45:45.309286 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:45:45.309318 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:45:45.315983 systemd-resolved[1333]: Defaulting to hostname 'linux'.
Jul 6 23:45:45.317915 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:45:45.326430 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:45:45.348789 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:45:45.353744 systemd-networkd[1401]: lo: Link UP
Jul 6 23:45:45.353755 systemd-networkd[1401]: lo: Gained carrier
Jul 6 23:45:45.355646 systemd-networkd[1401]: Enumeration completed
Jul 6 23:45:45.357429 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:45:45.358787 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:45:45.359995 systemd[1]: Reached target network.target - Network.
Jul 6 23:45:45.361581 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:45:45.361596 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:45:45.362979 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 6 23:45:45.363125 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:45:45.363188 systemd-networkd[1401]: eth0: Link UP
Jul 6 23:45:45.363208 systemd-networkd[1401]: eth0: Gained carrier
Jul 6 23:45:45.363223 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:45:45.366969 kernel: ACPI: button: Power Button [PWRF]
Jul 6 23:45:45.375010 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:45:45.378621 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:45:45.381293 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:45:45.381878 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 6 23:45:45.382233 systemd-timesyncd[1403]: Initial clock synchronization to Sun 2025-07-06 23:45:45.678929 UTC.
Jul 6 23:45:45.383124 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:45:45.391241 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 6 23:45:45.391566 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 6 23:45:45.391774 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 6 23:45:45.394953 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 6 23:45:45.422187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:45:45.507954 kernel: mousedev: PS/2 mouse device common for all mice
Jul 6 23:45:45.605628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:45:45.622365 kernel: kvm_amd: TSC scaling supported
Jul 6 23:45:45.622557 kernel: kvm_amd: Nested Virtualization enabled
Jul 6 23:45:45.622583 kernel: kvm_amd: Nested Paging enabled
Jul 6 23:45:45.623318 kernel: kvm_amd: LBR virtualization supported
Jul 6 23:45:45.623372 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 6 23:45:45.624424 kernel: kvm_amd: Virtual GIF supported
Jul 6 23:45:45.647955 kernel: EDAC MC: Ver: 3.0.0
Jul 6 23:45:45.678723 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 6 23:45:45.690338 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 6 23:45:45.700345 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:45:45.730610 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 6 23:45:45.732314 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:45:45.733564 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:45:45.735233 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:45:45.736700 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:45:45.738957 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:45:45.740435 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:45:45.741874 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:45:45.743314 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:45:45.743346 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:45:45.744529 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:45:45.746707 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:45:45.750293 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:45:45.759453 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:45:45.762122 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 6 23:45:45.763890 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:45:45.765146 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:45:45.766205 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:45:45.767232 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:45:45.767261 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:45:45.768435 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:45:45.770755 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:45:45.775212 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:45:45.779720 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:45:45.781005 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:45:45.784951 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:45:45.785625 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:45:45.789076 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:45:45.791762 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:45:45.794370 jq[1434]: false
Jul 6 23:45:45.795202 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:45:45.801605 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:45:45.803779 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:45:45.808474 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:45:45.810224 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:45:45.814708 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:45:45.820726 dbus-daemon[1433]: [system] SELinux support is enabled
Jul 6 23:45:45.821279 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:45:45.826174 extend-filesystems[1435]: Found loop3
Jul 6 23:45:45.827378 extend-filesystems[1435]: Found loop4
Jul 6 23:45:45.827378 extend-filesystems[1435]: Found loop5
Jul 6 23:45:45.827378 extend-filesystems[1435]: Found sr0
Jul 6 23:45:45.827378 extend-filesystems[1435]: Found vda
Jul 6 23:45:45.827378 extend-filesystems[1435]: Found vda1
Jul 6 23:45:45.827378 extend-filesystems[1435]: Found vda2
Jul 6 23:45:45.827378 extend-filesystems[1435]: Found vda3
Jul 6 23:45:45.827378 extend-filesystems[1435]: Found usr
Jul 6 23:45:45.827378 extend-filesystems[1435]: Found vda4
Jul 6 23:45:45.827378 extend-filesystems[1435]: Found vda6
Jul 6 23:45:45.827378 extend-filesystems[1435]: Found vda7
Jul 6 23:45:45.827378 extend-filesystems[1435]: Found vda9
Jul 6 23:45:45.827378 extend-filesystems[1435]: Checking size of /dev/vda9
Jul 6 23:45:45.829095 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:45:45.876686 update_engine[1442]: I20250706 23:45:45.852756 1442 main.cc:92] Flatcar Update Engine starting
Jul 6 23:45:45.876686 update_engine[1442]: I20250706 23:45:45.858412 1442 update_check_scheduler.cc:74] Next update check in 7m54s
Jul 6 23:45:45.877004 extend-filesystems[1435]: Resized partition /dev/vda9
Jul 6 23:45:45.829362 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:45:45.881460 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024)
Jul 6 23:45:45.836387 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:45:45.836713 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:45:45.882471 jq[1443]: true
Jul 6 23:45:45.849172 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:45:45.849223 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:45:45.854098 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:45:45.854119 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:45:45.865241 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 6 23:45:45.866735 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:45:45.869546 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:45:45.869791 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:45:45.874300 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:45:45.888977 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 6 23:45:45.891935 tar[1446]: linux-amd64/LICENSE
Jul 6 23:45:45.893025 tar[1446]: linux-amd64/helm
Jul 6 23:45:45.907485 jq[1462]: true
Jul 6 23:45:45.930709 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 6 23:45:45.930744 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 6 23:45:45.931210 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:45:45.936960 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1376)
Jul 6 23:45:45.937047 systemd-logind[1440]: New seat seat0.
Jul 6 23:45:45.954814 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:45:45.964507 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 6 23:45:46.013619 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 6 23:45:46.013619 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 6 23:45:46.013619 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 6 23:45:46.024549 extend-filesystems[1435]: Resized filesystem in /dev/vda9
Jul 6 23:45:46.020658 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:45:46.021046 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:45:46.030349 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:45:46.034200 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:45:46.039045 bash[1488]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:45:46.041550 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:45:46.044232 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 6 23:45:46.093368 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:45:46.140294 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:45:46.148567 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:45:46.149127 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:45:46.159383 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:45:46.241385 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:45:46.251660 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:45:46.255864 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 6 23:45:46.257744 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:45:46.386778 containerd[1452]: time="2025-07-06T23:45:46.386568775Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 6 23:45:46.419902 containerd[1452]: time="2025-07-06T23:45:46.419726387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:45:46.426264 containerd[1452]: time="2025-07-06T23:45:46.426143944Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:45:46.426264 containerd[1452]: time="2025-07-06T23:45:46.426201790Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 6 23:45:46.426264 containerd[1452]: time="2025-07-06T23:45:46.426222177Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 6 23:45:46.426543 containerd[1452]: time="2025-07-06T23:45:46.426505497Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 6 23:45:46.426543 containerd[1452]: time="2025-07-06T23:45:46.426532254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 6 23:45:46.426738 containerd[1452]: time="2025-07-06T23:45:46.426619922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:45:46.426738 containerd[1452]: time="2025-07-06T23:45:46.426638667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:45:46.426942 containerd[1452]: time="2025-07-06T23:45:46.426896603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:45:46.426942 containerd[1452]: time="2025-07-06T23:45:46.426918620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 6 23:45:46.426942 containerd[1452]: time="2025-07-06T23:45:46.426933531Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:45:46.427077 containerd[1452]: time="2025-07-06T23:45:46.426946115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 6 23:45:46.427102 containerd[1452]: time="2025-07-06T23:45:46.427086133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:45:46.427409 containerd[1452]: time="2025-07-06T23:45:46.427375407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:45:46.427603 containerd[1452]: time="2025-07-06T23:45:46.427534295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:45:46.427603 containerd[1452]: time="2025-07-06T23:45:46.427554163Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 6 23:45:46.427764 containerd[1452]: time="2025-07-06T23:45:46.427734726Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 6 23:45:46.427890 containerd[1452]: time="2025-07-06T23:45:46.427840859Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:45:46.471113 containerd[1452]: time="2025-07-06T23:45:46.471038937Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 6 23:45:46.471276 containerd[1452]: time="2025-07-06T23:45:46.471181054Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 6 23:45:46.471276 containerd[1452]: time="2025-07-06T23:45:46.471204662Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 6 23:45:46.471276 containerd[1452]: time="2025-07-06T23:45:46.471221412Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 6 23:45:46.471276 containerd[1452]: time="2025-07-06T23:45:46.471238578Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 6 23:45:46.471491 containerd[1452]: time="2025-07-06T23:45:46.471464956Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 6 23:45:46.471962 containerd[1452]: time="2025-07-06T23:45:46.471892917Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 6 23:45:46.472276 containerd[1452]: time="2025-07-06T23:45:46.472236775Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 6 23:45:46.472276 containerd[1452]: time="2025-07-06T23:45:46.472264415Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 6 23:45:46.472276 containerd[1452]: time="2025-07-06T23:45:46.472280178Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 6 23:45:46.472276 containerd[1452]: time="2025-07-06T23:45:46.472295266Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 6 23:45:46.472458 containerd[1452]: time="2025-07-06T23:45:46.472309782Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 6 23:45:46.472458 containerd[1452]: time="2025-07-06T23:45:46.472322729Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 6 23:45:46.472458 containerd[1452]: time="2025-07-06T23:45:46.472338399Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 6 23:45:46.472458 containerd[1452]: time="2025-07-06T23:45:46.472374413Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 6 23:45:46.472458 containerd[1452]: time="2025-07-06T23:45:46.472389335Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 6 23:45:46.472458 containerd[1452]: time="2025-07-06T23:45:46.472403540Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 6 23:45:46.472458 containerd[1452]: time="2025-07-06T23:45:46.472415167Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 6 23:45:46.472458 containerd[1452]: time="2025-07-06T23:45:46.472439346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472458 containerd[1452]: time="2025-07-06T23:45:46.472457561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472642 containerd[1452]: time="2025-07-06T23:45:46.472476909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472642 containerd[1452]: time="2025-07-06T23:45:46.472497431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472642 containerd[1452]: time="2025-07-06T23:45:46.472516603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472642 containerd[1452]: time="2025-07-06T23:45:46.472533343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472642 containerd[1452]: time="2025-07-06T23:45:46.472547350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472642 containerd[1452]: time="2025-07-06T23:45:46.472562604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472642 containerd[1452]: time="2025-07-06T23:45:46.472578835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472642 containerd[1452]: time="2025-07-06T23:45:46.472600167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472642 containerd[1452]: time="2025-07-06T23:45:46.472615848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472642 containerd[1452]: time="2025-07-06T23:45:46.472631184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472642 containerd[1452]: time="2025-07-06T23:45:46.472646615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472878 containerd[1452]: time="2025-07-06T23:45:46.472670878Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 6 23:45:46.472878 containerd[1452]: time="2025-07-06T23:45:46.472699380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472878 containerd[1452]: time="2025-07-06T23:45:46.472712068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472878 containerd[1452]: time="2025-07-06T23:45:46.472724277Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 6 23:45:46.472878 containerd[1452]: time="2025-07-06T23:45:46.472782154Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 6 23:45:46.472878 containerd[1452]: time="2025-07-06T23:45:46.472818866Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 6 23:45:46.472878 containerd[1452]: time="2025-07-06T23:45:46.472834993Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 6 23:45:46.472878 containerd[1452]: time="2025-07-06T23:45:46.472847836Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 6 23:45:46.472878 containerd[1452]: time="2025-07-06T23:45:46.472858850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.472878 containerd[1452]: time="2025-07-06T23:45:46.472871153Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 6 23:45:46.472878 containerd[1452]: time="2025-07-06T23:45:46.472881929Z" level=info msg="NRI interface is disabled by configuration."
Jul 6 23:45:46.473173 containerd[1452]: time="2025-07-06T23:45:46.472913319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 6 23:45:46.474149 containerd[1452]: time="2025-07-06T23:45:46.473518406Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 6 23:45:46.474688 containerd[1452]: time="2025-07-06T23:45:46.474511500Z" level=info msg="Connect containerd service"
Jul 6 23:45:46.474893 containerd[1452]: time="2025-07-06T23:45:46.474827655Z" level=info msg="using legacy CRI server"
Jul 6 23:45:46.474893 containerd[1452]: time="2025-07-06T23:45:46.474890925Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:45:46.475270 containerd[1452]: time="2025-07-06T23:45:46.475223882Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 6 23:45:46.476322 containerd[1452]: time="2025-07-06T23:45:46.476278066Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:45:46.476998 containerd[1452]: time="2025-07-06T23:45:46.476612696Z" level=info msg="Start subscribing containerd event"
Jul 6 23:45:46.476998 containerd[1452]: time="2025-07-06T23:45:46.476688560Z" level=info msg="Start recovering state"
Jul 6 23:45:46.476998 containerd[1452]: time="2025-07-06T23:45:46.476689766Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:45:46.476998 containerd[1452]: time="2025-07-06T23:45:46.476772582Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 6 23:45:46.476998 containerd[1452]: time="2025-07-06T23:45:46.476782619Z" level=info msg="Start event monitor"
Jul 6 23:45:46.476998 containerd[1452]: time="2025-07-06T23:45:46.476808046Z" level=info msg="Start snapshots syncer"
Jul 6 23:45:46.476998 containerd[1452]: time="2025-07-06T23:45:46.476823258Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:45:46.476998 containerd[1452]: time="2025-07-06T23:45:46.476836351Z" level=info msg="Start streaming server"
Jul 6 23:45:46.477283 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:45:46.477544 containerd[1452]: time="2025-07-06T23:45:46.477522226Z" level=info msg="containerd successfully booted in 0.092872s"
Jul 6 23:45:46.703437 tar[1446]: linux-amd64/README.md
Jul 6 23:45:46.725375 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:45:47.168136 systemd-networkd[1401]: eth0: Gained IPv6LL
Jul 6 23:45:47.171821 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:45:47.173680 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:45:47.185216 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 6 23:45:47.196081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:45:47.198412 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:45:47.222264 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 6 23:45:47.222596 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 6 23:45:47.236949 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:45:47.239925 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:45:48.922783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:45:48.924512 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 6 23:45:48.925810 systemd[1]: Startup finished in 1.113s (kernel) + 6.735s (initrd) + 5.472s (userspace) = 13.321s.
Jul 6 23:45:48.945511 (kubelet)[1545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:45:49.678727 kubelet[1545]: E0706 23:45:49.678633 1545 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:45:49.683853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:45:49.684117 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:45:49.684508 systemd[1]: kubelet.service: Consumed 2.250s CPU time.
Jul 6 23:45:50.241663 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 6 23:45:50.243010 systemd[1]: Started sshd@0-10.0.0.18:22-10.0.0.1:34294.service - OpenSSH per-connection server daemon (10.0.0.1:34294).
Jul 6 23:45:50.285943 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 34294 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:45:50.287996 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:45:50.297787 systemd-logind[1440]: New session 1 of user core.
Jul 6 23:45:50.299442 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:45:50.314197 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:45:50.326314 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:45:50.341198 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:45:50.344446 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:45:50.471141 systemd[1562]: Queued start job for default target default.target.
Jul 6 23:45:50.487311 systemd[1562]: Created slice app.slice - User Application Slice.
Jul 6 23:45:50.487337 systemd[1562]: Reached target paths.target - Paths.
Jul 6 23:45:50.487352 systemd[1562]: Reached target timers.target - Timers.
Jul 6 23:45:50.489018 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 6 23:45:50.501172 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 6 23:45:50.501315 systemd[1562]: Reached target sockets.target - Sockets.
Jul 6 23:45:50.501334 systemd[1562]: Reached target basic.target - Basic System.
Jul 6 23:45:50.501373 systemd[1562]: Reached target default.target - Main User Target.
Jul 6 23:45:50.501407 systemd[1562]: Startup finished in 149ms.
Jul 6 23:45:50.502173 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 6 23:45:50.504169 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 6 23:45:50.573026 systemd[1]: Started sshd@1-10.0.0.18:22-10.0.0.1:34304.service - OpenSSH per-connection server daemon (10.0.0.1:34304).
Jul 6 23:45:50.610732 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 34304 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:45:50.612789 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:45:50.617465 systemd-logind[1440]: New session 2 of user core.
Jul 6 23:45:50.628100 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 6 23:45:50.686518 sshd[1573]: pam_unix(sshd:session): session closed for user core
Jul 6 23:45:50.699987 systemd[1]: sshd@1-10.0.0.18:22-10.0.0.1:34304.service: Deactivated successfully.
Jul 6 23:45:50.701877 systemd[1]: session-2.scope: Deactivated successfully.
Jul 6 23:45:50.703661 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit.
Jul 6 23:45:50.715233 systemd[1]: Started sshd@2-10.0.0.18:22-10.0.0.1:34312.service - OpenSSH per-connection server daemon (10.0.0.1:34312).
Jul 6 23:45:50.716813 systemd-logind[1440]: Removed session 2.
Jul 6 23:45:50.747510 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 34312 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:45:50.749498 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:45:50.753872 systemd-logind[1440]: New session 3 of user core.
Jul 6 23:45:50.763142 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 6 23:45:50.813984 sshd[1580]: pam_unix(sshd:session): session closed for user core
Jul 6 23:45:50.831833 systemd[1]: sshd@2-10.0.0.18:22-10.0.0.1:34312.service: Deactivated successfully.
Jul 6 23:45:50.833653 systemd[1]: session-3.scope: Deactivated successfully.
Jul 6 23:45:50.835346 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit.
Jul 6 23:45:50.844323 systemd[1]: Started sshd@3-10.0.0.18:22-10.0.0.1:34322.service - OpenSSH per-connection server daemon (10.0.0.1:34322).
Jul 6 23:45:50.845322 systemd-logind[1440]: Removed session 3.
Jul 6 23:45:50.873671 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 34322 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:45:50.875513 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:45:50.879535 systemd-logind[1440]: New session 4 of user core.
Jul 6 23:45:50.886092 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:45:50.942360 sshd[1587]: pam_unix(sshd:session): session closed for user core
Jul 6 23:45:50.953980 systemd[1]: sshd@3-10.0.0.18:22-10.0.0.1:34322.service: Deactivated successfully.
Jul 6 23:45:50.955833 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:45:50.957180 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:45:50.958447 systemd[1]: Started sshd@4-10.0.0.18:22-10.0.0.1:34328.service - OpenSSH per-connection server daemon (10.0.0.1:34328).
Jul 6 23:45:50.959281 systemd-logind[1440]: Removed session 4.
Jul 6 23:45:50.991261 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 34328 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:45:50.992743 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:45:50.996810 systemd-logind[1440]: New session 5 of user core.
Jul 6 23:45:51.006088 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:45:51.066613 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:45:51.066997 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:45:51.084569 sudo[1597]: pam_unix(sudo:session): session closed for user root
Jul 6 23:45:51.086846 sshd[1594]: pam_unix(sshd:session): session closed for user core
Jul 6 23:45:51.099792 systemd[1]: sshd@4-10.0.0.18:22-10.0.0.1:34328.service: Deactivated successfully.
Jul 6 23:45:51.102373 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:45:51.103916 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:45:51.119425 systemd[1]: Started sshd@5-10.0.0.18:22-10.0.0.1:34338.service - OpenSSH per-connection server daemon (10.0.0.1:34338).
Jul 6 23:45:51.120465 systemd-logind[1440]: Removed session 5.
Jul 6 23:45:51.147996 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 34338 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:45:51.149735 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:45:51.153933 systemd-logind[1440]: New session 6 of user core.
Jul 6 23:45:51.164098 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:45:51.220071 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:45:51.220437 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:45:51.224912 sudo[1606]: pam_unix(sudo:session): session closed for user root
Jul 6 23:45:51.232057 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 6 23:45:51.232422 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:45:51.253338 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 6 23:45:51.255221 auditctl[1609]: No rules
Jul 6 23:45:51.257027 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:45:51.257420 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 6 23:45:51.259655 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 6 23:45:51.293706 augenrules[1627]: No rules
Jul 6 23:45:51.295844 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 6 23:45:51.297484 sudo[1605]: pam_unix(sudo:session): session closed for user root
Jul 6 23:45:51.299879 sshd[1602]: pam_unix(sshd:session): session closed for user core
Jul 6 23:45:51.309993 systemd[1]: sshd@5-10.0.0.18:22-10.0.0.1:34338.service: Deactivated successfully.
Jul 6 23:45:51.312815 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:45:51.315227 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:45:51.325513 systemd[1]: Started sshd@6-10.0.0.18:22-10.0.0.1:34348.service - OpenSSH per-connection server daemon (10.0.0.1:34348).
Jul 6 23:45:51.326643 systemd-logind[1440]: Removed session 6.
Jul 6 23:45:51.353931 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 34348 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:45:51.355571 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:45:51.359737 systemd-logind[1440]: New session 7 of user core.
Jul 6 23:45:51.369243 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:45:51.424924 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:45:51.425297 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:45:51.990674 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 6 23:45:51.991041 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:45:52.698157 dockerd[1657]: time="2025-07-06T23:45:52.698032349Z" level=info msg="Starting up"
Jul 6 23:45:53.585862 dockerd[1657]: time="2025-07-06T23:45:53.585770479Z" level=info msg="Loading containers: start."
Jul 6 23:45:53.719978 kernel: Initializing XFRM netlink socket
Jul 6 23:45:53.807543 systemd-networkd[1401]: docker0: Link UP
Jul 6 23:45:53.831046 dockerd[1657]: time="2025-07-06T23:45:53.830986648Z" level=info msg="Loading containers: done."
Jul 6 23:45:53.856694 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck476409656-merged.mount: Deactivated successfully.
Jul 6 23:45:53.860170 dockerd[1657]: time="2025-07-06T23:45:53.860118866Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:45:53.860288 dockerd[1657]: time="2025-07-06T23:45:53.860255757Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 6 23:45:53.860455 dockerd[1657]: time="2025-07-06T23:45:53.860423641Z" level=info msg="Daemon has completed initialization"
Jul 6 23:45:53.908431 dockerd[1657]: time="2025-07-06T23:45:53.908341993Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:45:53.908617 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:45:54.547806 containerd[1452]: time="2025-07-06T23:45:54.547750409Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 6 23:45:55.330742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381809070.mount: Deactivated successfully.
Jul 6 23:45:56.750240 containerd[1452]: time="2025-07-06T23:45:56.750169108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:45:56.750996 containerd[1452]: time="2025-07-06T23:45:56.750948792Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099"
Jul 6 23:45:56.752380 containerd[1452]: time="2025-07-06T23:45:56.752344278Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:45:56.755117 containerd[1452]: time="2025-07-06T23:45:56.755078133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:45:56.759314 containerd[1452]: time="2025-07-06T23:45:56.759280093Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.211476457s"
Jul 6 23:45:56.759358 containerd[1452]: time="2025-07-06T23:45:56.759316301Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 6 23:45:56.760244 containerd[1452]: time="2025-07-06T23:45:56.760219074Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 6 23:45:58.543188 containerd[1452]: time="2025-07-06T23:45:58.543113858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:45:58.544074 containerd[1452]: time="2025-07-06T23:45:58.544000371Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946"
Jul 6 23:45:58.545480 containerd[1452]: time="2025-07-06T23:45:58.545429434Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:45:58.548562 containerd[1452]: time="2025-07-06T23:45:58.548524208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:45:58.549510 containerd[1452]: time="2025-07-06T23:45:58.549461120Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.78921225s"
Jul 6 23:45:58.549510 containerd[1452]: time="2025-07-06T23:45:58.549498447Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 6 23:45:58.550162 containerd[1452]: time="2025-07-06T23:45:58.550136846Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 6 23:45:59.781591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:45:59.794202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:46:00.077467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:46:00.082342 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:46:00.376262 kubelet[1874]: E0706 23:46:00.375911 1874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:46:00.383066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:46:00.383381 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:46:00.662997 containerd[1452]: time="2025-07-06T23:46:00.662810211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:00.664061 containerd[1452]: time="2025-07-06T23:46:00.664023192Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055"
Jul 6 23:46:00.665836 containerd[1452]: time="2025-07-06T23:46:00.665788548Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:00.668746 containerd[1452]: time="2025-07-06T23:46:00.668678850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:00.669651 containerd[1452]: time="2025-07-06T23:46:00.669613466Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 2.119443389s"
Jul 6 23:46:00.669695 containerd[1452]: time="2025-07-06T23:46:00.669651010Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 6 23:46:00.670221 containerd[1452]: time="2025-07-06T23:46:00.670194005Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 6 23:46:02.157881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753349424.mount: Deactivated successfully.
Jul 6 23:46:02.441011 containerd[1452]: time="2025-07-06T23:46:02.440841506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:02.441820 containerd[1452]: time="2025-07-06T23:46:02.441774582Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746"
Jul 6 23:46:02.443200 containerd[1452]: time="2025-07-06T23:46:02.443147229Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:02.445484 containerd[1452]: time="2025-07-06T23:46:02.445427524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:02.445987 containerd[1452]: time="2025-07-06T23:46:02.445959933Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.775731703s"
Jul 6 23:46:02.446033 containerd[1452]: time="2025-07-06T23:46:02.445991560Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 6 23:46:02.446554 containerd[1452]: time="2025-07-06T23:46:02.446513503Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 6 23:46:03.005975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870155381.mount: Deactivated successfully.
Jul 6 23:46:04.021907 containerd[1452]: time="2025-07-06T23:46:04.021817607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:04.022783 containerd[1452]: time="2025-07-06T23:46:04.022677191Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jul 6 23:46:04.024210 containerd[1452]: time="2025-07-06T23:46:04.024159061Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:04.027534 containerd[1452]: time="2025-07-06T23:46:04.027482976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:04.028660 containerd[1452]: time="2025-07-06T23:46:04.028633398Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.582088607s"
Jul 6 23:46:04.028715 containerd[1452]: time="2025-07-06T23:46:04.028664408Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 6 23:46:04.029343 containerd[1452]: time="2025-07-06T23:46:04.029279819Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:46:04.514660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999695880.mount: Deactivated successfully.
Jul 6 23:46:04.520889 containerd[1452]: time="2025-07-06T23:46:04.520813140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:04.521619 containerd[1452]: time="2025-07-06T23:46:04.521559885Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 6 23:46:04.522903 containerd[1452]: time="2025-07-06T23:46:04.522852737Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:04.525365 containerd[1452]: time="2025-07-06T23:46:04.525323826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:04.526028 containerd[1452]: time="2025-07-06T23:46:04.525994283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 496.6811ms"
Jul 6 23:46:04.526028 containerd[1452]: time="2025-07-06T23:46:04.526026089Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 6 23:46:04.526495 containerd[1452]: time="2025-07-06T23:46:04.526416647Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 6 23:46:05.070119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950252141.mount: Deactivated successfully.
Jul 6 23:46:06.808369 containerd[1452]: time="2025-07-06T23:46:06.808295087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:06.809057 containerd[1452]: time="2025-07-06T23:46:06.809006170Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175"
Jul 6 23:46:06.810360 containerd[1452]: time="2025-07-06T23:46:06.810325501Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:06.813855 containerd[1452]: time="2025-07-06T23:46:06.813801677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:46:06.814911 containerd[1452]: time="2025-07-06T23:46:06.814874204Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.288422824s"
Jul 6 23:46:06.814911 containerd[1452]: time="2025-07-06T23:46:06.814906446Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 6 23:46:10.531611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 6 23:46:10.541119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:46:10.718707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:46:10.723205 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:46:10.975682 kubelet[2032]: E0706 23:46:10.975537 2032 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:46:10.980608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:46:10.980902 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:46:12.205420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:46:12.220139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:46:12.243181 systemd[1]: Reloading requested from client PID 2047 ('systemctl') (unit session-7.scope)...
Jul 6 23:46:12.243198 systemd[1]: Reloading...
Jul 6 23:46:12.336966 zram_generator::config[2092]: No configuration found.
Jul 6 23:46:12.520482 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:46:12.598254 systemd[1]: Reloading finished in 354 ms.
Jul 6 23:46:12.655133 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:46:12.658705 systemd[1]: kubelet.service: Deactivated successfully.
Jul 6 23:46:12.658983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:46:12.660761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:46:12.837980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:46:12.843133 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:46:12.940839 kubelet[2136]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:46:12.940839 kubelet[2136]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:46:12.940839 kubelet[2136]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:46:12.941311 kubelet[2136]: I0706 23:46:12.940938 2136 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:46:13.380003 kubelet[2136]: I0706 23:46:13.379942 2136 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 6 23:46:13.380003 kubelet[2136]: I0706 23:46:13.379985 2136 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:46:13.380247 kubelet[2136]: I0706 23:46:13.380223 2136 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 6 23:46:13.404608 kubelet[2136]: E0706 23:46:13.404546 2136 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 6 23:46:13.405778 kubelet[2136]: I0706 23:46:13.405727 2136 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:46:13.414586 kubelet[2136]: E0706 23:46:13.414546 2136 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 6 23:46:13.414586 kubelet[2136]: I0706 23:46:13.414585 2136 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 6 23:46:13.421076 kubelet[2136]: I0706 23:46:13.421049 2136 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:46:13.421356 kubelet[2136]: I0706 23:46:13.421312 2136 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:46:13.421556 kubelet[2136]: I0706 23:46:13.421348 2136 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:46:13.421665 kubelet[2136]: I0706 23:46:13.421568 2136 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:46:13.421665 kubelet[2136]: I0706 23:46:13.421579 2136 container_manager_linux.go:303] "Creating device plugin manager"
Jul 6 23:46:13.421783 kubelet[2136]: I0706 23:46:13.421760 2136 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:46:13.424500 kubelet[2136]: I0706 23:46:13.424325 2136 kubelet.go:480] "Attempting to sync node with API server"
Jul 6 23:46:13.424500 kubelet[2136]: I0706 23:46:13.424352 2136 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:46:13.424500 kubelet[2136]: I0706 23:46:13.424382 2136 kubelet.go:386] "Adding apiserver pod source"
Jul 6 23:46:13.427189 kubelet[2136]: I0706 23:46:13.427072 2136 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:46:13.434471 kubelet[2136]: E0706 23:46:13.434191 2136 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 6 23:46:13.434739 kubelet[2136]: I0706 23:46:13.434684 2136 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 6 23:46:13.435839 kubelet[2136]: I0706 23:46:13.435649 2136 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 6 23:46:13.435839 kubelet[2136]: E0706 23:46:13.435716 2136 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 6 23:46:13.437604 kubelet[2136]: W0706 23:46:13.436811 2136
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:46:13.440703 kubelet[2136]: I0706 23:46:13.440658 2136 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:46:13.440761 kubelet[2136]: I0706 23:46:13.440741 2136 server.go:1289] "Started kubelet" Jul 6 23:46:13.441786 kubelet[2136]: I0706 23:46:13.441598 2136 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:46:13.443991 kubelet[2136]: I0706 23:46:13.443264 2136 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:46:13.445281 kubelet[2136]: I0706 23:46:13.445255 2136 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:46:13.445379 kubelet[2136]: I0706 23:46:13.445333 2136 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:46:13.447027 kubelet[2136]: E0706 23:46:13.445127 2136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.18:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.18:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fce447246022e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:46:13.440700974 +0000 UTC m=+0.591273813,LastTimestamp:2025-07-06 23:46:13.440700974 +0000 UTC m=+0.591273813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:46:13.447027 kubelet[2136]: I0706 23:46:13.443265 2136 server.go:255] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:46:13.447027 kubelet[2136]: I0706 23:46:13.446943 2136 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:46:13.447027 kubelet[2136]: I0706 23:46:13.443260 2136 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:46:13.447027 kubelet[2136]: E0706 23:46:13.446954 2136 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:46:13.447263 kubelet[2136]: E0706 23:46:13.446979 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:13.447315 kubelet[2136]: E0706 23:46:13.447289 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="200ms" Jul 6 23:46:13.447449 kubelet[2136]: I0706 23:46:13.447431 2136 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:46:13.447529 kubelet[2136]: I0706 23:46:13.447511 2136 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:46:13.447606 kubelet[2136]: I0706 23:46:13.447581 2136 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:46:13.448733 kubelet[2136]: E0706 23:46:13.448286 2136 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:46:13.448733 kubelet[2136]: I0706 23:46:13.448344 2136 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:46:13.448733 kubelet[2136]: I0706 23:46:13.448498 2136 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:46:13.466504 kubelet[2136]: I0706 23:46:13.466473 2136 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:46:13.466504 kubelet[2136]: I0706 23:46:13.466490 2136 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:46:13.466504 kubelet[2136]: I0706 23:46:13.466510 2136 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:13.471919 kubelet[2136]: I0706 23:46:13.471881 2136 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:46:13.474830 kubelet[2136]: I0706 23:46:13.474216 2136 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:46:13.474830 kubelet[2136]: I0706 23:46:13.474562 2136 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:46:13.474830 kubelet[2136]: I0706 23:46:13.474614 2136 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:46:13.474830 kubelet[2136]: I0706 23:46:13.474627 2136 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:46:13.474830 kubelet[2136]: E0706 23:46:13.474787 2136 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:46:13.476034 kubelet[2136]: E0706 23:46:13.475504 2136 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:46:13.548342 kubelet[2136]: E0706 23:46:13.548299 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:13.575542 kubelet[2136]: E0706 23:46:13.575493 2136 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:46:13.648545 kubelet[2136]: E0706 23:46:13.648390 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:13.648545 kubelet[2136]: E0706 23:46:13.648397 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="400ms" Jul 6 23:46:13.700526 kubelet[2136]: I0706 23:46:13.700468 2136 policy_none.go:49] "None policy: Start" Jul 6 23:46:13.700526 kubelet[2136]: I0706 23:46:13.700530 2136 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:46:13.700671 kubelet[2136]: I0706 23:46:13.700566 2136 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:46:13.709537 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Jul 6 23:46:13.724893 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:46:13.728627 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:46:13.749431 kubelet[2136]: E0706 23:46:13.749386 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:13.761114 kubelet[2136]: W0706 23:46:13.761055 2136 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device Jul 6 23:46:13.769456 kubelet[2136]: E0706 23:46:13.769249 2136 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:46:13.769609 kubelet[2136]: I0706 23:46:13.769564 2136 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:46:13.769717 kubelet[2136]: I0706 23:46:13.769623 2136 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:46:13.770029 kubelet[2136]: I0706 23:46:13.769988 2136 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:46:13.771053 kubelet[2136]: E0706 23:46:13.770977 2136 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:46:13.771151 kubelet[2136]: E0706 23:46:13.771056 2136 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:46:13.788919 systemd[1]: Created slice kubepods-burstable-pod11585b572b37ef8d8aa47edfd64001d9.slice - libcontainer container kubepods-burstable-pod11585b572b37ef8d8aa47edfd64001d9.slice. 
Jul 6 23:46:13.799791 kubelet[2136]: E0706 23:46:13.799748 2136 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:46:13.803200 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 6 23:46:13.805202 kubelet[2136]: E0706 23:46:13.805168 2136 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:46:13.806837 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 6 23:46:13.808559 kubelet[2136]: E0706 23:46:13.808525 2136 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:46:13.850075 kubelet[2136]: I0706 23:46:13.850024 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:46:13.850139 kubelet[2136]: I0706 23:46:13.850077 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11585b572b37ef8d8aa47edfd64001d9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"11585b572b37ef8d8aa47edfd64001d9\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:13.850139 kubelet[2136]: I0706 23:46:13.850106 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/11585b572b37ef8d8aa47edfd64001d9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"11585b572b37ef8d8aa47edfd64001d9\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:13.850181 kubelet[2136]: I0706 23:46:13.850133 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11585b572b37ef8d8aa47edfd64001d9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"11585b572b37ef8d8aa47edfd64001d9\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:13.850181 kubelet[2136]: I0706 23:46:13.850164 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:13.850237 kubelet[2136]: I0706 23:46:13.850211 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:13.850263 kubelet[2136]: I0706 23:46:13.850244 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:13.850289 kubelet[2136]: I0706 23:46:13.850267 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:13.850354 kubelet[2136]: I0706 23:46:13.850313 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:13.871321 kubelet[2136]: I0706 23:46:13.871288 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:46:13.871735 kubelet[2136]: E0706 23:46:13.871703 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jul 6 23:46:14.049631 kubelet[2136]: E0706 23:46:14.049456 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="800ms" Jul 6 23:46:14.073881 kubelet[2136]: I0706 23:46:14.073837 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:46:14.074201 kubelet[2136]: E0706 23:46:14.074166 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jul 6 23:46:14.100725 kubelet[2136]: E0706 23:46:14.100687 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 
23:46:14.101375 containerd[1452]: time="2025-07-06T23:46:14.101329607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:11585b572b37ef8d8aa47edfd64001d9,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:14.106588 kubelet[2136]: E0706 23:46:14.106552 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:14.107081 containerd[1452]: time="2025-07-06T23:46:14.107043356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:14.109396 kubelet[2136]: E0706 23:46:14.109372 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:14.109850 containerd[1452]: time="2025-07-06T23:46:14.109809891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:14.344459 kubelet[2136]: E0706 23:46:14.344333 2136 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:46:14.475552 kubelet[2136]: I0706 23:46:14.475509 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:46:14.475963 kubelet[2136]: E0706 23:46:14.475916 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Jul 6 23:46:14.605968 kubelet[2136]: E0706 
23:46:14.605846 2136 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:46:14.640443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197901573.mount: Deactivated successfully. Jul 6 23:46:14.651437 containerd[1452]: time="2025-07-06T23:46:14.651391641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:14.652318 containerd[1452]: time="2025-07-06T23:46:14.652236981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:46:14.653551 containerd[1452]: time="2025-07-06T23:46:14.653500192Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:14.654603 containerd[1452]: time="2025-07-06T23:46:14.654559512Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:14.655978 containerd[1452]: time="2025-07-06T23:46:14.655942775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:46:14.657102 containerd[1452]: time="2025-07-06T23:46:14.657048301Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:14.658050 containerd[1452]: 
time="2025-07-06T23:46:14.657984171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:46:14.660496 containerd[1452]: time="2025-07-06T23:46:14.660453568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:14.662299 containerd[1452]: time="2025-07-06T23:46:14.662265101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 552.371208ms" Jul 6 23:46:14.662885 containerd[1452]: time="2025-07-06T23:46:14.662851277Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.725283ms" Jul 6 23:46:14.663472 containerd[1452]: time="2025-07-06T23:46:14.663442797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 562.032067ms" Jul 6 23:46:14.734251 kubelet[2136]: E0706 23:46:14.734173 2136 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:46:14.833968 containerd[1452]: time="2025-07-06T23:46:14.832872214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:14.833968 containerd[1452]: time="2025-07-06T23:46:14.833166776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:14.833968 containerd[1452]: time="2025-07-06T23:46:14.833196889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:14.833968 containerd[1452]: time="2025-07-06T23:46:14.833315958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:14.833968 containerd[1452]: time="2025-07-06T23:46:14.833283608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:14.833968 containerd[1452]: time="2025-07-06T23:46:14.833358374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:14.833968 containerd[1452]: time="2025-07-06T23:46:14.833373054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:14.834690 containerd[1452]: time="2025-07-06T23:46:14.834422818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:14.835656 containerd[1452]: time="2025-07-06T23:46:14.835434677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:14.836377 containerd[1452]: time="2025-07-06T23:46:14.836314884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:14.836416 containerd[1452]: time="2025-07-06T23:46:14.836385548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:14.836662 containerd[1452]: time="2025-07-06T23:46:14.836561062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:14.850771 kubelet[2136]: E0706 23:46:14.850705 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="1.6s" Jul 6 23:46:14.867087 systemd[1]: Started cri-containerd-2ac55261dc13a024a6d2707e57c03663e24f86d5c06c41e9869a427d409aae8f.scope - libcontainer container 2ac55261dc13a024a6d2707e57c03663e24f86d5c06c41e9869a427d409aae8f. Jul 6 23:46:14.868856 systemd[1]: Started cri-containerd-bd154265921428d0e6280a29ce0adcaf1555f2959985033a011899318f6c0a93.scope - libcontainer container bd154265921428d0e6280a29ce0adcaf1555f2959985033a011899318f6c0a93. Jul 6 23:46:14.871117 systemd[1]: Started cri-containerd-f3f7392fbf6b5a528b60d6143ba89019fca30a0430327c71953f5a3ad06cb093.scope - libcontainer container f3f7392fbf6b5a528b60d6143ba89019fca30a0430327c71953f5a3ad06cb093. 
Jul 6 23:46:14.887368 kubelet[2136]: E0706 23:46:14.887281 2136 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:46:14.910572 containerd[1452]: time="2025-07-06T23:46:14.910444186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd154265921428d0e6280a29ce0adcaf1555f2959985033a011899318f6c0a93\"" Jul 6 23:46:14.912676 kubelet[2136]: E0706 23:46:14.912386 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:14.916530 containerd[1452]: time="2025-07-06T23:46:14.916290991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3f7392fbf6b5a528b60d6143ba89019fca30a0430327c71953f5a3ad06cb093\"" Jul 6 23:46:14.917623 kubelet[2136]: E0706 23:46:14.917587 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:14.919706 containerd[1452]: time="2025-07-06T23:46:14.919498332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:11585b572b37ef8d8aa47edfd64001d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ac55261dc13a024a6d2707e57c03663e24f86d5c06c41e9869a427d409aae8f\"" Jul 6 23:46:14.919706 containerd[1452]: time="2025-07-06T23:46:14.919532556Z" level=info msg="CreateContainer within sandbox 
\"bd154265921428d0e6280a29ce0adcaf1555f2959985033a011899318f6c0a93\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:46:14.920657 kubelet[2136]: E0706 23:46:14.920509 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:14.923206 containerd[1452]: time="2025-07-06T23:46:14.923159493Z" level=info msg="CreateContainer within sandbox \"f3f7392fbf6b5a528b60d6143ba89019fca30a0430327c71953f5a3ad06cb093\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:46:14.925888 containerd[1452]: time="2025-07-06T23:46:14.925856537Z" level=info msg="CreateContainer within sandbox \"2ac55261dc13a024a6d2707e57c03663e24f86d5c06c41e9869a427d409aae8f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:46:14.944484 containerd[1452]: time="2025-07-06T23:46:14.944452292Z" level=info msg="CreateContainer within sandbox \"bd154265921428d0e6280a29ce0adcaf1555f2959985033a011899318f6c0a93\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"270b233af7c9d01abba40f6f3a8c1c56bd06e975e594f467ae775665e8aa56a6\"" Jul 6 23:46:14.945203 containerd[1452]: time="2025-07-06T23:46:14.945167393Z" level=info msg="StartContainer for \"270b233af7c9d01abba40f6f3a8c1c56bd06e975e594f467ae775665e8aa56a6\"" Jul 6 23:46:14.949263 containerd[1452]: time="2025-07-06T23:46:14.949239497Z" level=info msg="CreateContainer within sandbox \"f3f7392fbf6b5a528b60d6143ba89019fca30a0430327c71953f5a3ad06cb093\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bc687abf411dff651e21c31c6c3adf2ef3773b024c92d4d2a125b9cf673d87a7\"" Jul 6 23:46:14.949740 containerd[1452]: time="2025-07-06T23:46:14.949694812Z" level=info msg="StartContainer for \"bc687abf411dff651e21c31c6c3adf2ef3773b024c92d4d2a125b9cf673d87a7\"" Jul 6 23:46:14.955425 containerd[1452]: 
time="2025-07-06T23:46:14.955329152Z" level=info msg="CreateContainer within sandbox \"2ac55261dc13a024a6d2707e57c03663e24f86d5c06c41e9869a427d409aae8f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"827653302585c9d16cdd9b139a75869ddedde2e31e583458b46579de19b1f8ec\"" Jul 6 23:46:14.955674 containerd[1452]: time="2025-07-06T23:46:14.955655381Z" level=info msg="StartContainer for \"827653302585c9d16cdd9b139a75869ddedde2e31e583458b46579de19b1f8ec\"" Jul 6 23:46:14.974045 systemd[1]: Started cri-containerd-270b233af7c9d01abba40f6f3a8c1c56bd06e975e594f467ae775665e8aa56a6.scope - libcontainer container 270b233af7c9d01abba40f6f3a8c1c56bd06e975e594f467ae775665e8aa56a6. Jul 6 23:46:14.978481 systemd[1]: Started cri-containerd-bc687abf411dff651e21c31c6c3adf2ef3773b024c92d4d2a125b9cf673d87a7.scope - libcontainer container bc687abf411dff651e21c31c6c3adf2ef3773b024c92d4d2a125b9cf673d87a7. Jul 6 23:46:14.982190 systemd[1]: Started cri-containerd-827653302585c9d16cdd9b139a75869ddedde2e31e583458b46579de19b1f8ec.scope - libcontainer container 827653302585c9d16cdd9b139a75869ddedde2e31e583458b46579de19b1f8ec. 
Jul 6 23:46:15.026964 containerd[1452]: time="2025-07-06T23:46:15.026754539Z" level=info msg="StartContainer for \"bc687abf411dff651e21c31c6c3adf2ef3773b024c92d4d2a125b9cf673d87a7\" returns successfully" Jul 6 23:46:15.026964 containerd[1452]: time="2025-07-06T23:46:15.026772737Z" level=info msg="StartContainer for \"270b233af7c9d01abba40f6f3a8c1c56bd06e975e594f467ae775665e8aa56a6\" returns successfully" Jul 6 23:46:15.031773 containerd[1452]: time="2025-07-06T23:46:15.031729313Z" level=info msg="StartContainer for \"827653302585c9d16cdd9b139a75869ddedde2e31e583458b46579de19b1f8ec\" returns successfully" Jul 6 23:46:15.277803 kubelet[2136]: I0706 23:46:15.277666 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:46:15.485032 kubelet[2136]: E0706 23:46:15.484769 2136 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:46:15.485032 kubelet[2136]: E0706 23:46:15.484949 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:15.485908 kubelet[2136]: E0706 23:46:15.485766 2136 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:46:15.486210 kubelet[2136]: E0706 23:46:15.486166 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:15.488226 kubelet[2136]: E0706 23:46:15.488130 2136 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:46:15.488365 kubelet[2136]: E0706 23:46:15.488305 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:16.186980 kubelet[2136]: I0706 23:46:16.186694 2136 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:46:16.186980 kubelet[2136]: E0706 23:46:16.186757 2136 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 6 23:46:16.196474 kubelet[2136]: E0706 23:46:16.196422 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:16.297114 kubelet[2136]: E0706 23:46:16.297052 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:16.397787 kubelet[2136]: E0706 23:46:16.397684 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:16.490134 kubelet[2136]: E0706 23:46:16.489977 2136 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:46:16.490134 kubelet[2136]: E0706 23:46:16.490080 2136 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:46:16.490134 kubelet[2136]: E0706 23:46:16.490109 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:16.490387 kubelet[2136]: E0706 23:46:16.490242 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:16.498452 kubelet[2136]: E0706 23:46:16.498416 2136 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jul 6 23:46:16.599088 kubelet[2136]: E0706 23:46:16.599042 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:16.699313 kubelet[2136]: E0706 23:46:16.699238 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:16.799971 kubelet[2136]: E0706 23:46:16.799914 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:16.900503 kubelet[2136]: E0706 23:46:16.900447 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:17.001035 kubelet[2136]: E0706 23:46:17.000973 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:17.101860 kubelet[2136]: E0706 23:46:17.101732 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:17.202390 kubelet[2136]: E0706 23:46:17.202332 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:17.303039 kubelet[2136]: E0706 23:46:17.302981 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:17.403749 kubelet[2136]: E0706 23:46:17.403595 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:17.504103 kubelet[2136]: E0706 23:46:17.504065 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:17.647454 kubelet[2136]: I0706 23:46:17.647400 2136 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:17.655823 kubelet[2136]: I0706 23:46:17.655711 2136 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:46:17.659329 kubelet[2136]: I0706 23:46:17.659298 2136 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:18.287123 systemd[1]: Reloading requested from client PID 2425 ('systemctl') (unit session-7.scope)... Jul 6 23:46:18.287145 systemd[1]: Reloading... Jul 6 23:46:18.371987 zram_generator::config[2465]: No configuration found. Jul 6 23:46:18.432137 kubelet[2136]: I0706 23:46:18.432093 2136 apiserver.go:52] "Watching apiserver" Jul 6 23:46:18.470765 kubelet[2136]: E0706 23:46:18.470724 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:18.470856 kubelet[2136]: E0706 23:46:18.470802 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:18.470903 kubelet[2136]: E0706 23:46:18.470884 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:18.537289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:18.547534 kubelet[2136]: I0706 23:46:18.547474 2136 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:46:18.635070 systemd[1]: Reloading finished in 347 ms. 
Jul 6 23:46:18.679801 kubelet[2136]: I0706 23:46:18.679676 2136 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:46:18.679741 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:18.680100 kubelet[2136]: E0706 23:46:18.679674 2136 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.184fce447246022e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:46:13.440700974 +0000 UTC m=+0.591273813,LastTimestamp:2025-07-06 23:46:13.440700974 +0000 UTC m=+0.591273813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:46:18.703415 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:46:18.703762 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:18.703824 systemd[1]: kubelet.service: Consumed 1.346s CPU time, 132.1M memory peak, 0B memory swap peak. Jul 6 23:46:18.715176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:18.916894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:18.922540 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:46:18.973737 kubelet[2509]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:46:18.973737 kubelet[2509]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:46:18.973737 kubelet[2509]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:46:18.974236 kubelet[2509]: I0706 23:46:18.973803 2509 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:46:18.982512 kubelet[2509]: I0706 23:46:18.982475 2509 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:46:18.983115 kubelet[2509]: I0706 23:46:18.982602 2509 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:46:18.983422 kubelet[2509]: I0706 23:46:18.983387 2509 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:46:18.985169 kubelet[2509]: I0706 23:46:18.985140 2509 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 6 23:46:18.987483 kubelet[2509]: I0706 23:46:18.987447 2509 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:46:18.991270 kubelet[2509]: E0706 23:46:18.991220 2509 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:46:18.991270 kubelet[2509]: I0706 23:46:18.991270 2509 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jul 6 23:46:18.996413 kubelet[2509]: I0706 23:46:18.996384 2509 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:46:18.996657 kubelet[2509]: I0706 23:46:18.996627 2509 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:46:18.996794 kubelet[2509]: I0706 23:46:18.996649 2509 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 
23:46:18.996902 kubelet[2509]: I0706 23:46:18.996796 2509 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:46:18.996902 kubelet[2509]: I0706 23:46:18.996813 2509 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:46:18.996902 kubelet[2509]: I0706 23:46:18.996860 2509 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:18.997097 kubelet[2509]: I0706 23:46:18.997081 2509 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:46:18.997260 kubelet[2509]: I0706 23:46:18.997109 2509 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:46:18.997260 kubelet[2509]: I0706 23:46:18.997131 2509 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:46:18.997260 kubelet[2509]: I0706 23:46:18.997151 2509 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:46:19.004330 kubelet[2509]: I0706 23:46:19.002361 2509 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:46:19.004330 kubelet[2509]: I0706 23:46:19.002911 2509 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:46:19.008267 kubelet[2509]: I0706 23:46:19.008231 2509 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:46:19.008900 kubelet[2509]: I0706 23:46:19.008360 2509 server.go:1289] "Started kubelet" Jul 6 23:46:19.008900 kubelet[2509]: I0706 23:46:19.008509 2509 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:46:19.009388 kubelet[2509]: I0706 23:46:19.009321 2509 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:46:19.012030 kubelet[2509]: I0706 23:46:19.011313 2509 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 
23:46:19.012030 kubelet[2509]: I0706 23:46:19.011548 2509 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:46:19.012281 kubelet[2509]: I0706 23:46:19.012254 2509 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:46:19.012630 kubelet[2509]: I0706 23:46:19.012608 2509 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:46:19.013945 kubelet[2509]: I0706 23:46:19.012752 2509 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:46:19.013945 kubelet[2509]: I0706 23:46:19.012861 2509 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:46:19.013945 kubelet[2509]: I0706 23:46:19.013067 2509 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:46:19.018535 kubelet[2509]: I0706 23:46:19.018500 2509 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:46:19.018728 kubelet[2509]: I0706 23:46:19.018714 2509 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:46:19.019163 kubelet[2509]: I0706 23:46:19.019108 2509 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:46:19.019610 kubelet[2509]: E0706 23:46:19.019574 2509 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:46:19.028573 kubelet[2509]: I0706 23:46:19.028519 2509 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:46:19.030000 kubelet[2509]: I0706 23:46:19.029900 2509 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:46:19.030000 kubelet[2509]: I0706 23:46:19.029964 2509 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:46:19.030000 kubelet[2509]: I0706 23:46:19.030003 2509 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:46:19.030115 kubelet[2509]: I0706 23:46:19.030013 2509 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:46:19.030115 kubelet[2509]: E0706 23:46:19.030068 2509 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:46:19.066462 kubelet[2509]: I0706 23:46:19.066397 2509 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:46:19.066462 kubelet[2509]: I0706 23:46:19.066443 2509 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:46:19.066462 kubelet[2509]: I0706 23:46:19.066466 2509 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:19.066714 kubelet[2509]: I0706 23:46:19.066656 2509 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:46:19.066714 kubelet[2509]: I0706 23:46:19.066674 2509 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:46:19.066714 kubelet[2509]: I0706 23:46:19.066698 2509 policy_none.go:49] "None policy: Start" Jul 6 23:46:19.066714 kubelet[2509]: I0706 23:46:19.066711 2509 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:46:19.066863 kubelet[2509]: I0706 23:46:19.066726 2509 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:46:19.066900 kubelet[2509]: I0706 23:46:19.066867 2509 state_mem.go:75] "Updated machine memory state" Jul 6 23:46:19.071759 kubelet[2509]: E0706 23:46:19.071509 2509 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:46:19.071940 kubelet[2509]: I0706 23:46:19.071796 
2509 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:46:19.071940 kubelet[2509]: I0706 23:46:19.071812 2509 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:46:19.077696 kubelet[2509]: E0706 23:46:19.077659 2509 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:46:19.081071 kubelet[2509]: I0706 23:46:19.080482 2509 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:46:19.131242 kubelet[2509]: I0706 23:46:19.131166 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:19.131411 kubelet[2509]: I0706 23:46:19.131382 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:46:19.131478 kubelet[2509]: I0706 23:46:19.131391 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:19.138519 kubelet[2509]: E0706 23:46:19.138259 2509 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:19.138606 kubelet[2509]: E0706 23:46:19.138558 2509 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:19.138640 kubelet[2509]: E0706 23:46:19.138601 2509 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:46:19.184151 kubelet[2509]: I0706 23:46:19.184020 2509 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:46:19.190025 kubelet[2509]: I0706 23:46:19.189989 2509 kubelet_node_status.go:124] "Node was previously registered" 
node="localhost" Jul 6 23:46:19.190128 kubelet[2509]: I0706 23:46:19.190091 2509 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:46:19.292509 sudo[2550]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:46:19.292898 sudo[2550]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:46:19.314027 kubelet[2509]: I0706 23:46:19.313954 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11585b572b37ef8d8aa47edfd64001d9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"11585b572b37ef8d8aa47edfd64001d9\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:19.314027 kubelet[2509]: I0706 23:46:19.314023 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:19.314161 kubelet[2509]: I0706 23:46:19.314078 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:19.314161 kubelet[2509]: I0706 23:46:19.314115 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 
23:46:19.314161 kubelet[2509]: I0706 23:46:19.314143 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:19.314355 kubelet[2509]: I0706 23:46:19.314164 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:46:19.314355 kubelet[2509]: I0706 23:46:19.314185 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11585b572b37ef8d8aa47edfd64001d9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"11585b572b37ef8d8aa47edfd64001d9\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:19.314355 kubelet[2509]: I0706 23:46:19.314233 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11585b572b37ef8d8aa47edfd64001d9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"11585b572b37ef8d8aa47edfd64001d9\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:19.314355 kubelet[2509]: I0706 23:46:19.314279 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 
23:46:19.439234 kubelet[2509]: E0706 23:46:19.439069 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:19.439234 kubelet[2509]: E0706 23:46:19.439072 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:19.439234 kubelet[2509]: E0706 23:46:19.439171 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:19.777539 sudo[2550]: pam_unix(sudo:session): session closed for user root Jul 6 23:46:19.998431 kubelet[2509]: I0706 23:46:19.998370 2509 apiserver.go:52] "Watching apiserver" Jul 6 23:46:20.013600 kubelet[2509]: I0706 23:46:20.013546 2509 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:46:20.044206 kubelet[2509]: I0706 23:46:20.043266 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:20.044206 kubelet[2509]: I0706 23:46:20.043527 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:20.044206 kubelet[2509]: E0706 23:46:20.043782 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:20.158200 kubelet[2509]: E0706 23:46:20.158142 2509 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:20.158646 kubelet[2509]: E0706 23:46:20.158326 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:20.159875 kubelet[2509]: E0706 23:46:20.159641 2509 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:20.159875 kubelet[2509]: E0706 23:46:20.159744 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:20.276122 kubelet[2509]: I0706 23:46:20.276024 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.275991011 podStartE2EDuration="3.275991011s" podCreationTimestamp="2025-07-06 23:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:20.158654505 +0000 UTC m=+1.231784852" watchObservedRunningTime="2025-07-06 23:46:20.275991011 +0000 UTC m=+1.349121368" Jul 6 23:46:20.286593 kubelet[2509]: I0706 23:46:20.286215 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.286197549 podStartE2EDuration="3.286197549s" podCreationTimestamp="2025-07-06 23:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:20.276338372 +0000 UTC m=+1.349468730" watchObservedRunningTime="2025-07-06 23:46:20.286197549 +0000 UTC m=+1.359327906" Jul 6 23:46:20.296297 kubelet[2509]: I0706 23:46:20.296110 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.2960855589999998 podStartE2EDuration="3.296085559s" podCreationTimestamp="2025-07-06 23:46:17 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:20.286420986 +0000 UTC m=+1.359551333" watchObservedRunningTime="2025-07-06 23:46:20.296085559 +0000 UTC m=+1.369215917" Jul 6 23:46:20.977068 sudo[1639]: pam_unix(sudo:session): session closed for user root Jul 6 23:46:20.979044 sshd[1635]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:20.983288 systemd[1]: sshd@6-10.0.0.18:22-10.0.0.1:34348.service: Deactivated successfully. Jul 6 23:46:20.985452 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:46:20.985652 systemd[1]: session-7.scope: Consumed 7.700s CPU time, 159.8M memory peak, 0B memory swap peak. Jul 6 23:46:20.986094 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:46:20.987067 systemd-logind[1440]: Removed session 7. Jul 6 23:46:21.044502 kubelet[2509]: E0706 23:46:21.044457 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:21.045016 kubelet[2509]: E0706 23:46:21.044631 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:21.045016 kubelet[2509]: E0706 23:46:21.044803 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:22.045552 kubelet[2509]: E0706 23:46:22.045501 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:22.569026 kubelet[2509]: E0706 23:46:22.568991 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:23.406171 kubelet[2509]: E0706 23:46:23.406134 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:24.115248 kubelet[2509]: I0706 23:46:24.115208 2509 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:46:24.115550 containerd[1452]: time="2025-07-06T23:46:24.115515278Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:46:24.115963 kubelet[2509]: I0706 23:46:24.115670 2509 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:46:25.252863 kubelet[2509]: I0706 23:46:25.252797 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bbee2d9a-7bc0-479b-bd39-0f902bcec4ea-kube-proxy\") pod \"kube-proxy-khdnt\" (UID: \"bbee2d9a-7bc0-479b-bd39-0f902bcec4ea\") " pod="kube-system/kube-proxy-khdnt" Jul 6 23:46:25.252863 kubelet[2509]: I0706 23:46:25.252856 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbee2d9a-7bc0-479b-bd39-0f902bcec4ea-lib-modules\") pod \"kube-proxy-khdnt\" (UID: \"bbee2d9a-7bc0-479b-bd39-0f902bcec4ea\") " pod="kube-system/kube-proxy-khdnt" Jul 6 23:46:25.252863 kubelet[2509]: I0706 23:46:25.252882 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbee2d9a-7bc0-479b-bd39-0f902bcec4ea-xtables-lock\") pod \"kube-proxy-khdnt\" (UID: \"bbee2d9a-7bc0-479b-bd39-0f902bcec4ea\") " pod="kube-system/kube-proxy-khdnt" Jul 6 23:46:25.253471 kubelet[2509]: I0706 23:46:25.252898 2509 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q7gb\" (UniqueName: \"kubernetes.io/projected/bbee2d9a-7bc0-479b-bd39-0f902bcec4ea-kube-api-access-9q7gb\") pod \"kube-proxy-khdnt\" (UID: \"bbee2d9a-7bc0-479b-bd39-0f902bcec4ea\") " pod="kube-system/kube-proxy-khdnt" Jul 6 23:46:25.263018 systemd[1]: Created slice kubepods-besteffort-podbbee2d9a_7bc0_479b_bd39_0f902bcec4ea.slice - libcontainer container kubepods-besteffort-podbbee2d9a_7bc0_479b_bd39_0f902bcec4ea.slice. Jul 6 23:46:25.279764 systemd[1]: Created slice kubepods-burstable-podaeec81f1_9f8f_4aa1_86f4_45ef34453f42.slice - libcontainer container kubepods-burstable-podaeec81f1_9f8f_4aa1_86f4_45ef34453f42.slice. Jul 6 23:46:25.325374 systemd[1]: Created slice kubepods-besteffort-pode6202cf7_5f5c_40a7_af62_43824356eaef.slice - libcontainer container kubepods-besteffort-pode6202cf7_5f5c_40a7_af62_43824356eaef.slice. Jul 6 23:46:25.353621 kubelet[2509]: I0706 23:46:25.353557 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-hostproc\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.353621 kubelet[2509]: I0706 23:46:25.353601 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cilium-cgroup\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.353621 kubelet[2509]: I0706 23:46:25.353619 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-hubble-tls\") pod \"cilium-w7n2z\" (UID: 
\"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.353839 kubelet[2509]: I0706 23:46:25.353674 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5b6t\" (UniqueName: \"kubernetes.io/projected/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-kube-api-access-f5b6t\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.353839 kubelet[2509]: I0706 23:46:25.353710 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttdnv\" (UniqueName: \"kubernetes.io/projected/e6202cf7-5f5c-40a7-af62-43824356eaef-kube-api-access-ttdnv\") pod \"cilium-operator-6c4d7847fc-8dhwm\" (UID: \"e6202cf7-5f5c-40a7-af62-43824356eaef\") " pod="kube-system/cilium-operator-6c4d7847fc-8dhwm" Jul 6 23:46:25.353839 kubelet[2509]: I0706 23:46:25.353735 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-bpf-maps\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.353839 kubelet[2509]: I0706 23:46:25.353755 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cni-path\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.353839 kubelet[2509]: I0706 23:46:25.353771 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-clustermesh-secrets\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 
6 23:46:25.353987 kubelet[2509]: I0706 23:46:25.353798 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-etc-cni-netd\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.353987 kubelet[2509]: I0706 23:46:25.353897 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-lib-modules\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.353987 kubelet[2509]: I0706 23:46:25.353914 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cilium-config-path\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.353987 kubelet[2509]: I0706 23:46:25.353969 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-host-proc-sys-kernel\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.353987 kubelet[2509]: I0706 23:46:25.353985 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-xtables-lock\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.354132 kubelet[2509]: I0706 23:46:25.354002 2509 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-host-proc-sys-net\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.354132 kubelet[2509]: I0706 23:46:25.354033 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6202cf7-5f5c-40a7-af62-43824356eaef-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8dhwm\" (UID: \"e6202cf7-5f5c-40a7-af62-43824356eaef\") " pod="kube-system/cilium-operator-6c4d7847fc-8dhwm" Jul 6 23:46:25.354132 kubelet[2509]: I0706 23:46:25.354067 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cilium-run\") pod \"cilium-w7n2z\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " pod="kube-system/cilium-w7n2z" Jul 6 23:46:25.577794 kubelet[2509]: E0706 23:46:25.577721 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:25.578398 containerd[1452]: time="2025-07-06T23:46:25.578338419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-khdnt,Uid:bbee2d9a-7bc0-479b-bd39-0f902bcec4ea,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:25.582166 kubelet[2509]: E0706 23:46:25.582129 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:25.582559 containerd[1452]: time="2025-07-06T23:46:25.582503850Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-w7n2z,Uid:aeec81f1-9f8f-4aa1-86f4-45ef34453f42,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:25.610332 containerd[1452]: time="2025-07-06T23:46:25.610221010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:25.610332 containerd[1452]: time="2025-07-06T23:46:25.610308829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:25.610530 containerd[1452]: time="2025-07-06T23:46:25.610337026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:25.610530 containerd[1452]: time="2025-07-06T23:46:25.610456149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:25.617136 containerd[1452]: time="2025-07-06T23:46:25.616883998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:25.617136 containerd[1452]: time="2025-07-06T23:46:25.616967316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:25.617136 containerd[1452]: time="2025-07-06T23:46:25.616981830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:25.617281 containerd[1452]: time="2025-07-06T23:46:25.617192010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:25.629787 kubelet[2509]: E0706 23:46:25.628433 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:25.630009 containerd[1452]: time="2025-07-06T23:46:25.629951648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8dhwm,Uid:e6202cf7-5f5c-40a7-af62-43824356eaef,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:25.642096 systemd[1]: Started cri-containerd-e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17.scope - libcontainer container e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17. Jul 6 23:46:25.649231 systemd[1]: Started cri-containerd-c6ce6bfe85d27d08698747d4508f23b9c106dcf722354657290ab23a497f2e37.scope - libcontainer container c6ce6bfe85d27d08698747d4508f23b9c106dcf722354657290ab23a497f2e37. Jul 6 23:46:25.661982 containerd[1452]: time="2025-07-06T23:46:25.661402132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:46:25.661982 containerd[1452]: time="2025-07-06T23:46:25.661544832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:46:25.661982 containerd[1452]: time="2025-07-06T23:46:25.661583253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:25.661982 containerd[1452]: time="2025-07-06T23:46:25.661791187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:46:25.678019 containerd[1452]: time="2025-07-06T23:46:25.677961713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7n2z,Uid:aeec81f1-9f8f-4aa1-86f4-45ef34453f42,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\"" Jul 6 23:46:25.679443 kubelet[2509]: E0706 23:46:25.679315 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:25.681958 containerd[1452]: time="2025-07-06T23:46:25.681545321Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:46:25.691099 systemd[1]: Started cri-containerd-c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546.scope - libcontainer container c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546. 
Jul 6 23:46:25.692036 containerd[1452]: time="2025-07-06T23:46:25.692004791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-khdnt,Uid:bbee2d9a-7bc0-479b-bd39-0f902bcec4ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6ce6bfe85d27d08698747d4508f23b9c106dcf722354657290ab23a497f2e37\"" Jul 6 23:46:25.693013 kubelet[2509]: E0706 23:46:25.692982 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:25.699023 containerd[1452]: time="2025-07-06T23:46:25.698979379Z" level=info msg="CreateContainer within sandbox \"c6ce6bfe85d27d08698747d4508f23b9c106dcf722354657290ab23a497f2e37\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:46:25.722440 containerd[1452]: time="2025-07-06T23:46:25.722349217Z" level=info msg="CreateContainer within sandbox \"c6ce6bfe85d27d08698747d4508f23b9c106dcf722354657290ab23a497f2e37\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fc89eb5f5f8d86a87c21b10127396fd883f4719b327ed735d08b6de98abd5883\"" Jul 6 23:46:25.723091 containerd[1452]: time="2025-07-06T23:46:25.723071152Z" level=info msg="StartContainer for \"fc89eb5f5f8d86a87c21b10127396fd883f4719b327ed735d08b6de98abd5883\"" Jul 6 23:46:25.732003 containerd[1452]: time="2025-07-06T23:46:25.731879108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8dhwm,Uid:e6202cf7-5f5c-40a7-af62-43824356eaef,Namespace:kube-system,Attempt:0,} returns sandbox id \"c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546\"" Jul 6 23:46:25.732634 kubelet[2509]: E0706 23:46:25.732602 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:25.752126 systemd[1]: Started 
cri-containerd-fc89eb5f5f8d86a87c21b10127396fd883f4719b327ed735d08b6de98abd5883.scope - libcontainer container fc89eb5f5f8d86a87c21b10127396fd883f4719b327ed735d08b6de98abd5883. Jul 6 23:46:25.784187 containerd[1452]: time="2025-07-06T23:46:25.784147194Z" level=info msg="StartContainer for \"fc89eb5f5f8d86a87c21b10127396fd883f4719b327ed735d08b6de98abd5883\" returns successfully" Jul 6 23:46:26.055150 kubelet[2509]: E0706 23:46:26.055111 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:30.613121 kubelet[2509]: E0706 23:46:30.613080 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:30.625824 kubelet[2509]: I0706 23:46:30.625501 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-khdnt" podStartSLOduration=5.625482978 podStartE2EDuration="5.625482978s" podCreationTimestamp="2025-07-06 23:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:26.063716446 +0000 UTC m=+7.136846803" watchObservedRunningTime="2025-07-06 23:46:30.625482978 +0000 UTC m=+11.698613335" Jul 6 23:46:31.068035 kubelet[2509]: E0706 23:46:31.067990 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:31.344402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount311909983.mount: Deactivated successfully. Jul 6 23:46:31.605169 update_engine[1442]: I20250706 23:46:31.604905 1442 update_attempter.cc:509] Updating boot flags... 
Jul 6 23:46:32.416620 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2898) Jul 6 23:46:32.454959 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2900) Jul 6 23:46:32.523307 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2900) Jul 6 23:46:32.573697 kubelet[2509]: E0706 23:46:32.573642 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:33.070940 kubelet[2509]: E0706 23:46:33.070891 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:33.411641 kubelet[2509]: E0706 23:46:33.411472 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:36.366850 containerd[1452]: time="2025-07-06T23:46:36.366773912Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:36.367505 containerd[1452]: time="2025-07-06T23:46:36.367451979Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 6 23:46:36.368600 containerd[1452]: time="2025-07-06T23:46:36.368566070Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:36.370249 containerd[1452]: time="2025-07-06T23:46:36.370184663Z" level=info msg="Pulled image 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.688586319s" Jul 6 23:46:36.370249 containerd[1452]: time="2025-07-06T23:46:36.370237607Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 6 23:46:36.374134 containerd[1452]: time="2025-07-06T23:46:36.374103403Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:46:36.386191 containerd[1452]: time="2025-07-06T23:46:36.386135859Z" level=info msg="CreateContainer within sandbox \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:46:36.400241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542927852.mount: Deactivated successfully. 
Jul 6 23:46:36.401578 containerd[1452]: time="2025-07-06T23:46:36.401528767Z" level=info msg="CreateContainer within sandbox \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3\"" Jul 6 23:46:36.402083 containerd[1452]: time="2025-07-06T23:46:36.402050647Z" level=info msg="StartContainer for \"6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3\"" Jul 6 23:46:36.437139 systemd[1]: Started cri-containerd-6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3.scope - libcontainer container 6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3. Jul 6 23:46:36.466048 containerd[1452]: time="2025-07-06T23:46:36.465994511Z" level=info msg="StartContainer for \"6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3\" returns successfully" Jul 6 23:46:36.481241 systemd[1]: cri-containerd-6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3.scope: Deactivated successfully. 
Jul 6 23:46:36.925122 containerd[1452]: time="2025-07-06T23:46:36.922567733Z" level=info msg="shim disconnected" id=6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3 namespace=k8s.io Jul 6 23:46:36.925401 containerd[1452]: time="2025-07-06T23:46:36.925133444Z" level=warning msg="cleaning up after shim disconnected" id=6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3 namespace=k8s.io Jul 6 23:46:36.925401 containerd[1452]: time="2025-07-06T23:46:36.925154449Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:46:37.080813 kubelet[2509]: E0706 23:46:37.080757 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:37.086609 containerd[1452]: time="2025-07-06T23:46:37.086319199Z" level=info msg="CreateContainer within sandbox \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:46:37.100712 containerd[1452]: time="2025-07-06T23:46:37.100644753Z" level=info msg="CreateContainer within sandbox \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964\"" Jul 6 23:46:37.101835 containerd[1452]: time="2025-07-06T23:46:37.101033620Z" level=info msg="StartContainer for \"cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964\"" Jul 6 23:46:37.132070 systemd[1]: Started cri-containerd-cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964.scope - libcontainer container cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964. 
Jul 6 23:46:37.161200 containerd[1452]: time="2025-07-06T23:46:37.161161022Z" level=info msg="StartContainer for \"cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964\" returns successfully" Jul 6 23:46:37.172482 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:46:37.172729 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:46:37.172804 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:46:37.180864 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:46:37.181157 systemd[1]: cri-containerd-cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964.scope: Deactivated successfully. Jul 6 23:46:37.202196 containerd[1452]: time="2025-07-06T23:46:37.202118911Z" level=info msg="shim disconnected" id=cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964 namespace=k8s.io Jul 6 23:46:37.202196 containerd[1452]: time="2025-07-06T23:46:37.202183060Z" level=warning msg="cleaning up after shim disconnected" id=cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964 namespace=k8s.io Jul 6 23:46:37.202196 containerd[1452]: time="2025-07-06T23:46:37.202192771Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:46:37.205447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:46:37.397789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3-rootfs.mount: Deactivated successfully. Jul 6 23:46:37.952202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount979769477.mount: Deactivated successfully. 
Jul 6 23:46:38.091280 kubelet[2509]: E0706 23:46:38.091239 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:46:38.098600 containerd[1452]: time="2025-07-06T23:46:38.098530436Z" level=info msg="CreateContainer within sandbox \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:46:38.118766 containerd[1452]: time="2025-07-06T23:46:38.118708194Z" level=info msg="CreateContainer within sandbox \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566\"" Jul 6 23:46:38.119377 containerd[1452]: time="2025-07-06T23:46:38.119349506Z" level=info msg="StartContainer for \"f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566\"" Jul 6 23:46:38.156220 systemd[1]: Started cri-containerd-f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566.scope - libcontainer container f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566. Jul 6 23:46:38.198651 systemd[1]: cri-containerd-f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566.scope: Deactivated successfully. 
Jul 6 23:46:38.199549 containerd[1452]: time="2025-07-06T23:46:38.199427525Z" level=info msg="StartContainer for \"f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566\" returns successfully" Jul 6 23:46:38.459716 containerd[1452]: time="2025-07-06T23:46:38.459618498Z" level=info msg="shim disconnected" id=f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566 namespace=k8s.io Jul 6 23:46:38.459716 containerd[1452]: time="2025-07-06T23:46:38.459692256Z" level=warning msg="cleaning up after shim disconnected" id=f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566 namespace=k8s.io Jul 6 23:46:38.459716 containerd[1452]: time="2025-07-06T23:46:38.459704853Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:46:38.792978 containerd[1452]: time="2025-07-06T23:46:38.792890625Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:38.793633 containerd[1452]: time="2025-07-06T23:46:38.793556920Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 6 23:46:38.794718 containerd[1452]: time="2025-07-06T23:46:38.794689012Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:38.796094 containerd[1452]: time="2025-07-06T23:46:38.796044762Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.421905672s" Jul 6 23:46:38.796094 containerd[1452]: time="2025-07-06T23:46:38.796080989Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 6 23:46:38.801417 containerd[1452]: time="2025-07-06T23:46:38.801380945Z" level=info msg="CreateContainer within sandbox \"c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:46:38.813942 containerd[1452]: time="2025-07-06T23:46:38.813891995Z" level=info msg="CreateContainer within sandbox \"c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\"" Jul 6 23:46:38.814413 containerd[1452]: time="2025-07-06T23:46:38.814373234Z" level=info msg="StartContainer for \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\"" Jul 6 23:46:38.854057 systemd[1]: Started cri-containerd-8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1.scope - libcontainer container 8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1. 
Jul 6 23:46:38.883163 containerd[1452]: time="2025-07-06T23:46:38.883077506Z" level=info msg="StartContainer for \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\" returns successfully"
Jul 6 23:46:39.087258 kubelet[2509]: E0706 23:46:39.086853 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:39.090235 kubelet[2509]: E0706 23:46:39.090198 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:39.493463 kubelet[2509]: I0706 23:46:39.493270 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8dhwm" podStartSLOduration=1.429924929 podStartE2EDuration="14.493241955s" podCreationTimestamp="2025-07-06 23:46:25 +0000 UTC" firstStartedPulling="2025-07-06 23:46:25.733485697 +0000 UTC m=+6.806616054" lastFinishedPulling="2025-07-06 23:46:38.796802723 +0000 UTC m=+19.869933080" observedRunningTime="2025-07-06 23:46:39.493145029 +0000 UTC m=+20.566275396" watchObservedRunningTime="2025-07-06 23:46:39.493241955 +0000 UTC m=+20.566372482"
Jul 6 23:46:39.531693 containerd[1452]: time="2025-07-06T23:46:39.531626692Z" level=info msg="CreateContainer within sandbox \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:46:39.556304 containerd[1452]: time="2025-07-06T23:46:39.556229566Z" level=info msg="CreateContainer within sandbox \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791\""
Jul 6 23:46:39.557187 containerd[1452]: time="2025-07-06T23:46:39.557136586Z" level=info msg="StartContainer for \"577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791\""
Jul 6 23:46:39.633176 systemd[1]: Started cri-containerd-577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791.scope - libcontainer container 577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791.
Jul 6 23:46:39.670546 systemd[1]: cri-containerd-577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791.scope: Deactivated successfully.
Jul 6 23:46:39.676602 containerd[1452]: time="2025-07-06T23:46:39.676478044Z" level=info msg="StartContainer for \"577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791\" returns successfully"
Jul 6 23:46:39.698582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791-rootfs.mount: Deactivated successfully.
Jul 6 23:46:39.706595 containerd[1452]: time="2025-07-06T23:46:39.706529843Z" level=info msg="shim disconnected" id=577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791 namespace=k8s.io
Jul 6 23:46:39.706595 containerd[1452]: time="2025-07-06T23:46:39.706592455Z" level=warning msg="cleaning up after shim disconnected" id=577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791 namespace=k8s.io
Jul 6 23:46:39.706794 containerd[1452]: time="2025-07-06T23:46:39.706601755Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:46:40.095125 kubelet[2509]: E0706 23:46:40.094685 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:40.095125 kubelet[2509]: E0706 23:46:40.094799 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:40.215174 containerd[1452]: time="2025-07-06T23:46:40.215105967Z" level=info msg="CreateContainer within sandbox \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:46:40.282083 containerd[1452]: time="2025-07-06T23:46:40.282007284Z" level=info msg="CreateContainer within sandbox \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\""
Jul 6 23:46:40.282755 containerd[1452]: time="2025-07-06T23:46:40.282707686Z" level=info msg="StartContainer for \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\""
Jul 6 23:46:40.313178 systemd[1]: Started cri-containerd-bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094.scope - libcontainer container bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094.
Jul 6 23:46:40.350400 containerd[1452]: time="2025-07-06T23:46:40.350246774Z" level=info msg="StartContainer for \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\" returns successfully"
Jul 6 23:46:40.462424 kubelet[2509]: I0706 23:46:40.462384 2509 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 6 23:46:40.529867 systemd[1]: Created slice kubepods-burstable-pod63db5f8a_4cbb_472e_a777_2724beff5688.slice - libcontainer container kubepods-burstable-pod63db5f8a_4cbb_472e_a777_2724beff5688.slice.
Jul 6 23:46:40.545252 systemd[1]: Created slice kubepods-burstable-poda398e4f2_ffa3_449c_9ea0_52e5a55084bc.slice - libcontainer container kubepods-burstable-poda398e4f2_ffa3_449c_9ea0_52e5a55084bc.slice.
Jul 6 23:46:40.558978 kubelet[2509]: I0706 23:46:40.558883 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a398e4f2-ffa3-449c-9ea0-52e5a55084bc-config-volume\") pod \"coredns-674b8bbfcf-vxzkq\" (UID: \"a398e4f2-ffa3-449c-9ea0-52e5a55084bc\") " pod="kube-system/coredns-674b8bbfcf-vxzkq"
Jul 6 23:46:40.559403 kubelet[2509]: I0706 23:46:40.558988 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wxk8\" (UniqueName: \"kubernetes.io/projected/63db5f8a-4cbb-472e-a777-2724beff5688-kube-api-access-9wxk8\") pod \"coredns-674b8bbfcf-2n78r\" (UID: \"63db5f8a-4cbb-472e-a777-2724beff5688\") " pod="kube-system/coredns-674b8bbfcf-2n78r"
Jul 6 23:46:40.559403 kubelet[2509]: I0706 23:46:40.559021 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq8g9\" (UniqueName: \"kubernetes.io/projected/a398e4f2-ffa3-449c-9ea0-52e5a55084bc-kube-api-access-mq8g9\") pod \"coredns-674b8bbfcf-vxzkq\" (UID: \"a398e4f2-ffa3-449c-9ea0-52e5a55084bc\") " pod="kube-system/coredns-674b8bbfcf-vxzkq"
Jul 6 23:46:40.559403 kubelet[2509]: I0706 23:46:40.559042 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63db5f8a-4cbb-472e-a777-2724beff5688-config-volume\") pod \"coredns-674b8bbfcf-2n78r\" (UID: \"63db5f8a-4cbb-472e-a777-2724beff5688\") " pod="kube-system/coredns-674b8bbfcf-2n78r"
Jul 6 23:46:40.835570 kubelet[2509]: E0706 23:46:40.835505 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:40.836380 containerd[1452]: time="2025-07-06T23:46:40.836287066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2n78r,Uid:63db5f8a-4cbb-472e-a777-2724beff5688,Namespace:kube-system,Attempt:0,}"
Jul 6 23:46:40.849998 kubelet[2509]: E0706 23:46:40.849949 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:40.850652 containerd[1452]: time="2025-07-06T23:46:40.850561063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vxzkq,Uid:a398e4f2-ffa3-449c-9ea0-52e5a55084bc,Namespace:kube-system,Attempt:0,}"
Jul 6 23:46:41.100320 kubelet[2509]: E0706 23:46:41.100020 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:41.191335 kubelet[2509]: I0706 23:46:41.191255 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w7n2z" podStartSLOduration=5.498540868 podStartE2EDuration="16.19123743s" podCreationTimestamp="2025-07-06 23:46:25 +0000 UTC" firstStartedPulling="2025-07-06 23:46:25.681209172 +0000 UTC m=+6.754339519" lastFinishedPulling="2025-07-06 23:46:36.373905724 +0000 UTC m=+17.447036081" observedRunningTime="2025-07-06 23:46:41.190647487 +0000 UTC m=+22.263777854" watchObservedRunningTime="2025-07-06 23:46:41.19123743 +0000 UTC m=+22.264367787"
Jul 6 23:46:42.102242 kubelet[2509]: E0706 23:46:42.102205 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:42.814839 systemd-networkd[1401]: cilium_host: Link UP
Jul 6 23:46:42.816008 systemd-networkd[1401]: cilium_net: Link UP
Jul 6 23:46:42.816586 systemd-networkd[1401]: cilium_net: Gained carrier
Jul 6 23:46:42.816767 systemd-networkd[1401]: cilium_host: Gained carrier
Jul 6 23:46:42.816918 systemd-networkd[1401]: cilium_net: Gained IPv6LL
Jul 6 23:46:42.817146 systemd-networkd[1401]: cilium_host: Gained IPv6LL
Jul 6 23:46:42.926993 systemd-networkd[1401]: cilium_vxlan: Link UP
Jul 6 23:46:42.927151 systemd-networkd[1401]: cilium_vxlan: Gained carrier
Jul 6 23:46:43.107069 kubelet[2509]: E0706 23:46:43.104474 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:43.191955 kernel: NET: Registered PF_ALG protocol family
Jul 6 23:46:43.890542 systemd-networkd[1401]: lxc_health: Link UP
Jul 6 23:46:43.897660 systemd-networkd[1401]: lxc_health: Gained carrier
Jul 6 23:46:44.112545 systemd[1]: Started sshd@7-10.0.0.18:22-10.0.0.1:34234.service - OpenSSH per-connection server daemon (10.0.0.1:34234).
Jul 6 23:46:44.151315 sshd[3720]: Accepted publickey for core from 10.0.0.1 port 34234 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:46:44.152996 sshd[3720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:44.157001 systemd-logind[1440]: New session 8 of user core.
Jul 6 23:46:44.164064 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 6 23:46:44.407283 sshd[3720]: pam_unix(sshd:session): session closed for user core
Jul 6 23:46:44.412057 systemd[1]: sshd@7-10.0.0.18:22-10.0.0.1:34234.service: Deactivated successfully.
Jul 6 23:46:44.414148 systemd[1]: session-8.scope: Deactivated successfully.
Jul 6 23:46:44.414828 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit.
Jul 6 23:46:44.415775 systemd-logind[1440]: Removed session 8.
Jul 6 23:46:44.445078 systemd-networkd[1401]: cilium_vxlan: Gained IPv6LL
Jul 6 23:46:44.453744 systemd-networkd[1401]: lxc7535f047d4f1: Link UP
Jul 6 23:46:44.459969 kernel: eth0: renamed from tmpf2f7a
Jul 6 23:46:44.466009 systemd-networkd[1401]: lxc7535f047d4f1: Gained carrier
Jul 6 23:46:44.472507 systemd-networkd[1401]: lxca926fc39939a: Link UP
Jul 6 23:46:44.478952 kernel: eth0: renamed from tmp734ed
Jul 6 23:46:44.487015 systemd-networkd[1401]: lxca926fc39939a: Gained carrier
Jul 6 23:46:45.584140 kubelet[2509]: E0706 23:46:45.584084 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:45.788286 systemd-networkd[1401]: lxc7535f047d4f1: Gained IPv6LL
Jul 6 23:46:45.791076 systemd-networkd[1401]: lxc_health: Gained IPv6LL
Jul 6 23:46:46.109719 kubelet[2509]: E0706 23:46:46.109664 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:46.236232 systemd-networkd[1401]: lxca926fc39939a: Gained IPv6LL
Jul 6 23:46:47.880693 containerd[1452]: time="2025-07-06T23:46:47.880575812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:46:47.882048 containerd[1452]: time="2025-07-06T23:46:47.881395640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:46:47.882048 containerd[1452]: time="2025-07-06T23:46:47.881582023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:46:47.882048 containerd[1452]: time="2025-07-06T23:46:47.881638901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:46:47.882048 containerd[1452]: time="2025-07-06T23:46:47.881771654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:46:47.882260 containerd[1452]: time="2025-07-06T23:46:47.881857341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:46:47.882260 containerd[1452]: time="2025-07-06T23:46:47.881885799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:46:47.882260 containerd[1452]: time="2025-07-06T23:46:47.881988821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:46:47.915103 systemd[1]: Started cri-containerd-734edab67db0664dc1aa50e46c18417e613eae1a9559f51dbb8b98f1582ed60b.scope - libcontainer container 734edab67db0664dc1aa50e46c18417e613eae1a9559f51dbb8b98f1582ed60b.
Jul 6 23:46:47.917453 systemd[1]: Started cri-containerd-f2f7a7f55c9cc769d7a0d1bb9c20c47e0bf0334904e610fd28ab5d03c0e55885.scope - libcontainer container f2f7a7f55c9cc769d7a0d1bb9c20c47e0bf0334904e610fd28ab5d03c0e55885.
Jul 6 23:46:47.929106 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:46:47.932714 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:46:47.961209 containerd[1452]: time="2025-07-06T23:46:47.961149520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2n78r,Uid:63db5f8a-4cbb-472e-a777-2724beff5688,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2f7a7f55c9cc769d7a0d1bb9c20c47e0bf0334904e610fd28ab5d03c0e55885\""
Jul 6 23:46:47.962261 kubelet[2509]: E0706 23:46:47.962094 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:47.964059 containerd[1452]: time="2025-07-06T23:46:47.963858828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vxzkq,Uid:a398e4f2-ffa3-449c-9ea0-52e5a55084bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"734edab67db0664dc1aa50e46c18417e613eae1a9559f51dbb8b98f1582ed60b\""
Jul 6 23:46:47.965039 kubelet[2509]: E0706 23:46:47.964994 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:47.967999 containerd[1452]: time="2025-07-06T23:46:47.967956436Z" level=info msg="CreateContainer within sandbox \"f2f7a7f55c9cc769d7a0d1bb9c20c47e0bf0334904e610fd28ab5d03c0e55885\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:46:47.970882 containerd[1452]: time="2025-07-06T23:46:47.970830264Z" level=info msg="CreateContainer within sandbox \"734edab67db0664dc1aa50e46c18417e613eae1a9559f51dbb8b98f1582ed60b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:46:47.986452 containerd[1452]: time="2025-07-06T23:46:47.986405695Z" level=info msg="CreateContainer within sandbox \"f2f7a7f55c9cc769d7a0d1bb9c20c47e0bf0334904e610fd28ab5d03c0e55885\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d95f57d902508e497306de8491dabe399e4ea8c3e6784101b8f1be3f47ec5c2f\""
Jul 6 23:46:47.987189 containerd[1452]: time="2025-07-06T23:46:47.987088370Z" level=info msg="StartContainer for \"d95f57d902508e497306de8491dabe399e4ea8c3e6784101b8f1be3f47ec5c2f\""
Jul 6 23:46:47.991252 containerd[1452]: time="2025-07-06T23:46:47.991211260Z" level=info msg="CreateContainer within sandbox \"734edab67db0664dc1aa50e46c18417e613eae1a9559f51dbb8b98f1582ed60b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee4515973a75ae37d0d923eadc91583b19ee50e441243a67b0d473072204ddcb\""
Jul 6 23:46:47.992401 containerd[1452]: time="2025-07-06T23:46:47.992328500Z" level=info msg="StartContainer for \"ee4515973a75ae37d0d923eadc91583b19ee50e441243a67b0d473072204ddcb\""
Jul 6 23:46:48.022154 systemd[1]: Started cri-containerd-d95f57d902508e497306de8491dabe399e4ea8c3e6784101b8f1be3f47ec5c2f.scope - libcontainer container d95f57d902508e497306de8491dabe399e4ea8c3e6784101b8f1be3f47ec5c2f.
Jul 6 23:46:48.025999 systemd[1]: Started cri-containerd-ee4515973a75ae37d0d923eadc91583b19ee50e441243a67b0d473072204ddcb.scope - libcontainer container ee4515973a75ae37d0d923eadc91583b19ee50e441243a67b0d473072204ddcb.
Jul 6 23:46:48.069002 containerd[1452]: time="2025-07-06T23:46:48.068961122Z" level=info msg="StartContainer for \"ee4515973a75ae37d0d923eadc91583b19ee50e441243a67b0d473072204ddcb\" returns successfully"
Jul 6 23:46:48.069343 containerd[1452]: time="2025-07-06T23:46:48.069029032Z" level=info msg="StartContainer for \"d95f57d902508e497306de8491dabe399e4ea8c3e6784101b8f1be3f47ec5c2f\" returns successfully"
Jul 6 23:46:48.115328 kubelet[2509]: E0706 23:46:48.115283 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:48.118115 kubelet[2509]: E0706 23:46:48.117534 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:48.143296 kubelet[2509]: I0706 23:46:48.143119 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vxzkq" podStartSLOduration=23.143100122 podStartE2EDuration="23.143100122s" podCreationTimestamp="2025-07-06 23:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:48.141546332 +0000 UTC m=+29.214676699" watchObservedRunningTime="2025-07-06 23:46:48.143100122 +0000 UTC m=+29.216230479"
Jul 6 23:46:48.887636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3577700441.mount: Deactivated successfully.
Jul 6 23:46:49.119705 kubelet[2509]: E0706 23:46:49.119647 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:49.121145 kubelet[2509]: E0706 23:46:49.120458 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:49.338310 kubelet[2509]: I0706 23:46:49.338176 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2n78r" podStartSLOduration=24.338147083 podStartE2EDuration="24.338147083s" podCreationTimestamp="2025-07-06 23:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:48.157576067 +0000 UTC m=+29.230706424" watchObservedRunningTime="2025-07-06 23:46:49.338147083 +0000 UTC m=+30.411277440"
Jul 6 23:46:49.433382 systemd[1]: Started sshd@8-10.0.0.18:22-10.0.0.1:34248.service - OpenSSH per-connection server daemon (10.0.0.1:34248).
Jul 6 23:46:49.471958 sshd[3941]: Accepted publickey for core from 10.0.0.1 port 34248 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:46:49.474043 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:49.478897 systemd-logind[1440]: New session 9 of user core.
Jul 6 23:46:49.489250 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 6 23:46:49.685779 sshd[3941]: pam_unix(sshd:session): session closed for user core
Jul 6 23:46:49.690372 systemd[1]: sshd@8-10.0.0.18:22-10.0.0.1:34248.service: Deactivated successfully.
Jul 6 23:46:49.692601 systemd[1]: session-9.scope: Deactivated successfully.
Jul 6 23:46:49.693378 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit.
Jul 6 23:46:49.694482 systemd-logind[1440]: Removed session 9.
Jul 6 23:46:50.121099 kubelet[2509]: E0706 23:46:50.121045 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:50.121099 kubelet[2509]: E0706 23:46:50.121076 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:46:54.698202 systemd[1]: Started sshd@9-10.0.0.18:22-10.0.0.1:56444.service - OpenSSH per-connection server daemon (10.0.0.1:56444).
Jul 6 23:46:54.735694 sshd[3962]: Accepted publickey for core from 10.0.0.1 port 56444 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:46:54.737596 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:54.742272 systemd-logind[1440]: New session 10 of user core.
Jul 6 23:46:54.756106 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:46:54.879371 sshd[3962]: pam_unix(sshd:session): session closed for user core
Jul 6 23:46:54.883909 systemd[1]: sshd@9-10.0.0.18:22-10.0.0.1:56444.service: Deactivated successfully.
Jul 6 23:46:54.886337 systemd[1]: session-10.scope: Deactivated successfully.
Jul 6 23:46:54.887128 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit.
Jul 6 23:46:54.888224 systemd-logind[1440]: Removed session 10.
Jul 6 23:46:59.895214 systemd[1]: Started sshd@10-10.0.0.18:22-10.0.0.1:49590.service - OpenSSH per-connection server daemon (10.0.0.1:49590).
Jul 6 23:46:59.929869 sshd[3979]: Accepted publickey for core from 10.0.0.1 port 49590 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:46:59.931541 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:59.935669 systemd-logind[1440]: New session 11 of user core.
Jul 6 23:46:59.945067 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 6 23:47:00.070097 sshd[3979]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:00.074474 systemd[1]: sshd@10-10.0.0.18:22-10.0.0.1:49590.service: Deactivated successfully.
Jul 6 23:47:00.076391 systemd[1]: session-11.scope: Deactivated successfully.
Jul 6 23:47:00.076999 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit.
Jul 6 23:47:00.077950 systemd-logind[1440]: Removed session 11.
Jul 6 23:47:05.087826 systemd[1]: Started sshd@11-10.0.0.18:22-10.0.0.1:49606.service - OpenSSH per-connection server daemon (10.0.0.1:49606).
Jul 6 23:47:05.124557 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 49606 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:05.126189 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:05.130242 systemd-logind[1440]: New session 12 of user core.
Jul 6 23:47:05.142044 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:47:05.259497 sshd[3994]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:05.272718 systemd[1]: sshd@11-10.0.0.18:22-10.0.0.1:49606.service: Deactivated successfully.
Jul 6 23:47:05.274441 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:47:05.275798 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:47:05.277457 systemd[1]: Started sshd@12-10.0.0.18:22-10.0.0.1:49620.service - OpenSSH per-connection server daemon (10.0.0.1:49620).
Jul 6 23:47:05.278365 systemd-logind[1440]: Removed session 12.
Jul 6 23:47:05.309972 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 49620 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:05.311400 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:05.315826 systemd-logind[1440]: New session 13 of user core.
Jul 6 23:47:05.327055 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:47:05.566206 sshd[4009]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:05.581306 systemd[1]: sshd@12-10.0.0.18:22-10.0.0.1:49620.service: Deactivated successfully.
Jul 6 23:47:05.585630 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:47:05.588440 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:47:05.594589 systemd[1]: Started sshd@13-10.0.0.18:22-10.0.0.1:49624.service - OpenSSH per-connection server daemon (10.0.0.1:49624).
Jul 6 23:47:05.595716 systemd-logind[1440]: Removed session 13.
Jul 6 23:47:05.628257 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 49624 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:05.629998 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:05.634347 systemd-logind[1440]: New session 14 of user core.
Jul 6 23:47:05.645098 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:47:05.771388 sshd[4021]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:05.775589 systemd[1]: sshd@13-10.0.0.18:22-10.0.0.1:49624.service: Deactivated successfully.
Jul 6 23:47:05.777704 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:47:05.778343 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:47:05.779186 systemd-logind[1440]: Removed session 14.
Jul 6 23:47:10.783736 systemd[1]: Started sshd@14-10.0.0.18:22-10.0.0.1:41958.service - OpenSSH per-connection server daemon (10.0.0.1:41958).
Jul 6 23:47:10.817159 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 41958 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:10.819043 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:10.823075 systemd-logind[1440]: New session 15 of user core.
Jul 6 23:47:10.833065 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:47:10.946258 sshd[4035]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:10.950873 systemd[1]: sshd@14-10.0.0.18:22-10.0.0.1:41958.service: Deactivated successfully.
Jul 6 23:47:10.953035 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:47:10.953638 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:47:10.954507 systemd-logind[1440]: Removed session 15.
Jul 6 23:47:15.962239 systemd[1]: Started sshd@15-10.0.0.18:22-10.0.0.1:41980.service - OpenSSH per-connection server daemon (10.0.0.1:41980).
Jul 6 23:47:15.996281 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 41980 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:15.998410 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:16.003396 systemd-logind[1440]: New session 16 of user core.
Jul 6 23:47:16.029089 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:47:16.146775 sshd[4050]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:16.156136 systemd[1]: sshd@15-10.0.0.18:22-10.0.0.1:41980.service: Deactivated successfully.
Jul 6 23:47:16.159050 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:47:16.160953 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:47:16.172459 systemd[1]: Started sshd@16-10.0.0.18:22-10.0.0.1:41988.service - OpenSSH per-connection server daemon (10.0.0.1:41988).
Jul 6 23:47:16.173645 systemd-logind[1440]: Removed session 16.
Jul 6 23:47:16.206966 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 41988 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:16.209062 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:16.213986 systemd-logind[1440]: New session 17 of user core.
Jul 6 23:47:16.226100 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:47:16.501050 sshd[4064]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:16.509729 systemd[1]: sshd@16-10.0.0.18:22-10.0.0.1:41988.service: Deactivated successfully.
Jul 6 23:47:16.512257 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:47:16.514111 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:47:16.519326 systemd[1]: Started sshd@17-10.0.0.18:22-10.0.0.1:41998.service - OpenSSH per-connection server daemon (10.0.0.1:41998).
Jul 6 23:47:16.520466 systemd-logind[1440]: Removed session 17.
Jul 6 23:47:16.553316 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 41998 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:16.555051 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:16.560047 systemd-logind[1440]: New session 18 of user core.
Jul 6 23:47:16.574241 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:47:17.460399 sshd[4077]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:17.468793 systemd[1]: sshd@17-10.0.0.18:22-10.0.0.1:41998.service: Deactivated successfully.
Jul 6 23:47:17.474861 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:47:17.477610 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:47:17.488400 systemd[1]: Started sshd@18-10.0.0.18:22-10.0.0.1:42012.service - OpenSSH per-connection server daemon (10.0.0.1:42012).
Jul 6 23:47:17.488952 systemd-logind[1440]: Removed session 18.
Jul 6 23:47:17.516833 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 42012 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:17.518669 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:17.523010 systemd-logind[1440]: New session 19 of user core.
Jul 6 23:47:17.535149 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:47:17.872669 sshd[4097]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:17.882241 systemd[1]: sshd@18-10.0.0.18:22-10.0.0.1:42012.service: Deactivated successfully.
Jul 6 23:47:17.884444 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:47:17.887248 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:47:17.895303 systemd[1]: Started sshd@19-10.0.0.18:22-10.0.0.1:42024.service - OpenSSH per-connection server daemon (10.0.0.1:42024).
Jul 6 23:47:17.896351 systemd-logind[1440]: Removed session 19.
Jul 6 23:47:17.924518 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 42024 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:17.926217 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:17.930234 systemd-logind[1440]: New session 20 of user core.
Jul 6 23:47:17.937050 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:47:18.046165 sshd[4109]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:18.050593 systemd[1]: sshd@19-10.0.0.18:22-10.0.0.1:42024.service: Deactivated successfully.
Jul 6 23:47:18.052794 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:47:18.053521 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:47:18.054420 systemd-logind[1440]: Removed session 20.
Jul 6 23:47:23.061589 systemd[1]: Started sshd@20-10.0.0.18:22-10.0.0.1:59318.service - OpenSSH per-connection server daemon (10.0.0.1:59318).
Jul 6 23:47:23.105737 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 59318 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:23.107255 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:23.111126 systemd-logind[1440]: New session 21 of user core.
Jul 6 23:47:23.121091 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:47:23.249814 sshd[4126]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:23.254237 systemd[1]: sshd@20-10.0.0.18:22-10.0.0.1:59318.service: Deactivated successfully.
Jul 6 23:47:23.256434 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:47:23.257374 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:47:23.258292 systemd-logind[1440]: Removed session 21.
Jul 6 23:47:28.263109 systemd[1]: Started sshd@21-10.0.0.18:22-10.0.0.1:59356.service - OpenSSH per-connection server daemon (10.0.0.1:59356).
Jul 6 23:47:28.307895 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 59356 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:28.309863 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:28.314993 systemd-logind[1440]: New session 22 of user core.
Jul 6 23:47:28.327129 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:47:28.438277 sshd[4145]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:28.444051 systemd[1]: sshd@21-10.0.0.18:22-10.0.0.1:59356.service: Deactivated successfully.
Jul 6 23:47:28.446516 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:47:28.447355 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:47:28.448631 systemd-logind[1440]: Removed session 22.
Jul 6 23:47:33.450460 systemd[1]: Started sshd@22-10.0.0.18:22-10.0.0.1:54758.service - OpenSSH per-connection server daemon (10.0.0.1:54758).
Jul 6 23:47:33.485676 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 54758 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:47:33.488005 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:47:33.493410 systemd-logind[1440]: New session 23 of user core. Jul 6 23:47:33.501317 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:47:33.625147 sshd[4159]: pam_unix(sshd:session): session closed for user core Jul 6 23:47:33.643206 systemd[1]: sshd@22-10.0.0.18:22-10.0.0.1:54758.service: Deactivated successfully. Jul 6 23:47:33.645380 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:47:33.647047 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:47:33.657387 systemd[1]: Started sshd@23-10.0.0.18:22-10.0.0.1:54764.service - OpenSSH per-connection server daemon (10.0.0.1:54764). Jul 6 23:47:33.658700 systemd-logind[1440]: Removed session 23. Jul 6 23:47:33.690184 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 54764 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:47:33.691999 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:47:33.697171 systemd-logind[1440]: New session 24 of user core. Jul 6 23:47:33.708070 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:47:35.166437 containerd[1452]: time="2025-07-06T23:47:35.166374759Z" level=info msg="StopContainer for \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\" with timeout 30 (s)" Jul 6 23:47:35.167050 containerd[1452]: time="2025-07-06T23:47:35.166727516Z" level=info msg="Stop container \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\" with signal terminated" Jul 6 23:47:35.207463 systemd[1]: cri-containerd-8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1.scope: Deactivated successfully. 
Jul 6 23:47:35.230594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1-rootfs.mount: Deactivated successfully. Jul 6 23:47:35.231291 containerd[1452]: time="2025-07-06T23:47:35.230794161Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:47:35.239970 containerd[1452]: time="2025-07-06T23:47:35.239907510Z" level=info msg="StopContainer for \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\" with timeout 2 (s)" Jul 6 23:47:35.240199 containerd[1452]: time="2025-07-06T23:47:35.240176383Z" level=info msg="Stop container \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\" with signal terminated" Jul 6 23:47:35.248677 systemd-networkd[1401]: lxc_health: Link DOWN Jul 6 23:47:35.248692 systemd-networkd[1401]: lxc_health: Lost carrier Jul 6 23:47:35.301511 systemd[1]: cri-containerd-bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094.scope: Deactivated successfully. Jul 6 23:47:35.301912 systemd[1]: cri-containerd-bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094.scope: Consumed 7.032s CPU time. Jul 6 23:47:35.322222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094-rootfs.mount: Deactivated successfully. 
Jul 6 23:47:35.475099 containerd[1452]: time="2025-07-06T23:47:35.474868729Z" level=info msg="shim disconnected" id=8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1 namespace=k8s.io Jul 6 23:47:35.475099 containerd[1452]: time="2025-07-06T23:47:35.474990004Z" level=warning msg="cleaning up after shim disconnected" id=8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1 namespace=k8s.io Jul 6 23:47:35.475099 containerd[1452]: time="2025-07-06T23:47:35.475011955Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:47:35.518278 containerd[1452]: time="2025-07-06T23:47:35.518196048Z" level=info msg="shim disconnected" id=bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094 namespace=k8s.io Jul 6 23:47:35.518278 containerd[1452]: time="2025-07-06T23:47:35.518257913Z" level=warning msg="cleaning up after shim disconnected" id=bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094 namespace=k8s.io Jul 6 23:47:35.518278 containerd[1452]: time="2025-07-06T23:47:35.518270416Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:47:35.585765 containerd[1452]: time="2025-07-06T23:47:35.585702449Z" level=info msg="StopContainer for \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\" returns successfully" Jul 6 23:47:35.586558 containerd[1452]: time="2025-07-06T23:47:35.586517680Z" level=info msg="StopPodSandbox for \"c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546\"" Jul 6 23:47:35.586614 containerd[1452]: time="2025-07-06T23:47:35.586585496Z" level=info msg="Container to stop \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:47:35.588654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546-shm.mount: Deactivated successfully. 
Jul 6 23:47:35.590044 containerd[1452]: time="2025-07-06T23:47:35.590014352Z" level=info msg="StopContainer for \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\" returns successfully" Jul 6 23:47:35.590547 containerd[1452]: time="2025-07-06T23:47:35.590449454Z" level=info msg="StopPodSandbox for \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\"" Jul 6 23:47:35.590547 containerd[1452]: time="2025-07-06T23:47:35.590480702Z" level=info msg="Container to stop \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:47:35.590547 containerd[1452]: time="2025-07-06T23:47:35.590491552Z" level=info msg="Container to stop \"577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:47:35.590547 containerd[1452]: time="2025-07-06T23:47:35.590501090Z" level=info msg="Container to stop \"6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:47:35.590547 containerd[1452]: time="2025-07-06T23:47:35.590509967Z" level=info msg="Container to stop \"cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:47:35.590547 containerd[1452]: time="2025-07-06T23:47:35.590520126Z" level=info msg="Container to stop \"f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:47:35.592267 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17-shm.mount: Deactivated successfully. Jul 6 23:47:35.597143 systemd[1]: cri-containerd-e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17.scope: Deactivated successfully. 
Jul 6 23:47:35.598232 systemd[1]: cri-containerd-c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546.scope: Deactivated successfully. Jul 6 23:47:35.626120 containerd[1452]: time="2025-07-06T23:47:35.626043879Z" level=info msg="shim disconnected" id=c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546 namespace=k8s.io Jul 6 23:47:35.626478 containerd[1452]: time="2025-07-06T23:47:35.626102378Z" level=info msg="shim disconnected" id=e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17 namespace=k8s.io Jul 6 23:47:35.626478 containerd[1452]: time="2025-07-06T23:47:35.626180704Z" level=warning msg="cleaning up after shim disconnected" id=e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17 namespace=k8s.io Jul 6 23:47:35.626478 containerd[1452]: time="2025-07-06T23:47:35.626192105Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:47:35.626478 containerd[1452]: time="2025-07-06T23:47:35.626109481Z" level=warning msg="cleaning up after shim disconnected" id=c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546 namespace=k8s.io Jul 6 23:47:35.626478 containerd[1452]: time="2025-07-06T23:47:35.626462118Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:47:35.659276 containerd[1452]: time="2025-07-06T23:47:35.657806025Z" level=info msg="TearDown network for sandbox \"c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546\" successfully" Jul 6 23:47:35.659276 containerd[1452]: time="2025-07-06T23:47:35.657865176Z" level=info msg="StopPodSandbox for \"c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546\" returns successfully" Jul 6 23:47:35.661503 containerd[1452]: time="2025-07-06T23:47:35.661404888Z" level=info msg="TearDown network for sandbox \"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\" successfully" Jul 6 23:47:35.661503 containerd[1452]: time="2025-07-06T23:47:35.661477604Z" level=info msg="StopPodSandbox for 
\"e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17\" returns successfully" Jul 6 23:47:35.701538 kubelet[2509]: I0706 23:47:35.701466 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cilium-cgroup\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.701538 kubelet[2509]: I0706 23:47:35.701516 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-host-proc-sys-kernel\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.701538 kubelet[2509]: I0706 23:47:35.701544 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5b6t\" (UniqueName: \"kubernetes.io/projected/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-kube-api-access-f5b6t\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.702176 kubelet[2509]: I0706 23:47:35.701562 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttdnv\" (UniqueName: \"kubernetes.io/projected/e6202cf7-5f5c-40a7-af62-43824356eaef-kube-api-access-ttdnv\") pod \"e6202cf7-5f5c-40a7-af62-43824356eaef\" (UID: \"e6202cf7-5f5c-40a7-af62-43824356eaef\") " Jul 6 23:47:35.702176 kubelet[2509]: I0706 23:47:35.701580 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cilium-config-path\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.702176 kubelet[2509]: I0706 23:47:35.701595 2509 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-etc-cni-netd\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.702176 kubelet[2509]: I0706 23:47:35.701612 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-hostproc\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.702176 kubelet[2509]: I0706 23:47:35.701633 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-hubble-tls\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.702176 kubelet[2509]: I0706 23:47:35.701654 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cni-path\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.702325 kubelet[2509]: I0706 23:47:35.701651 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:47:35.702325 kubelet[2509]: I0706 23:47:35.701681 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-host-proc-sys-net\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.702325 kubelet[2509]: I0706 23:47:35.701734 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:47:35.702325 kubelet[2509]: I0706 23:47:35.701757 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-xtables-lock\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.702325 kubelet[2509]: I0706 23:47:35.701779 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-clustermesh-secrets\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.702457 kubelet[2509]: I0706 23:47:35.701798 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-lib-modules\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.702457 kubelet[2509]: I0706 23:47:35.701818 2509 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cilium-run\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.702457 kubelet[2509]: I0706 23:47:35.701834 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6202cf7-5f5c-40a7-af62-43824356eaef-cilium-config-path\") pod \"e6202cf7-5f5c-40a7-af62-43824356eaef\" (UID: \"e6202cf7-5f5c-40a7-af62-43824356eaef\") " Jul 6 23:47:35.702457 kubelet[2509]: I0706 23:47:35.701855 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-bpf-maps\") pod \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\" (UID: \"aeec81f1-9f8f-4aa1-86f4-45ef34453f42\") " Jul 6 23:47:35.702457 kubelet[2509]: I0706 23:47:35.701901 2509 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.702457 kubelet[2509]: I0706 23:47:35.701911 2509 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.702592 kubelet[2509]: I0706 23:47:35.701965 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:47:35.702592 kubelet[2509]: I0706 23:47:35.702011 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:47:35.702592 kubelet[2509]: I0706 23:47:35.702030 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:47:35.715754 kubelet[2509]: I0706 23:47:35.715396 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:47:35.715754 kubelet[2509]: I0706 23:47:35.715426 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:47:35.715754 kubelet[2509]: I0706 23:47:35.715411 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:47:35.715754 kubelet[2509]: I0706 23:47:35.715454 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-hostproc" (OuterVolumeSpecName: "hostproc") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:47:35.719130 kubelet[2509]: I0706 23:47:35.719099 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:47:35.719191 kubelet[2509]: I0706 23:47:35.719144 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cni-path" (OuterVolumeSpecName: "cni-path") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:47:35.719462 kubelet[2509]: I0706 23:47:35.719318 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6202cf7-5f5c-40a7-af62-43824356eaef-kube-api-access-ttdnv" (OuterVolumeSpecName: "kube-api-access-ttdnv") pod "e6202cf7-5f5c-40a7-af62-43824356eaef" (UID: "e6202cf7-5f5c-40a7-af62-43824356eaef"). InnerVolumeSpecName "kube-api-access-ttdnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:47:35.719462 kubelet[2509]: I0706 23:47:35.719370 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:47:35.719537 kubelet[2509]: I0706 23:47:35.719520 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6202cf7-5f5c-40a7-af62-43824356eaef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e6202cf7-5f5c-40a7-af62-43824356eaef" (UID: "e6202cf7-5f5c-40a7-af62-43824356eaef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:47:35.721807 kubelet[2509]: I0706 23:47:35.721772 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:47:35.722248 kubelet[2509]: I0706 23:47:35.722203 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-kube-api-access-f5b6t" (OuterVolumeSpecName: "kube-api-access-f5b6t") pod "aeec81f1-9f8f-4aa1-86f4-45ef34453f42" (UID: "aeec81f1-9f8f-4aa1-86f4-45ef34453f42"). InnerVolumeSpecName "kube-api-access-f5b6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:47:35.802654 kubelet[2509]: I0706 23:47:35.802611 2509 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f5b6t\" (UniqueName: \"kubernetes.io/projected/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-kube-api-access-f5b6t\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.802654 kubelet[2509]: I0706 23:47:35.802644 2509 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ttdnv\" (UniqueName: \"kubernetes.io/projected/e6202cf7-5f5c-40a7-af62-43824356eaef-kube-api-access-ttdnv\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.802654 kubelet[2509]: I0706 23:47:35.802653 2509 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.802805 kubelet[2509]: I0706 23:47:35.802665 2509 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.802805 kubelet[2509]: I0706 23:47:35.802676 2509 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.802805 kubelet[2509]: I0706 23:47:35.802686 2509 reconciler_common.go:299] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.802805 kubelet[2509]: I0706 23:47:35.802694 2509 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.802805 kubelet[2509]: I0706 23:47:35.802702 2509 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.802805 kubelet[2509]: I0706 23:47:35.802710 2509 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.802805 kubelet[2509]: I0706 23:47:35.802717 2509 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.802805 kubelet[2509]: I0706 23:47:35.802725 2509 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.803025 kubelet[2509]: I0706 23:47:35.802733 2509 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6202cf7-5f5c-40a7-af62-43824356eaef-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:35.803025 kubelet[2509]: I0706 23:47:35.802741 2509 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-bpf-maps\") on node \"localhost\" 
DevicePath \"\"" Jul 6 23:47:35.803025 kubelet[2509]: I0706 23:47:35.802749 2509 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aeec81f1-9f8f-4aa1-86f4-45ef34453f42-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 6 23:47:36.205264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c28b0c60633800d3e68f2768292fe5590926639de05fc194a8481c4af6825546-rootfs.mount: Deactivated successfully. Jul 6 23:47:36.205397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3603706e9528bc2f769edad21603659320b0b191da5430b6c0125da3e1d4b17-rootfs.mount: Deactivated successfully. Jul 6 23:47:36.205490 systemd[1]: var-lib-kubelet-pods-e6202cf7\x2d5f5c\x2d40a7\x2daf62\x2d43824356eaef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dttdnv.mount: Deactivated successfully. Jul 6 23:47:36.205568 systemd[1]: var-lib-kubelet-pods-aeec81f1\x2d9f8f\x2d4aa1\x2d86f4\x2d45ef34453f42-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df5b6t.mount: Deactivated successfully. Jul 6 23:47:36.205650 systemd[1]: var-lib-kubelet-pods-aeec81f1\x2d9f8f\x2d4aa1\x2d86f4\x2d45ef34453f42-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:47:36.205727 systemd[1]: var-lib-kubelet-pods-aeec81f1\x2d9f8f\x2d4aa1\x2d86f4\x2d45ef34453f42-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 6 23:47:36.220489 kubelet[2509]: I0706 23:47:36.220450 2509 scope.go:117] "RemoveContainer" containerID="8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1" Jul 6 23:47:36.221970 containerd[1452]: time="2025-07-06T23:47:36.221889177Z" level=info msg="RemoveContainer for \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\"" Jul 6 23:47:36.229041 systemd[1]: Removed slice kubepods-besteffort-pode6202cf7_5f5c_40a7_af62_43824356eaef.slice - libcontainer container kubepods-besteffort-pode6202cf7_5f5c_40a7_af62_43824356eaef.slice. Jul 6 23:47:36.235763 systemd[1]: Removed slice kubepods-burstable-podaeec81f1_9f8f_4aa1_86f4_45ef34453f42.slice - libcontainer container kubepods-burstable-podaeec81f1_9f8f_4aa1_86f4_45ef34453f42.slice. Jul 6 23:47:36.236051 systemd[1]: kubepods-burstable-podaeec81f1_9f8f_4aa1_86f4_45ef34453f42.slice: Consumed 7.147s CPU time. Jul 6 23:47:36.238749 containerd[1452]: time="2025-07-06T23:47:36.238702028Z" level=info msg="RemoveContainer for \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\" returns successfully" Jul 6 23:47:36.239031 kubelet[2509]: I0706 23:47:36.238999 2509 scope.go:117] "RemoveContainer" containerID="8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1" Jul 6 23:47:36.243405 containerd[1452]: time="2025-07-06T23:47:36.243228944Z" level=error msg="ContainerStatus for \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\": not found" Jul 6 23:47:36.243837 kubelet[2509]: E0706 23:47:36.243506 2509 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\": not found" containerID="8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1" 
Jul 6 23:47:36.243837 kubelet[2509]: I0706 23:47:36.243544 2509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1"} err="failed to get container status \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d8a9b6ef08ca8a833e90d4085eea29dc3317e78e983f411ed5315b41e21e9b1\": not found"
Jul 6 23:47:36.243837 kubelet[2509]: I0706 23:47:36.243602 2509 scope.go:117] "RemoveContainer" containerID="bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094"
Jul 6 23:47:36.245046 containerd[1452]: time="2025-07-06T23:47:36.245008878Z" level=info msg="RemoveContainer for \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\""
Jul 6 23:47:36.249237 containerd[1452]: time="2025-07-06T23:47:36.249051690Z" level=info msg="RemoveContainer for \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\" returns successfully"
Jul 6 23:47:36.249859 kubelet[2509]: I0706 23:47:36.249737 2509 scope.go:117] "RemoveContainer" containerID="577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791"
Jul 6 23:47:36.251072 containerd[1452]: time="2025-07-06T23:47:36.251000830Z" level=info msg="RemoveContainer for \"577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791\""
Jul 6 23:47:36.255902 containerd[1452]: time="2025-07-06T23:47:36.255858894Z" level=info msg="RemoveContainer for \"577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791\" returns successfully"
Jul 6 23:47:36.256111 kubelet[2509]: I0706 23:47:36.256075 2509 scope.go:117] "RemoveContainer" containerID="f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566"
Jul 6 23:47:36.257294 containerd[1452]: time="2025-07-06T23:47:36.257253358Z" level=info msg="RemoveContainer for \"f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566\""
Jul 6 23:47:36.261333 containerd[1452]: time="2025-07-06T23:47:36.261296409Z" level=info msg="RemoveContainer for \"f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566\" returns successfully"
Jul 6 23:47:36.261482 kubelet[2509]: I0706 23:47:36.261458 2509 scope.go:117] "RemoveContainer" containerID="cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964"
Jul 6 23:47:36.262335 containerd[1452]: time="2025-07-06T23:47:36.262308530Z" level=info msg="RemoveContainer for \"cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964\""
Jul 6 23:47:36.265678 containerd[1452]: time="2025-07-06T23:47:36.265638589Z" level=info msg="RemoveContainer for \"cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964\" returns successfully"
Jul 6 23:47:36.265811 kubelet[2509]: I0706 23:47:36.265788 2509 scope.go:117] "RemoveContainer" containerID="6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3"
Jul 6 23:47:36.266823 containerd[1452]: time="2025-07-06T23:47:36.266793877Z" level=info msg="RemoveContainer for \"6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3\""
Jul 6 23:47:36.270251 containerd[1452]: time="2025-07-06T23:47:36.270209677Z" level=info msg="RemoveContainer for \"6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3\" returns successfully"
Jul 6 23:47:36.270417 kubelet[2509]: I0706 23:47:36.270374 2509 scope.go:117] "RemoveContainer" containerID="bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094"
Jul 6 23:47:36.270631 containerd[1452]: time="2025-07-06T23:47:36.270591340Z" level=error msg="ContainerStatus for \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\": not found"
Jul 6 23:47:36.270765 kubelet[2509]: E0706 23:47:36.270726 2509 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\": not found" containerID="bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094"
Jul 6 23:47:36.270807 kubelet[2509]: I0706 23:47:36.270760 2509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094"} err="failed to get container status \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcbacc644448528dce952c09719d2ced5ad4892e1926cc454747f1fa44def094\": not found"
Jul 6 23:47:36.270807 kubelet[2509]: I0706 23:47:36.270781 2509 scope.go:117] "RemoveContainer" containerID="577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791"
Jul 6 23:47:36.271031 containerd[1452]: time="2025-07-06T23:47:36.270995314Z" level=error msg="ContainerStatus for \"577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791\": not found"
Jul 6 23:47:36.271151 kubelet[2509]: E0706 23:47:36.271130 2509 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791\": not found" containerID="577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791"
Jul 6 23:47:36.271204 kubelet[2509]: I0706 23:47:36.271153 2509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791"} err="failed to get container status \"577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791\": rpc error: code = NotFound desc = an error occurred when try to find container \"577754d6b5016aa7ac1020455520f2aecbc288a4c10df550665bb2c2225ed791\": not found"
Jul 6 23:47:36.271204 kubelet[2509]: I0706 23:47:36.271169 2509 scope.go:117] "RemoveContainer" containerID="f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566"
Jul 6 23:47:36.271360 containerd[1452]: time="2025-07-06T23:47:36.271327816Z" level=error msg="ContainerStatus for \"f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566\": not found"
Jul 6 23:47:36.271452 kubelet[2509]: E0706 23:47:36.271427 2509 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566\": not found" containerID="f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566"
Jul 6 23:47:36.271452 kubelet[2509]: I0706 23:47:36.271448 2509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566"} err="failed to get container status \"f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2b1ab829447ee4d2d7dec3da3c34493f1bffc48800f07437df1adc7bd862566\": not found"
Jul 6 23:47:36.271564 kubelet[2509]: I0706 23:47:36.271462 2509 scope.go:117] "RemoveContainer" containerID="cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964"
Jul 6 23:47:36.271645 containerd[1452]: time="2025-07-06T23:47:36.271604012Z" level=error msg="ContainerStatus for \"cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964\": not found"
Jul 6 23:47:36.271789 kubelet[2509]: E0706 23:47:36.271757 2509 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964\": not found" containerID="cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964"
Jul 6 23:47:36.271845 kubelet[2509]: I0706 23:47:36.271797 2509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964"} err="failed to get container status \"cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf98efff8120ddac07bdd0f751afb8e06855350c2d9bd2145d012de8fa5ab964\": not found"
Jul 6 23:47:36.271845 kubelet[2509]: I0706 23:47:36.271831 2509 scope.go:117] "RemoveContainer" containerID="6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3"
Jul 6 23:47:36.272064 containerd[1452]: time="2025-07-06T23:47:36.272031920Z" level=error msg="ContainerStatus for \"6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3\": not found"
Jul 6 23:47:36.272154 kubelet[2509]: E0706 23:47:36.272135 2509 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3\": not found" containerID="6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3"
Jul 6 23:47:36.272185 kubelet[2509]: I0706 23:47:36.272154 2509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3"} err="failed to get container status \"6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b48309d14773e53441770b9bd4b0733738a4ab401f7063e4c3a922c3089f3a3\": not found"
Jul 6 23:47:37.034073 kubelet[2509]: I0706 23:47:37.034015 2509 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeec81f1-9f8f-4aa1-86f4-45ef34453f42" path="/var/lib/kubelet/pods/aeec81f1-9f8f-4aa1-86f4-45ef34453f42/volumes"
Jul 6 23:47:37.034914 kubelet[2509]: I0706 23:47:37.034885 2509 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6202cf7-5f5c-40a7-af62-43824356eaef" path="/var/lib/kubelet/pods/e6202cf7-5f5c-40a7-af62-43824356eaef/volumes"
Jul 6 23:47:37.051087 sshd[4173]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:37.059037 systemd[1]: sshd@23-10.0.0.18:22-10.0.0.1:54764.service: Deactivated successfully.
Jul 6 23:47:37.060989 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:47:37.062650 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:47:37.071187 systemd[1]: Started sshd@24-10.0.0.18:22-10.0.0.1:54772.service - OpenSSH per-connection server daemon (10.0.0.1:54772).
Jul 6 23:47:37.072149 systemd-logind[1440]: Removed session 24.
Jul 6 23:47:37.106702 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 54772 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:37.108895 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:37.113664 systemd-logind[1440]: New session 25 of user core.
Jul 6 23:47:37.122075 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:47:37.653512 sshd[4339]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:37.668756 systemd[1]: sshd@24-10.0.0.18:22-10.0.0.1:54772.service: Deactivated successfully.
Jul 6 23:47:37.672596 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:47:37.676021 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:47:37.685519 systemd[1]: Started sshd@25-10.0.0.18:22-10.0.0.1:54776.service - OpenSSH per-connection server daemon (10.0.0.1:54776).
Jul 6 23:47:37.693180 systemd-logind[1440]: Removed session 25.
Jul 6 23:47:37.702182 systemd[1]: Created slice kubepods-burstable-pod4aa422a0_39bd_48e7_a0bd_39fee43fb647.slice - libcontainer container kubepods-burstable-pod4aa422a0_39bd_48e7_a0bd_39fee43fb647.slice.
Jul 6 23:47:37.716390 kubelet[2509]: I0706 23:47:37.716344 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4aa422a0-39bd-48e7-a0bd-39fee43fb647-hostproc\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716390 kubelet[2509]: I0706 23:47:37.716384 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4aa422a0-39bd-48e7-a0bd-39fee43fb647-cni-path\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716596 kubelet[2509]: I0706 23:47:37.716433 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4aa422a0-39bd-48e7-a0bd-39fee43fb647-etc-cni-netd\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716596 kubelet[2509]: I0706 23:47:37.716463 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4aa422a0-39bd-48e7-a0bd-39fee43fb647-host-proc-sys-kernel\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716596 kubelet[2509]: I0706 23:47:37.716495 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4aa422a0-39bd-48e7-a0bd-39fee43fb647-cilium-cgroup\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716596 kubelet[2509]: I0706 23:47:37.716513 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4aa422a0-39bd-48e7-a0bd-39fee43fb647-xtables-lock\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716596 kubelet[2509]: I0706 23:47:37.716539 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4aa422a0-39bd-48e7-a0bd-39fee43fb647-cilium-config-path\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716596 kubelet[2509]: I0706 23:47:37.716555 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4aa422a0-39bd-48e7-a0bd-39fee43fb647-hubble-tls\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716735 kubelet[2509]: I0706 23:47:37.716568 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4aa422a0-39bd-48e7-a0bd-39fee43fb647-bpf-maps\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716735 kubelet[2509]: I0706 23:47:37.716584 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4aa422a0-39bd-48e7-a0bd-39fee43fb647-cilium-ipsec-secrets\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716735 kubelet[2509]: I0706 23:47:37.716602 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4aa422a0-39bd-48e7-a0bd-39fee43fb647-host-proc-sys-net\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716735 kubelet[2509]: I0706 23:47:37.716619 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4aa422a0-39bd-48e7-a0bd-39fee43fb647-cilium-run\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716735 kubelet[2509]: I0706 23:47:37.716647 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4aa422a0-39bd-48e7-a0bd-39fee43fb647-clustermesh-secrets\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716844 kubelet[2509]: I0706 23:47:37.716666 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxmdk\" (UniqueName: \"kubernetes.io/projected/4aa422a0-39bd-48e7-a0bd-39fee43fb647-kube-api-access-rxmdk\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.716844 kubelet[2509]: I0706 23:47:37.716682 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4aa422a0-39bd-48e7-a0bd-39fee43fb647-lib-modules\") pod \"cilium-dj46g\" (UID: \"4aa422a0-39bd-48e7-a0bd-39fee43fb647\") " pod="kube-system/cilium-dj46g"
Jul 6 23:47:37.719598 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 54776 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:37.721327 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:37.725994 systemd-logind[1440]: New session 26 of user core.
Jul 6 23:47:37.735095 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 6 23:47:37.785838 sshd[4352]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:37.797396 systemd[1]: sshd@25-10.0.0.18:22-10.0.0.1:54776.service: Deactivated successfully.
Jul 6 23:47:37.799654 systemd[1]: session-26.scope: Deactivated successfully.
Jul 6 23:47:37.801622 systemd-logind[1440]: Session 26 logged out. Waiting for processes to exit.
Jul 6 23:47:37.809214 systemd[1]: Started sshd@26-10.0.0.18:22-10.0.0.1:54778.service - OpenSSH per-connection server daemon (10.0.0.1:54778).
Jul 6 23:47:37.810180 systemd-logind[1440]: Removed session 26.
Jul 6 23:47:37.846806 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 54778 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:47:37.848496 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:37.852221 systemd-logind[1440]: New session 27 of user core.
Jul 6 23:47:37.860034 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 6 23:47:38.007251 kubelet[2509]: E0706 23:47:38.007083 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:38.008414 containerd[1452]: time="2025-07-06T23:47:38.007785784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dj46g,Uid:4aa422a0-39bd-48e7-a0bd-39fee43fb647,Namespace:kube-system,Attempt:0,}"
Jul 6 23:47:38.030515 containerd[1452]: time="2025-07-06T23:47:38.029713700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:47:38.030515 containerd[1452]: time="2025-07-06T23:47:38.030452183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:47:38.030515 containerd[1452]: time="2025-07-06T23:47:38.030466851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:47:38.030725 containerd[1452]: time="2025-07-06T23:47:38.030553884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:47:38.030792 kubelet[2509]: E0706 23:47:38.030756 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:38.056065 systemd[1]: Started cri-containerd-5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a.scope - libcontainer container 5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a.
Jul 6 23:47:38.081170 containerd[1452]: time="2025-07-06T23:47:38.081112940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dj46g,Uid:4aa422a0-39bd-48e7-a0bd-39fee43fb647,Namespace:kube-system,Attempt:0,} returns sandbox id \"5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a\""
Jul 6 23:47:38.082103 kubelet[2509]: E0706 23:47:38.082073 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:38.087460 containerd[1452]: time="2025-07-06T23:47:38.087334149Z" level=info msg="CreateContainer within sandbox \"5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 6 23:47:38.098726 containerd[1452]: time="2025-07-06T23:47:38.098670575Z" level=info msg="CreateContainer within sandbox \"5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb578555b3ed210fbf7fbcdc1e517a7c68c07b7f71780c0c268ee75e2663d2fb\""
Jul 6 23:47:38.099158 containerd[1452]: time="2025-07-06T23:47:38.099133721Z" level=info msg="StartContainer for \"cb578555b3ed210fbf7fbcdc1e517a7c68c07b7f71780c0c268ee75e2663d2fb\""
Jul 6 23:47:38.126064 systemd[1]: Started cri-containerd-cb578555b3ed210fbf7fbcdc1e517a7c68c07b7f71780c0c268ee75e2663d2fb.scope - libcontainer container cb578555b3ed210fbf7fbcdc1e517a7c68c07b7f71780c0c268ee75e2663d2fb.
Jul 6 23:47:38.152682 containerd[1452]: time="2025-07-06T23:47:38.152634526Z" level=info msg="StartContainer for \"cb578555b3ed210fbf7fbcdc1e517a7c68c07b7f71780c0c268ee75e2663d2fb\" returns successfully"
Jul 6 23:47:38.164296 systemd[1]: cri-containerd-cb578555b3ed210fbf7fbcdc1e517a7c68c07b7f71780c0c268ee75e2663d2fb.scope: Deactivated successfully.
Jul 6 23:47:38.197652 containerd[1452]: time="2025-07-06T23:47:38.197582054Z" level=info msg="shim disconnected" id=cb578555b3ed210fbf7fbcdc1e517a7c68c07b7f71780c0c268ee75e2663d2fb namespace=k8s.io
Jul 6 23:47:38.197652 containerd[1452]: time="2025-07-06T23:47:38.197639572Z" level=warning msg="cleaning up after shim disconnected" id=cb578555b3ed210fbf7fbcdc1e517a7c68c07b7f71780c0c268ee75e2663d2fb namespace=k8s.io
Jul 6 23:47:38.197652 containerd[1452]: time="2025-07-06T23:47:38.197649120Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:47:38.237565 kubelet[2509]: E0706 23:47:38.237521 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:38.245240 containerd[1452]: time="2025-07-06T23:47:38.245028361Z" level=info msg="CreateContainer within sandbox \"5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 6 23:47:38.261757 containerd[1452]: time="2025-07-06T23:47:38.261641678Z" level=info msg="CreateContainer within sandbox \"5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b91a969fab5f694cae77e2c66406cb53f8d303017d0f98fbe332033c094d384d\""
Jul 6 23:47:38.262247 containerd[1452]: time="2025-07-06T23:47:38.262206215Z" level=info msg="StartContainer for \"b91a969fab5f694cae77e2c66406cb53f8d303017d0f98fbe332033c094d384d\""
Jul 6 23:47:38.293139 systemd[1]: Started cri-containerd-b91a969fab5f694cae77e2c66406cb53f8d303017d0f98fbe332033c094d384d.scope - libcontainer container b91a969fab5f694cae77e2c66406cb53f8d303017d0f98fbe332033c094d384d.
Jul 6 23:47:38.320398 containerd[1452]: time="2025-07-06T23:47:38.320331608Z" level=info msg="StartContainer for \"b91a969fab5f694cae77e2c66406cb53f8d303017d0f98fbe332033c094d384d\" returns successfully"
Jul 6 23:47:38.328108 systemd[1]: cri-containerd-b91a969fab5f694cae77e2c66406cb53f8d303017d0f98fbe332033c094d384d.scope: Deactivated successfully.
Jul 6 23:47:38.352567 containerd[1452]: time="2025-07-06T23:47:38.352481555Z" level=info msg="shim disconnected" id=b91a969fab5f694cae77e2c66406cb53f8d303017d0f98fbe332033c094d384d namespace=k8s.io
Jul 6 23:47:38.352567 containerd[1452]: time="2025-07-06T23:47:38.352555273Z" level=warning msg="cleaning up after shim disconnected" id=b91a969fab5f694cae77e2c66406cb53f8d303017d0f98fbe332033c094d384d namespace=k8s.io
Jul 6 23:47:38.352567 containerd[1452]: time="2025-07-06T23:47:38.352567136Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:47:38.823521 systemd[1]: run-containerd-runc-k8s.io-5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a-runc.MpD3TP.mount: Deactivated successfully.
Jul 6 23:47:39.100368 kubelet[2509]: E0706 23:47:39.100201 2509 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:47:39.240949 kubelet[2509]: E0706 23:47:39.240887 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:39.248196 containerd[1452]: time="2025-07-06T23:47:39.248126908Z" level=info msg="CreateContainer within sandbox \"5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:47:39.264122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1728221311.mount: Deactivated successfully.
Jul 6 23:47:39.266070 containerd[1452]: time="2025-07-06T23:47:39.266022025Z" level=info msg="CreateContainer within sandbox \"5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d8b8edc4b46619869972091b6413f867105e1e983776781827db607b906378ae\""
Jul 6 23:47:39.266765 containerd[1452]: time="2025-07-06T23:47:39.266717788Z" level=info msg="StartContainer for \"d8b8edc4b46619869972091b6413f867105e1e983776781827db607b906378ae\""
Jul 6 23:47:39.297222 systemd[1]: Started cri-containerd-d8b8edc4b46619869972091b6413f867105e1e983776781827db607b906378ae.scope - libcontainer container d8b8edc4b46619869972091b6413f867105e1e983776781827db607b906378ae.
Jul 6 23:47:39.369489 containerd[1452]: time="2025-07-06T23:47:39.369365110Z" level=info msg="StartContainer for \"d8b8edc4b46619869972091b6413f867105e1e983776781827db607b906378ae\" returns successfully"
Jul 6 23:47:39.371076 systemd[1]: cri-containerd-d8b8edc4b46619869972091b6413f867105e1e983776781827db607b906378ae.scope: Deactivated successfully.
Jul 6 23:47:39.399064 containerd[1452]: time="2025-07-06T23:47:39.398995204Z" level=info msg="shim disconnected" id=d8b8edc4b46619869972091b6413f867105e1e983776781827db607b906378ae namespace=k8s.io
Jul 6 23:47:39.399064 containerd[1452]: time="2025-07-06T23:47:39.399054425Z" level=warning msg="cleaning up after shim disconnected" id=d8b8edc4b46619869972091b6413f867105e1e983776781827db607b906378ae namespace=k8s.io
Jul 6 23:47:39.399064 containerd[1452]: time="2025-07-06T23:47:39.399065756Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:47:39.823538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8b8edc4b46619869972091b6413f867105e1e983776781827db607b906378ae-rootfs.mount: Deactivated successfully.
Jul 6 23:47:40.245332 kubelet[2509]: E0706 23:47:40.245182 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:40.252527 containerd[1452]: time="2025-07-06T23:47:40.252431813Z" level=info msg="CreateContainer within sandbox \"5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:47:40.266798 containerd[1452]: time="2025-07-06T23:47:40.266740797Z" level=info msg="CreateContainer within sandbox \"5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"860d5d9370a8a42f24386104dc19318fc7e998de3fd1ad52aa4de7627ac1bf7b\""
Jul 6 23:47:40.267336 containerd[1452]: time="2025-07-06T23:47:40.267313181Z" level=info msg="StartContainer for \"860d5d9370a8a42f24386104dc19318fc7e998de3fd1ad52aa4de7627ac1bf7b\""
Jul 6 23:47:40.296063 systemd[1]: Started cri-containerd-860d5d9370a8a42f24386104dc19318fc7e998de3fd1ad52aa4de7627ac1bf7b.scope - libcontainer container 860d5d9370a8a42f24386104dc19318fc7e998de3fd1ad52aa4de7627ac1bf7b.
Jul 6 23:47:40.323214 systemd[1]: cri-containerd-860d5d9370a8a42f24386104dc19318fc7e998de3fd1ad52aa4de7627ac1bf7b.scope: Deactivated successfully.
Jul 6 23:47:40.325895 containerd[1452]: time="2025-07-06T23:47:40.325850991Z" level=info msg="StartContainer for \"860d5d9370a8a42f24386104dc19318fc7e998de3fd1ad52aa4de7627ac1bf7b\" returns successfully"
Jul 6 23:47:40.348646 containerd[1452]: time="2025-07-06T23:47:40.348577503Z" level=info msg="shim disconnected" id=860d5d9370a8a42f24386104dc19318fc7e998de3fd1ad52aa4de7627ac1bf7b namespace=k8s.io
Jul 6 23:47:40.348646 containerd[1452]: time="2025-07-06T23:47:40.348643236Z" level=warning msg="cleaning up after shim disconnected" id=860d5d9370a8a42f24386104dc19318fc7e998de3fd1ad52aa4de7627ac1bf7b namespace=k8s.io
Jul 6 23:47:40.348646 containerd[1452]: time="2025-07-06T23:47:40.348653094Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:47:40.662075 kubelet[2509]: I0706 23:47:40.662007 2509 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:47:40Z","lastTransitionTime":"2025-07-06T23:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 6 23:47:40.823647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-860d5d9370a8a42f24386104dc19318fc7e998de3fd1ad52aa4de7627ac1bf7b-rootfs.mount: Deactivated successfully.
Jul 6 23:47:41.248882 kubelet[2509]: E0706 23:47:41.248846 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:41.253503 containerd[1452]: time="2025-07-06T23:47:41.253461978Z" level=info msg="CreateContainer within sandbox \"5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:47:41.283614 containerd[1452]: time="2025-07-06T23:47:41.283545830Z" level=info msg="CreateContainer within sandbox \"5389bc55b321b6d038084d4963bae5675e547b3f245fdea804fdd3aa0a62505a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0dde90c13fa1892a2ed45383c935b9624a76c83beea4d193d607b5228bed8df9\""
Jul 6 23:47:41.284989 containerd[1452]: time="2025-07-06T23:47:41.284633333Z" level=info msg="StartContainer for \"0dde90c13fa1892a2ed45383c935b9624a76c83beea4d193d607b5228bed8df9\""
Jul 6 23:47:41.324105 systemd[1]: Started cri-containerd-0dde90c13fa1892a2ed45383c935b9624a76c83beea4d193d607b5228bed8df9.scope - libcontainer container 0dde90c13fa1892a2ed45383c935b9624a76c83beea4d193d607b5228bed8df9.
Jul 6 23:47:41.357578 containerd[1452]: time="2025-07-06T23:47:41.357512959Z" level=info msg="StartContainer for \"0dde90c13fa1892a2ed45383c935b9624a76c83beea4d193d607b5228bed8df9\" returns successfully"
Jul 6 23:47:41.792980 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 6 23:47:42.253509 kubelet[2509]: E0706 23:47:42.253351 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:42.266514 kubelet[2509]: I0706 23:47:42.266432 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dj46g" podStartSLOduration=5.2664121999999995 podStartE2EDuration="5.2664122s" podCreationTimestamp="2025-07-06 23:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:47:42.265753392 +0000 UTC m=+83.338883769" watchObservedRunningTime="2025-07-06 23:47:42.2664122 +0000 UTC m=+83.339542558"
Jul 6 23:47:44.008677 kubelet[2509]: E0706 23:47:44.008602 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:44.953393 systemd-networkd[1401]: lxc_health: Link UP
Jul 6 23:47:44.962654 systemd-networkd[1401]: lxc_health: Gained carrier
Jul 6 23:47:45.032760 kubelet[2509]: E0706 23:47:45.032586 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:45.033239 kubelet[2509]: E0706 23:47:45.033158 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:46.009137 kubelet[2509]: E0706 23:47:46.009073 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:46.261905 kubelet[2509]: E0706 23:47:46.261766 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:46.716383 systemd-networkd[1401]: lxc_health: Gained IPv6LL
Jul 6 23:47:47.263962 kubelet[2509]: E0706 23:47:47.263908 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:49.033746 kubelet[2509]: E0706 23:47:49.033694 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:47:50.634018 sshd[4360]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:50.638370 systemd[1]: sshd@26-10.0.0.18:22-10.0.0.1:54778.service: Deactivated successfully.
Jul 6 23:47:50.640464 systemd[1]: session-27.scope: Deactivated successfully.
Jul 6 23:47:50.641130 systemd-logind[1440]: Session 27 logged out. Waiting for processes to exit.
Jul 6 23:47:50.642199 systemd-logind[1440]: Removed session 27.